<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Computational Science &amp; Engineering Doctoral Theses (CSE PhD &amp; Dept-CSE PhD)</title>
<link>https://hdl.handle.net/1721.1/145728</link>
<description/>
<pubDate>Fri, 03 Apr 2026 17:44:21 GMT</pubDate>
<dc:date>2026-04-03T17:44:21Z</dc:date>
<item>
<title>Optical Property Prediction and Molecular Discovery through Multi-Fidelity Deep Learning and Computational Chemistry</title>
<link>https://hdl.handle.net/1721.1/155385</link>
<description>Optical Property Prediction and Molecular Discovery through Multi-Fidelity Deep Learning and Computational Chemistry
Greenman, Kevin P.
Optical properties are crucial for the design of molecules for numerous applications, including display technologies and biological imaging. The accurate prediction of these properties has been the subject of decades of work in both physics-based approaches and statistical modeling. Recently, large datasets of both computed and experimental optical properties have become available, along with powerful deep learning approaches capable of learning meaningful representations from these large datasets. This thesis presents new approaches for predicting optical properties by fusing experimental and computational data in multi-fidelity models that achieve greater accuracy and generalizability than previous methods. Additionally, it conducts a thorough benchmark of various strategies for handling multi-fidelity data to inform the modeling choices of future practitioners working with optical properties and beyond. Although optical property data have become more available in recent years, the near-infrared (NIR) region of the spectrum remains data-sparse despite its promise in many applications. This thesis demonstrates the shortcomings of existing methods for predicting optical properties in this region of chemical space and recommends best practices for future research in this area. Finally, this thesis highlights the successful use of data-driven optical property prediction for the discovery of novel molecules for specific applications.
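&#13;
One common multi-fidelity strategy consistent with the data fusion described above is delta-learning: fit a model on abundant low-fidelity (computed) labels, then learn a correction toward the scarce high-fidelity (experimental) labels. The Python sketch below is a minimal, hedged illustration of that idea; the regressor choice, function names, and data variables are assumptions for illustration, not the architecture used in this thesis.&#13;
&#13;
# Hedged sketch: delta-learning for multi-fidelity property prediction.
# X_lo/y_lo are computed (low-fidelity) data; X_hi/y_hi are experimental.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_multifidelity(X_lo, y_lo, X_hi, y_hi):
    # Step 1: learn the abundant computed labels.
    base = RandomForestRegressor(n_estimators=200, random_state=0)
    base.fit(X_lo, y_lo)
    # Step 2: learn a correction from computed to experimental labels
    # on the small high-fidelity set.
    delta = RandomForestRegressor(n_estimators=200, random_state=0)
    delta.fit(X_hi, y_hi - base.predict(X_hi))
    # The fused predictor adds the learned correction to the base model.
    def predict(X):
        return base.predict(X) + delta.predict(X)
    return predict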
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/155385</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging the Linear Response Theory in Sensitivity Analysis of Chaotic Dynamical Systems and Turbulent Flows</title>
<link>https://hdl.handle.net/1721.1/151245</link>
<description>Leveraging the Linear Response Theory in Sensitivity Analysis of Chaotic Dynamical Systems and Turbulent Flows
Sliwiak, Adam Andrzej
The linear response theory (LRT) provides a set of powerful mathematical tools for the analysis of a system's response to controllable perturbations. In applied sciences, LRT is particularly useful in approximating parametric derivatives of observables induced by a dynamical system. These derivatives, usually referred to as sensitivities, are critical components of optimization, control, numerical error estimation, risk assessment, and other advanced computational methodologies. Efficient computation of sensitivities in the presence of chaos has been a major and still unresolved challenge in the field. While chaotic systems are prevalent in several fields of science and engineering, including turbulence and climate dynamics, conventional methods for sensitivity analysis are doomed to failure by the butterfly effect. This inherent property of chaos means that any pair of infinitesimally close trajectories separates exponentially fast, triggering serious numerical issues.&#13;
&#13;
A promising new method, known as the space-split sensitivity (S3) method, addresses the adverse butterfly effect and has several appealing features. S3 stems directly from Ruelle's closed-form linear response formula involving Lebesgue integrals of input-output time correlations. Its linearly separable structure, combined with the chain rule on smooth manifolds, enables the derivation of ergodic-averaging schemes for sensitivities that rigorously converge in uniformly hyperbolic systems. Thus, S3 can be viewed as an LRT-based Monte Carlo method that averages data collected through regularized tangent equations along a random orbit. Despite these recent theoretical advancements, S3 in its current form applies only to systems with one-dimensional unstable manifolds, which makes it unusable for real-world models.&#13;
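&#13;
For orientation, Ruelle's closed-form linear response formula for a uniformly hyperbolic map \varphi with SRB measure \mu and observable J can be stated as follows (this is the standard form from the literature, reproduced here for context rather than quoted from the thesis):&#13;
&#13;
    \frac{d}{ds}\langle J \rangle = \sum_{n=0}^{\infty} \int \nabla\left( J \circ \varphi^{n} \right) \cdot \chi \, d\mu
&#13;
where \chi is the vector field of the parameter perturbation. The growth of \nabla(J \circ \varphi^{n}) along unstable directions is what makes direct evaluation ill-conditioned and motivates the space-splitting described above.&#13;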
&#13;
In this thesis, we extend the concept of space-splitting to systems of arbitrary dimension, develop generic linear response algorithms for hyperbolic dynamical systems, and demonstrate their performance using common physical models. In particular, this work offers three major contributions to the field of nonlinear dynamics. First, we propose a novel algorithm for differentiating ergodic measures induced by chaotic systems. These quantities are integral components of the S3 method and arise from the partial integration of Ruelle's ill-conditioned expression. Our algorithm uses the concept of quantile functions to parameterize multi-dimensional unstable manifolds and computes the time evolution of measure gradients in a recursive manner. We also demonstrate that the measure gradients can be utilized as indicators of the differentiability of statistics, and that they might dramatically reduce the statistical-averaging error in the case of highly oscillatory observables. Second, we blend the proposed manifold description, the algorithm for measure gradients, and a linear decomposition of the input perturbation to derive a complete set of tangent equations for all by-products of the regularization process. We prove that all the recursive equations converge exponentially fast in uniformly hyperbolic systems, regardless of the choice of initial conditions. This result is used to assemble efficient one-step Monte Carlo algorithms applicable to high-dimensional discrete and continuous-time systems. Third, we argue that the effect of the measure gradient could be negligible compared to the total linear response if the model is statistically homogeneous. Consequently, one could accurately approximate the sought-after sensitivity by evolving in time a single inhomogeneous tangent that is orthogonal to the unstable subspace everywhere along an orbit. This drastically reduces the computational complexity of the full algorithm.&#13;
&#13;
Every major step of the theoretical and algorithmic developments is corroborated by several numerical examples. These examples also highlight aspects of the underlying dynamical systems (e.g., ergodic measure distributions, Lyapunov spectra, and spatiotemporal structures of tangent solutions) that are relevant in the context of sensitivity analysis. This thesis considers different classes of chaotic systems, including low-dimensional discrete systems (e.g., the cusp map, baker's map, and multi-dimensional solenoid map), ordinary differential equations (Lorenz oscillators), and partial differential equations (the Kuramoto-Sivashinsky and 3D Navier-Stokes systems).
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151245</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Feedback Effects of Transient Nuclear Systems Using Monte Carlo</title>
<link>https://hdl.handle.net/1721.1/151199</link>
<description>Modeling Feedback Effects of Transient Nuclear Systems Using Monte Carlo
Kreher, Miriam A.
Monte Carlo neutron transport is the gold standard for accurate neutronics simulation of nuclear reactors in steady state because each term of the neutron transport equation can be tallied directly using continuous-energy cross sections, without approximations in energy, angle, or geometry. However, the time-dependent equation includes time derivatives of the flux and of the delayed neutron precursors, which are difficult to tally. While it is straightforward to model delayed neutron precursors explicitly, and thus solve the time-dependent problem with direct Monte Carlo, this approach is so costly that the practical length of transient calculations is limited to about 1 second. To solve longer problems, a high-order/low-order approach was adopted that uses the omega method to approximate the time derivatives as frequencies. These frequencies are spatially distributed and provided by a low-order Time-Dependent Coarse Mesh Finite Difference diffusion solver. While this scheme had previously been applied to prescribed transients, thermal feedback is now incorporated to provide a fully self-propagating Monte Carlo transient multiphysics solver that can be applied to transients several seconds long.&#13;
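&#13;
The omega method mentioned above rests on a frequency ansatz: the flux is assumed locally exponential in time, so that its time derivative becomes a multiplicative frequency. In generic form (the common statement of the method, not a verbatim equation from this thesis):&#13;
&#13;
    \phi(\mathbf{r}, E, t) \approx \phi_{0}(\mathbf{r}, E) \, e^{\omega(\mathbf{r}) t} \quad \Rightarrow \quad \frac{\partial \phi}{\partial t} = \omega(\mathbf{r}) \, \phi(\mathbf{r}, E, t)
&#13;
The high-order Monte Carlo solve then needs only the spatially distributed frequencies \omega(\mathbf{r}) supplied by the low-order diffusion solver, rather than a direct tally of the time derivative.&#13;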
&#13;
Several recently developed techniques are used in the implementation of the proposed coupling approaches. Firstly, underrelaxed Monte Carlo, which is a steady-state technique that stabilizes the search for temperature distributions, is applied to find initial conditions. Secondly, tally derivatives are a Monte Carlo perturbation technique that can identify how a tally will change with respect to a small change in the system. Test problems of varying complexity are carried out in flow-initiated transients to show the versatility of these methods.&#13;
&#13;
Overall, this multi-level, multiphysics transient solver provides a bridge between high-fidelity Monte Carlo neutronics and the fast multi-group diffusion methods that are currently used in safety analysis.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/151199</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Mixed Multinomial Logit Models</title>
<link>https://hdl.handle.net/1721.1/150428</link>
<description>Learning Mixed Multinomial Logit Models
Hu, Yiqun
The multinomial logit (MNL) model is widely used to predict the probabilities of different outcomes. However, the standard MNL model suffers from several issues, including population heterogeneity, the restrictive independence of irrelevant alternatives (IIA) assumption, and insufficient model capacity. To alleviate these issues, mixed multinomial logit (MMNL) models were introduced. MMNL models are highly flexible: McFadden and Train [2000] showed that they can approximate any random-utility-based discrete choice model to an arbitrary degree of accuracy under appropriate assumptions. In addition, MMNL removes other limitations of standard MNL models, including lifting the IIA assumption, allowing correlation in unobserved utility factors over time, and, most importantly, reducing the chance of model misspecification when modeling real-world applications where the data composition is often heterogeneous.&#13;
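&#13;
For reference, the standard MNL choice probability and its finite-mixture MMNL generalization can be written as follows (these are the textbook forms, with notation chosen here rather than taken from the thesis):&#13;
&#13;
    P_{MNL}(j \mid x; \beta) = \frac{\exp(\beta^{\top} x_{j})}{\sum_{k} \exp(\beta^{\top} x_{k})}, \qquad
    P_{MMNL}(j \mid x) = \sum_{m=1}^{K} \alpha_{m} \, P_{MNL}(j \mid x; \beta_{m}), \quad \alpha_{m} \ge 0, \; \sum_{m} \alpha_{m} = 1
&#13;
Each mixture component has its own taste vector \beta_{m}, which is how MMNL accommodates population heterogeneity and escapes the IIA restriction of a single MNL.&#13;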
&#13;
Despite its importance and versatility, the literature on the learning theory of MMNL is limited, and learning MMNL models remains an open research topic. In this thesis, we tackle this learning problem from two different perspectives. First, inspired by recent work on Gaussian mixture models (GMMs), we explore the polynomial learnability of MMNL models from a theoretical point of view. Next, we present an algorithm that is designed to be more broadly applicable and utilizes the rich sources of data available in the modern digital era, while still yielding desirable statistical properties for the estimators.&#13;
&#13;
Chapter 2 studies the polynomial learnability of MMNL models with a general number K of mixture components. This work extends current results that apply only to 2-MNL models. We analyze the existence of ϵ-close estimates using tools from abstract algebra, and we show that there exists an algorithm that can learn a general K-MNL model, if identifiable, with probability at least 1−δ, using a number of data samples and a number of operations that are polynomial in 1/ϵ and 1/δ, under some reasonable assumptions.&#13;
&#13;
In Chapter 3, motivated by the Frank-Wolfe (FW) algorithm, we propose a framework that learns both the mixture weights and the component-specific logit parameters, with provable convergence guarantees for an arbitrary number of mixtures. Our algorithm utilizes historical choice data to generate a set of candidate choice probability vectors, each ϵ-close to the ground truth with high probability. The convex hull of this set forms a shrunken feasible region with desirable properties for the linear subproblems in FW, which subsequently enables independent parameter estimation within each mixture and, in turn, leads to convergence of the mixture weights. This framework also resolves the issue of unbounded parameter estimates present in the original FW approach. Complexity analysis shows that only a polynomial number of samples is required for each candidate in the target population.&#13;
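&#13;
The FW iteration underlying this framework alternates a linear minimization over the feasible region with a convex-combination update; schematically (a generic statement of Frank-Wolfe, not the thesis's specialized variant):&#13;
&#13;
    s_{t} \in \arg\min_{s \in \mathcal{C}} \; \langle \nabla f(x_{t}), \, s \rangle, \qquad x_{t+1} = (1 - \gamma_{t}) \, x_{t} + \gamma_{t} \, s_{t}
&#13;
Here \mathcal{C} would be the convex hull of the candidate choice probability vectors, so each linear subproblem selects a candidate (a mixture component) and the step sizes \gamma_{t} accumulate into the estimated mixture weights.&#13;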
&#13;
Extensive numerical experiments are conducted in Chapter 4, including both simulation and case studies on the well-known Nielsen Consumer Panel Data, to demonstrate the effectiveness of recovering the true model parameters and/or learning realistic component-level parameters, as compared to the original FW framework.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150428</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical and Computational Modeling of Injection-induced Seismicity</title>
<link>https://hdl.handle.net/1721.1/150113</link>
<description>Mathematical and Computational Modeling of Injection-induced Seismicity
Alghannam, Maryam
It has long been recognized that pumping fluids into or out of the Earth has the potential to cause earthquakes. Some of the earliest field evidence dates to the 1960s, when earthquakes were turned on and off by water injection in Rangely, Colorado. More recently, induced seismicity has been reported worldwide in connection with many subsurface technologies, including wastewater disposal, natural gas storage, enhanced geothermal systems, and hydraulic fracturing. As a result, there has been growing public concern around the world about the potential seismic hazard and environmental impact of subsurface energy technologies. Understanding the physical mechanisms that lead to induced seismicity is essential to efforts to mitigate the risk associated with subsurface operations. As a first step in this thesis, we develop a spring-poroslider model of frictional slip as an analogue for induced seismicity, and we analyze the conditions for the emergence of stick-slip frictional instability, the mechanism for earthquakes, by carrying out a linear stability analysis and nonlinear simulations. We find that the likelihood of triggering earthquakes depends largely on the rate of increase in pore pressure rather than on its magnitude. Thus, the model explains the common observation that abrupt increases in injection rate increase the seismic risk. Second, we perform an energy analysis using the same spring-poroslider model to shed light on the partitioning of the released energy into frictional and radiated energy, since the latter is associated with the overall size of the earthquake and its potential for damage to man-made structures. Two key elements of the analysis are: (1) incorporating seismic radiation within the model using a precisely defined viscous damper, and (2) partitioning the energy supplied by fluid injection into energy dissipated and stored in the fluid and the skeleton. The analysis shows how the rate of increase in pore pressure controls the radiated energy, stress drop, and total slip of the earthquake. Third, we study the effect of heterogeneity on the dynamics of frictional faults. In particular, we develop an objective (frame-indifferent) formulation of frictional contact between heterogeneous surfaces at a small scale, and we introduce the notion that friction is a function of the states of the two surfaces in contact, each state representing the roughness and microstructural details of its surface. We then conduct dynamic simulations of a spring-slider model and show that heterogeneous Coulomb friction alone is capable of reproducing the transitions in complex frictional behavior, from stable creep to regular earthquakes and slow slip. This thesis, as a whole, enhances our understanding of the mechanics of fluid-injection-induced earthquakes and suggests strategies to mitigate or minimize the seismic risk associated with a wide range of subsurface operations, from hydraulic fracturing and geothermal energy extraction to wastewater injection and geologic CO₂ sequestration.
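&#13;
For context, the classical rate-and-state spring-slider on which spring-poroslider models build can be written in the standard Dieterich-Ruina form with effective normal stress (the thesis's poroslider additionally prescribes the evolution of the pore pressure p, which is not restated here):&#13;
&#13;
    \tau = k (v_{0} t - \delta), \qquad \tau = (\sigma_{n} - p)\left[ \mu_{0} + a \ln\frac{v}{v_{0}} + b \ln\frac{v_{0}\theta}{d_{c}} \right], \qquad \dot{\theta} = 1 - \frac{v \theta}{d_{c}}
&#13;
where k is the spring stiffness, v = \dot{\delta} the slip rate, \theta the state variable, and d_{c} the characteristic slip distance. Stick-slip instability emerges when the stiffness drops below a critical value of order (b - a)(\sigma_{n} - p)/d_{c}, which is how perturbations to the pore pressure enter the stability analysis.&#13;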
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/150113</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three-dimensional Integral Boundary Layer Method for Viscous Aerodynamic Analysis</title>
<link>https://hdl.handle.net/1721.1/147502</link>
<description>Three-dimensional Integral Boundary Layer Method for Viscous Aerodynamic Analysis
Zhang, Shun
Viscous aerodynamic analysis is crucial for aircraft design in terms of understanding key performance metrics such as drag. However, despite advances in computational fluid dynamics (CFD) over the past few decades, a physics-based three-dimensional (3D) viscous analysis suitable for aircraft preliminary design remains a challenge. To that end, the integral boundary layer (IBL) method is a promising candidate, primarily for its superior computational efficiency and the aerodynamic design insight it offers, as evidenced by its success in existing two-dimensional (2D) applications. This thesis aims to develop a reliable, off-the-shelf 3D IBL method through contributions in both physical and numerical modeling.&#13;
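&#13;
To fix ideas, the 2D steady incompressible analogue of the IBL equations referenced above is the von Kármán momentum integral equation (standard form, stated here for context):&#13;
&#13;
    \frac{d\theta}{dx} + (H + 2) \frac{\theta}{U_{e}} \frac{dU_{e}}{dx} = \frac{C_{f}}{2}
&#13;
where \theta is the momentum thickness, H = \delta^{*}/\theta the shape factor, U_{e} the edge velocity, and C_{f} the skin-friction coefficient. The closure problem is to express H and C_{f} (and, in 3D, their cross-flow analogues) in terms of the integral variables, which is what the data-driven closure models developed in this thesis supply.&#13;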
&#13;
First, this thesis presents novel closure modeling strategies for 3D IBL and develops a new set of closure models, which were lacking in previous 3D IBL methods. Original 3D boundary layer datasets have been generated and form the basis for the data-driven closure modeling in this work. New neural network regression models with embedded constraints are proposed for constructing the 3D IBL closure and for identifying important parameters. Moreover, a model inversion formulation is devised for automated data-driven calibration of the turbulence shear stress transport model in the IBL context. Numerical studies demonstrate effective boundary layer modeling by the proposed closure models through comparison against higher-fidelity reference solutions and previous 3D IBL formulations.&#13;
&#13;
Second, a proper stabilization scheme is explored for the numerical discretization of the 3D IBL equations. On the one hand, difficulties are identified in deriving a rigorous stabilization formulation guided by conventional characteristic analysis. On the other hand, heuristically defined numerical stabilization schemes are revealed to be ill-posed in the numerical examples of this work. Instead, an intermediate fix to the numerical discretization is tailored to 3D IBL based on its underlying conservation principles. This fix is observed to produce well-behaved solutions in the numerical results throughout this thesis.&#13;
&#13;
Finally, this work develops the flow transition prediction capability that is missing from existing 3D IBL methods. Two numerical treatments of free transition are proposed and compared in detail: transition fitting and transition capturing. Owing to its implementation convenience, solution robustness, and interface resolution, the transition capturing approach is demonstrated to be effective on both 2D and 3D test cases, and hence is recommended for 3D IBL transition modeling.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147502</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scientific Machine Learning for Dynamical Systems: Theory and Applications to Fluid Flow and Ocean Ecosystem Modeling</title>
<link>https://hdl.handle.net/1721.1/147444</link>
<description>Scientific Machine Learning for Dynamical Systems: Theory and Applications to Fluid Flow and Ocean Ecosystem Modeling
Gupta, Abhinav
Complex dynamical models are used for prediction in many domains and are useful for mitigating many of the grand challenges facing humanity, such as climate change, food security, and sustainability. However, because of computational costs, the complexity of real-world phenomena, and limited understanding of the underlying processes involved, models are invariably approximate. The missing dynamics can manifest in the form of unresolved scales, inexact processes, or omitted variables; as the neglected and unresolved terms become important, the utility of model predictions diminishes. To address these challenges, we develop and apply novel scientific machine learning methods to learn unknown dynamics and to discover missing dynamics in models of dynamical systems.&#13;
&#13;
In our Bayesian approach, we develop an innovative stochastic partial differential equation (PDE)-based model learning theory and framework for high-dimensional coupled biogeochemical-physical models. The framework uses only sparse observations to learn rigorously both within and outside of the model space, as well as within the space of states and parameters. It employs Dynamically Orthogonal (DO) differential equations for adaptive reduced-order stochastic evolution, and the Gaussian Mixture Model-DO (GMM-DO) filter for simultaneous nonlinear inference in the augmented space of state variables, parameters, and model equations. A first novelty is the Bayesian learning among compatible and embedded candidate models, enabled by parameter estimation with special stochastic parameters. A second is the principled Bayesian discovery of new model functions, empowered by stochastic piecewise polynomial approximation theory. Our new methodology not only seamlessly and rigorously discriminates between existing models, but also extrapolates out of the space of models to discover new ones. In all cases, the results are generalizable and interpretable, and associated with probability distributions for all learned quantities. To showcase and quantify the learning performance, we complete both identical-twin and real-world data experiments in a multidisciplinary setting, for both filtering forward and smoothing backward in time. Motivated by active coastal ecosystems and fisheries, our identical-twin experiments consist of lower-trophic-level marine ecosystem and fish models in a two-dimensional idealized domain with flow past a seamount, representing upwelling due to a sill or strait. The experiments have varying levels of complexity due to different learning objectives and different flow and ecosystem dynamics. We find that even when the advection is chaotic or stochastic, arising from uncertain nonhydrostatic variable-density Boussinesq flows, our framework successfully discriminates among existing ecosystem candidate models and discovers new ones in the absence of prior knowledge, along with simultaneous state and parameter estimation. Our framework demonstrates interdisciplinary learning and, crucially, provides probability distributions for each learned quantity, including the learned model functions. In the real-world data experiments, we configure a one-dimensional coupled physical-biological-carbonate model to simulate the state conditions encountered by a research cruise in the Gulf of Maine region in August 2012. Using the observed ocean acidification data, we learn and discover a salinity-based forcing term for the total alkalinity (TA) equation to account for changes in TA due to the advection of water masses of different salinity caused by precipitation, riverine input, and other oceanographic processes. Simultaneously, we also estimate the multidisciplinary states and an uncertain parameter. Additionally, we develop new theory and techniques to improve uncertainty quantification using the DO methodology in multidisciplinary settings, so as to accurately handle stochastic boundary conditions, complex geometries, and the advection terms, and to augment the DO subspace as and when needed to capture the effects of the truncated modes accurately. Further, we discuss mutual-information-based observation planning to determine what, when, and where to measure to best achieve our learning objectives in resource-constrained environments.&#13;
&#13;
Next, motivated by the presence of inherent delays in real-world systems and by the Mori-Zwanzig formulation, we develop a novel delay-differential-equations-based deep learning framework to learn time-delayed closure parameterizations for missing dynamics. We find that our neural closure models increase the long-term predictive capabilities of existing models, and require smaller networks when using non-Markovian rather than Markovian closures. They efficiently represent truncated modes in reduced-order models, capture the effects of subgrid-scale processes, and augment the simplification of complex physical-biogeochemical models. To empower our neural closure models framework with generalizability and interpretability, we further develop neural partial delay differential equations theory that augments low-fidelity models in their original PDE forms with both Markovian and non-Markovian closure terms parameterized with neural networks (NNs). For the first time, the melding of low-fidelity models and NNs with time delays in the continuous spatiotemporal space, followed by numerical discretization, automatically provides interpretability and allows for generalizability to computational grid resolution, boundary conditions, initial conditions, and problem-specific parameters. We derive the adjoint equations in continuous form, thus allowing implementation of our new methods across differentiable and non-differentiable computational physics codes, different machine learning frameworks, and non-uniformly-spaced spatiotemporal training data. We also show that there exists an optimal amount of past information to incorporate, and provide a methodology to learn it from data during the training process. Computational advantages associated with our frameworks are analyzed and discussed. Applications of our new Bayesian learning and neural closure modeling are not limited to the fluid and ocean experiments shown, but extend to other fields such as control theory, robotics, pharmacokinetics-pharmacodynamics, chemistry, economics, and biological regulatory systems.
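&#13;
Schematically, the neural closure models described above augment a known low-fidelity right-hand side with Markovian and non-Markovian (time-delayed) terms parameterized by neural networks; in notation chosen here (a hedged, generic statement of the form, not the thesis's exact equations):&#13;
&#13;
    \frac{\partial u}{\partial t} = \mathcal{F}_{low}(u) + NN_{M}\big(u(t)\big) + NN_{nM}\big(u(s); \; t - \tau \le s \le t\big)
&#13;
where \mathcal{F}_{low} is the known low-fidelity model and the delay window \tau, i.e., how much past information to retain, is itself learned from data during training, as discussed above.&#13;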
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147444</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new way to do epidemic modeling</title>
<link>https://hdl.handle.net/1721.1/147364</link>
<description>A new way to do epidemic modeling
Abhijit Dandekar, Raj
Coronavirus disease 2019, caused by the virus SARS-CoV-2, led to a global pandemic with more than 500 million confirmed cases and approximately 6 million deaths across more than 50 countries. Since the outbreak of this pandemic, a number of modeling frameworks have been used to analyze various aspects of the pandemic, such as prediction of infected and recovered case counts, hospitalizations, travel restrictions, reopening, and non-pharmaceutical interventions. These frameworks can be divided broadly into the following categories: (a) compartment models, which are interpretable but cannot capture complex effects, and (b) agent-based models, which can capture varying degrees of complexity but are generally not interpretable.&#13;
&#13;
In this thesis, we introduce another category of epidemic modeling, rooted in Scientific Machine Learning (SciML). SciML combines the interpretability of ODEs with the expressivity of neural networks; we thus aim to retain the interpretability of compartment models along with the complexity of agent-based models (a minimal sketch of this idea appears at the end of this abstract). Using such a framework, we tackle a wide variety of application-based problems, including:&#13;
&#13;
• How quarantine control policies shaped the outbreak evolution in different countries around the world.&#13;
• The effect of early reopening in the Southern and West Central US states, and how it led to an exponential explosion of infected cases in the USA during June-August 2020.&#13;
• Virtual virus spread through Bluetooth tokens, and how it can be used to obtain real-time estimates of the pandemic.&#13;
&#13;
Towards the end, we analyze the robustness of the proposed SciML methodology and provide a general set of guidelines for training such models in other domains.
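&#13;
As an illustration of the SciML paradigm referenced earlier in this abstract, the sketch below embeds a small neural network inside a standard SIR compartment model as a learnable quarantine-strength term. It is a minimal, hedged example assuming a forward-Euler integrator, a tiny numpy network, and made-up rate constants; it is not the exact model or training setup of this thesis.&#13;
&#13;
# Minimal sketch: SIR dynamics with a neural-network quarantine term.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = 0.1 * rng.normal(size=(1, 8)), np.zeros(1)

def q_net(i, r):
    # Maps current infected/recovered fractions to a positive rate.
    h = np.tanh(W1 @ np.array([i, r]) + b1)
    return float(np.exp(W2 @ h + b2)[0])

def simulate(beta=0.3, gamma=0.1, days=160, dt=1.0):
    s, i, r = 0.99, 0.01, 0.0
    traj = []
    for _ in range(int(days / dt)):
        q = q_net(i, r)              # learnable departure from plain SIR
        ds = -beta * s * i
        di = beta * s * i - gamma * i - q * i
        dr = gamma * i + q * i       # quarantined cases folded into R here
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        traj.append((s, i, r))
    return np.array(traj)

# Training would fit W1, b1, W2, b2 so that simulate() matches reported
# case counts; the fitted quarantine term is then interpretable as a
# data-driven quarantine strength.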
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/147364</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel equivalence method for high fidelity hybrid stochastic-deterministic neutron transport simulations</title>
<link>https://hdl.handle.net/1721.1/145790</link>
<description>A novel equivalence method for high fidelity hybrid stochastic-deterministic neutron transport simulations
Giudicelli, Guillaume Louis.
With ever-increasing available computing resources, the traditional nuclear reactor physics computation schemes, which trade off spatial, angular, and energy resolution to achieve low-cost, highly tuned simulations, are being challenged. While existing schemes can reach few-percent accuracy for the current fleet of light water reactors thanks to a plethora of astute engineering approximations, they cannot provide sufficient accuracy for evolutionary reactor designs with highly heterogeneous geometries. The decades-long process to develop and qualify these simulation tools is also out of step with the fast-paced development of innovative new reactor designs seeking to address the climate crisis. Enabled by those computing resources, high-fidelity Monte Carlo methods can easily tackle challenging geometries, but they lack the computational and algorithmic efficiency of deterministic methods. However, they are increasingly being used for group cross section generation. Downstream, highly parallelized 3D deterministic transport can then use those cross sections to compute accurate solutions at the full-core scale. This hybrid computation scheme makes the most of both worlds to achieve fast and accurate reactor physics simulations. Among the few remaining approximations is neglecting the angular dependence of group cross sections, which leads to an over-estimation of resonant absorption rates, especially for the lower resonances of ²³⁸U. This thesis presents a novel equivalence method based on introducing discontinuities in the track angular fluxes, with a polar dependence of the discontinuity factors to preserve the polar dependence of the neutron currents and to remove the self-shielding error. This new method is systematically benchmarked against the state-of-the-art method, SuPerHomogenization, using three different approaches to obtaining equivalence factors: a same-scale iterative approach, a multiscale approach, and a single-step non-iterative approach. With the iterative and multiscale approaches, both methods show remarkable agreement with a reference Monte Carlo solution on a wide array of test cases, from 2D pin cells to 3D full-core calculations. The self-shielding error is eliminated, significantly improving the predictive capabilities of the scheme for the distribution of ²³⁸U absorption in the core. The single-step non-iterative approach to obtaining equivalence factors is also pursued and is shown to be adequate only with the novel discontinuity-factor-based method. This study is largely enabled by a significant optimization effort on the 3D deterministic neutron transport solver. By leveraging low-level parallelism through vectorization of the multi-group neutron transport equation, by increasing the memory locality of the method of characteristics implementation, and with a novel inter-domain communication algorithm enabling a near halving of memory requirements, the 3D full-core case can now be tackled with only 50 nodes of an industrial-sized computing cluster rather than the many thousands of nodes of a TOP20 supercomputer used previously. This thesis presents fully resolved solutions to the steady-state multi-group neutron transport equation for full-core 3D light water reactors, and these solutions are comparable to gold-standard continuous-energy Monte Carlo solutions.
Thesis: Ph. D. in Computational Nuclear Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September 2020; cataloged from the student-submitted PDF version of thesis; includes bibliographical references.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145790</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Monte Carlo framework for nuclear data uncertainty propagation via the windowed multipole formalism</title>
<link>https://hdl.handle.net/1721.1/145789</link>
<description>A Monte Carlo framework for nuclear data uncertainty propagation via the windowed multipole formalism
Alhajri, Abdulla (Abdulla Abdulaziz)
A new framework has been developed that calculates the uncertainty in computed quantities, such as k_eff, reactivity coefficients, multigroup cross sections, and reaction rate ratios, that arises due to uncertainties in the underlying nuclear data. This framework relies on first-order uncertainty analysis using sensitivity methods. The major innovation in the proposed framework is the use of the windowed multipole formalism for calculating the sensitivities. The windowed multipole formalism provides a natural, physics-inspired binning strategy for the sensitivity coefficients, while also aiding the statistical convergence of the calculated sensitivity tallies. Additionally, our framework improves on existing methods by fully accounting for temperature effects. The proposed method allows for identifying exactly the resonances and parameters that are driving the uncertainty, and thus provides guidance to nuclear data evaluators and experimenters on how to reduce the uncertainty in the most efficient manner. Calculating the uncertainty requires two key pieces of information: the windowed multipole sensitivity coefficients and the windowed multipole covariance matrix. A sensitivity coefficient calculation algorithm based on the CLUTCH-FM methodology was implemented in OpenMC. Several methods for obtaining the windowed multipole covariance matrix from the resonance parameter covariance matrix were explored, and ultimately an approach based on random sampling was selected. Along the way, an analytical benchmark was developed for the purpose of validating the framework as well as the implementation. This analytical benchmark consists of a solution to the forward and adjoint neutron transport equations. The windowed multipole covariance matrix was calculated for three isotopes: ²³⁸U, ¹⁵⁷Gd, and ²³Na. The uncertainty in k_eff due to the uncertainty in the ²³⁸U and ¹⁵⁷Gd cross sections was calculated for two criticality safety benchmarks and a beginning-of-life PWR model. The uncertainty of several reaction rate ratios due to the uncertainty in the ¹⁵⁷Gd cross section was also calculated for the PWR model. The resonances of ²³⁸U and ¹⁵⁷Gd that contribute most to the uncertainty were identified for the criticality safety benchmarks.
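&#13;
The first-order propagation step referenced above is the standard "sandwich rule"; in generic notation (the textbook form, with symbols chosen here):&#13;
&#13;
    Var(R) \approx S^{\top} \Sigma \, S, \qquad S_{i} = \frac{\partial R}{\partial p_{i}}
&#13;
where R is the response (e.g., k_eff), p are the windowed multipole parameters, S the sensitivity coefficients tallied by Monte Carlo, and \Sigma the windowed multipole covariance matrix.&#13;
&#13;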
Thesis: Ph. D. in Computational Nuclear Science &amp; Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, September 2020; cataloged from the student-submitted PDF version of thesis; includes bibliographical references (pages 225-228).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145789</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic modeling and Bayesian inference via triangular transport</title>
<link>https://hdl.handle.net/1721.1/145049</link>
<description>Probabilistic modeling and Bayesian inference via triangular transport
Baptista, Ricardo Miguel
Probabilistic modeling and Bayesian inference in non-Gaussian settings are pervasive challenges for science and engineering applications. Transportation of measure provides a principled framework for treating non-Gaussianity and for generalizing many methods that rest on Gaussian assumptions. A transport map deterministically couples a simple reference distribution (e.g., a standard Gaussian) to a complex target distribution via a bijective transformation. Finding such a map enables efficient sampling from the target distribution and immediate access to its density. Triangular maps comprise a general class of transports that are attractive from the perspectives of analysis, modeling, and computation. This thesis: (1) develops a general representation for monotone triangular maps, and adaptive methodologies for estimating such maps (and their associated pushforward densities) from samples; (2) uses triangular maps and their compositions to perform Bayesian computation in likelihood-free settings, including new ensemble methods for nonlinear filtering; and (3) proposes parameter and data dimension reduction techniques with error guarantees for high-dimensional inverse problems.&#13;
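&#13;
Concretely, a (Knothe-Rosenblatt) triangular map on R^d has the lower-triangular structure below, and its pushforward density follows from the usual change of variables (standard definitions, stated here for context):&#13;
&#13;
    S(x) = \big( S^{1}(x_{1}), \; S^{2}(x_{1}, x_{2}), \; \ldots, \; S^{d}(x_{1}, \ldots, x_{d}) \big), \qquad \pi(x) = \eta\big(S(x)\big) \prod_{k=1}^{d} \frac{\partial S^{k}}{\partial x_{k}}(x)
&#13;
where \eta is the reference density and monotonicity (\partial_{x_{k}} S^{k} > 0) makes the Jacobian determinant the product of the diagonal derivatives. Each component S^{k} exposes a conditional of the target \pi, which is the property exploited for Bayesian inference below.&#13;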
&#13;
The first part of the thesis explores the use of triangular transport maps for density estimation and for learning probabilistic graphical models. To construct triangular maps, we represent monotone functions as smooth transformations of unconstrained (non-monotone) functions. We show how certain structural choices for these transformations lead to smooth optimization problems with no spurious local minima, i.e., where all local minima are global minima. Given samples, we then propose an adaptive algorithm that estimates maps with sparse variable dependence. We demonstrate how this framework enables joint and conditional density estimation across a range of sample sizes, and how it can explicitly learn the Markov properties of a continuous non-Gaussian distribution. To this end, we introduce a consistent estimator for the Markov structure based on integrated Hessian information from the log-density. We then propose an iterative algorithm for learning sparse graphical models by exploiting a corresponding sparsity structure in triangular maps. A core advantage of triangular maps is that their components expose conditionals of the target distribution. Hence, learning a map that depends on both parameters and observations enables efficient sampling from the posterior distribution in a Bayesian inference problem. Crucially, this can be done without evaluating the likelihood function, which is often inaccessible or computationally prohibitive in scientific applications (as with forward models given by stochastic partial differential equations, which we consider here). In the second part of this thesis, we propose and analyze a specific composition of transport maps that directly transforms prior samples into posterior samples. We show that this approach, termed the stochastic map (SM) algorithm, improves over other transport-based methods for conditional sampling by reducing the bias and variance of the associated posterior approximation. We then use the SM algorithm to sequentially estimate the state of a chaotic dynamical system given online observations, a nonlinear filtering problem known in geophysical applications as “data assimilation” (DA). We show that when the SM algorithm is restricted to linear maps, it reduces to the ensemble Kalman filter (EnKF), a workhorse algorithm for DA; with nonlinear updates, however, the SM algorithm substantially improves on the performance of the EnKF in challenging regimes.&#13;
&#13;
Finally, we extend the use of transport to high-dimensional inference problems by developing a joint dimension reduction strategy for parameters and observations. We identify relevant low-dimensional projections of these variables by minimizing an information-theoretic upper bound on the error in the posterior approximation. We show that this approach reduces to canonical correlation analysis in the linear-Gaussian setting, while outperforming standard dimension reduction strategies in a variety of nonlinear and non-Gaussian inference problems.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/145049</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Control Through a Modern Lens: Applications in Supply Chain Analytics and Logistical Systems</title>
<link>https://hdl.handle.net/1721.1/144691</link>
<description>Stochastic Control Through a Modern Lens: Applications in Supply Chain Analytics and Logistical Systems
Qin, Hanzhang
This thesis investigates classical multi-period stochastic control problems through a modern lens, including stochastic inventory control, dynamic pricing, and vehicle routing. A brief history of the academic work on stochastic control is presented in Chapter 1, where the relevance of the literature on stochastic processes, dynamic programming, and reinforcement learning is also discussed. The thesis then focuses on revisiting inventory control, dynamic pricing, and vehicle routing (i) in a data-driven fashion and (ii) with flexible architectures. Chapters 2-3 present several state-of-the-art results on data-driven inventory control. In Chapter 2, the following question is revisited: how much data is needed to obtain a (nearly) optimal policy for inventory control? To resolve this long-standing open question, a novel sample-based algorithm is proposed for the backlog setting, and a matching lower bound (up to a logarithmic factor) is also given. In Chapter 3, the same question is studied for the joint pricing and inventory control problem, and the first sample-efficient solution is proposed. Chapter 4 is dedicated to the vehicle routing problem with stochastic demands (VRPSD). By combining ideas from vehicle routing and manufacturing process flexibility, a new approach to the VRPSD is proposed that uses overlapped routing with customer sharing in route determination; its performance is close to the theoretical lower bound and significantly improves upon routing strategies without overlapped routes. Chapter 5 concludes the thesis and points out several future research directions.
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/144691</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Sampling Methods of, by, and for Stochastic Dynamical Systems</title>
<link>https://hdl.handle.net/1721.1/143353</link>
<description>Efficient Sampling Methods of, by, and for Stochastic Dynamical Systems
Zhang, Benjamin Jiahong
This thesis presents new methodologies that lie at the intersection of computational statistics and computational dynamics. Stochastic differential equations (SDEs) are used to model a variety of physical systems, and computing expectations over marginal distributions of SDEs is important for the analysis of such systems. In particular, quantifying the probabilities of rare events in SDEs, and elucidating the mechanisms by which these events occur, are critical to the design and safe operation of engineered systems.&#13;
&#13;
In the first part of the thesis, we use data-driven tools for dynamical systems to create methods for efficient rare event simulation in nonlinear SDEs. Our approach exploits the relationship between the stochastic Koopman operator and the Kolmogorov backward equation to derive optimal importance sampling and multilevel splitting estimators. By expressing an indicator function over a rare event in terms of the eigenfunctions of the stochastic Koopman operator, we directly approximate the associated zero-variance importance sampling estimator. We also devise efficient multilevel splitting schemes for SDEs by using the Koopman eigenfunctions to approximate the optimal importance function.&#13;
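&#13;
For context, the zero-variance importance sampling density being approximated here has the standard form (generic statement, in notation chosen here):&#13;
&#13;
    q^{*}(x) = \frac{p(x) \, \mathbf{1}_{A}(x)}{\mathbb{P}(A)}
&#13;
Sampling from q^{*} makes the likelihood-ratio estimator of the rare-event probability \mathbb{P}(A) exactly constant; since \mathbb{P}(A) is unknown, the Koopman eigenfunction expansion of the indicator provides a computable surrogate for this ideal change of measure.&#13;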
&#13;
Stochastic dynamical systems can also be tools for solving problems in computational statistics. Creative uses of SDEs have been instrumental in developing efficient sampling methods for high-dimensional, non-Gaussian probability distributions. The second part of the thesis develops new sampling methods that employ judiciously constructed SDEs. We first present a framework for constructing controlled SDEs that can sample from a large class of probability distributions with Gaussian tails in finite time. By choosing a linear SDE as the uncontrolled reference system, we synthesize feedback controllers that drive the sampling of such distributions. We identify and approximate these controllers by solving only a static optimization problem.&#13;
&#13;
Next, we develop novel approaches for accelerating the convergence of Langevin dynamics-based samplers. Reversible and irreversible perturbations of Langevin dynamics can improve the performance of Langevin samplers. We present the geometry-informed irreversible perturbation (GiIrr) and show that it accelerates convergence of Riemannian manifold Langevin dynamics more than standard irreversible perturbations. We then propose the transport map unadjusted Langevin algorithm (TMULA), and show that the use of transport enables rapid convergence of the unadjusted Langevin algorithm for distributions that are not strongly log-concave. We also make connections between transport maps and Riemannian manifold Langevin dynamics to elucidate how transport maps accelerate convergence.
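&#13;
As background for these perturbations, overdamped Langevin dynamics and its irreversibly perturbed variant can be written as (standard forms; GiIrr itself further combines the skew-symmetric drift with the Riemannian metric, which is not restated here):&#13;
&#13;
    dX_{t} = -\nabla U(X_{t}) \, dt + \sqrt{2} \, dW_{t}, \qquad dX_{t} = -(I + \alpha J) \nabla U(X_{t}) \, dt + \sqrt{2} \, dW_{t}
&#13;
where J is any constant skew-symmetric matrix (J^{\top} = -J). Both SDEs leave the target density \pi \propto e^{-U} invariant, and the irreversible drift typically accelerates convergence to \pi.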
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/143353</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>An efficient algorithm for sensitivity analysis of chaotic systems</title>
<link>https://hdl.handle.net/1721.1/140194</link>
<description>An efficient algorithm for sensitivity analysis of chaotic systems
Chandramoorthy, Nisha
How does long-term chaotic behavior respond to small parameter perturbations? Chaotic systems are frequently simulated in detail across disciplines, from climate science to astrophysics. But an efficient computation of parametric derivatives of their statistics or long-term averages, also known as the linear response, is an open problem. The difficulty is due to an inherent feature of chaos: the exponential growth over time of infinitesimal perturbations, which renders conventional methods for sensitivity computation inapplicable. More sophisticated recent approaches, including ensemble-based and shadowing-based methods, are either computationally impractical or lack convergence guarantees. We propose a novel alternative, known as space-split sensitivity (S3), which evaluates the linear response as an efficiently computable, provably convergent ergodic average. The main contribution of this thesis is the development of the S3 algorithm for uniformly hyperbolic systems (the simplest setting in which chaotic attractors occur) with one-dimensional unstable manifolds. S3 can enable applications of the computed sensitivities to optimization, control theory, and uncertainty quantification in the realm of chaotic dynamics, where these applications remain nascent.&#13;
&#13;
We propose a transformation of Ruelle's rigorous linear response formula, which is ill-conditioned in its original form, into a well-conditioned ergodic-averaging computation. We prove a decomposition of Ruelle's formula, called the S3 decomposition, that is differentiable on the unstable manifold. The S3 decomposition ensures that one of the resulting terms, the stable contribution, can be computed using a regularized tangent equation, as in a non-chaotic system. The remainder, known as the unstable contribution, is regularized and converted into a computable ergodic average. The S3 algorithm presented here can be naturally extended to systems with higher-dimensional unstable manifolds.&#13;
&#13;
The secondary contributions of this thesis are analyses and applications of existing methods for computing the linear response, including shadowing-based and ensemble-based methods. A feasibility analysis of ensemble sensitivity calculation, which is a direct evaluation of Ruelle's formula, reveals a problem-dependent and typically poor rate of convergence, rendering it computationally impractical. Shadowing-based sensitivity computation is not guaranteed to converge because of the atypicality of shadowing orbits. This atypicality also implies that small parameter perturbations can lead, contrary to popular belief, to a large change in the statistics of a chaotic system; a consequence is that numerical simulations of chaotic systems may not reproduce their true long-term behaviors.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/140194</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Applications of Deep Learning to Scientific Inverse Problems</title>
<link>https://hdl.handle.net/1721.1/139969</link>
<description>Applications of Deep Learning to Scientific Inverse Problems
Li, Matthew T. C.
The first part of this thesis introduces an end-to-end deep learning architecture, called the wide-band butterfly network (WideBNet), which comprehensively solves the inverse wave scattering problem across all length scales. Our architecture incorporates the physics of wave propagation using tools from computational harmonic analysis, specifically the butterfly factorization, and traditional multi-scale methods such as the Cooley-Tukey FFT algorithm. This allows WideBNet to automatically adapt to the dimension of the data so that the number of trainable parameters scales linearly, up to logarithmic factors, with the inherent complexity of the inverse problem. While our trained network provides competitive results in classical imaging regimes, most notably it also succeeds in the super-resolution regime where other comparable methods fail. This encompasses both (i) reconstruction of scatterers with sub-wavelength geometric features, and (ii) accurate imaging when two or more scatterers are separated by less than the classical diffraction limit. We demonstrate that these properties are retained even in the presence of strong noise and extend to scatterers not seen in the training set. In addition, we also demonstrate that our proposed framework outperforms both classical inversion and competing architectures specialized for wave scattering across a variety of scattering media.&#13;
&#13;
The second contribution of this thesis concerns scientific inverse problems in which uncontrollable experimental conditions induce nuisance variations in the data and encumber inference. In particular, domain experts in these settings contend with the challenge of disambiguating whether changes in the data arise from the evolution of the physical quantities of interest (in effect, the signal) or from experimental fluctuations (in effect, the noise). We address this question using a bespoke auto-encoding architecture called the symmetric autoencoder (SymAE). SymAE embeds the data into explanatory latent coordinates corresponding to either coherent physical information or nuisance information. We assume weak supervision in the data and explicitly incorporate symmetries into the architecture to achieve this partitioning. As a result, SymAE can align datapoints to a common nuisance variation by swapping the relevant coordinates in the structured latent space. The resulting virtual datapoints can then be reliably used by domain experts to extract the physics retained in the coherent information. As a motivating example, we consider applications to time-lapse monitoring, in which geophysicists aim to determine whether changes in data arise from evolution in subsurface variabilities (e.g., leaks of supercritical CO₂) or from uncontrollable conditions encountered during the seismic survey (e.g., from the inherent randomness of the micro-seismic sources). We provide numerical experiments demonstrating that SymAE is capable of disentangling coherent and nuisance effects in its latent space for a broad range of wave-propagation models. Furthermore, we quantify the accuracy of SymAE redatuming using examples with synthetic seismic data.
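&#13;
The latent-swap mechanism described above is simple to state in code. The sketch below is a toy, hedged illustration of the idea, assuming placeholder encode/decode functions whose latent space is already partitioned into coherent and nuisance coordinates; it is not the SymAE architecture itself.&#13;
&#13;
# Toy sketch of SymAE-style redatuming via latent swapping.
import numpy as np

def encode(x):
    # Placeholder: in SymAE, a trained encoder separates a coherent
    # (physics) code from a nuisance code; here we just split the vector.
    half = x.shape[0] // 2
    return x[:half].copy(), x[half:].copy()

def decode(coherent, nuisance):
    # Placeholder inverse of encode(); in SymAE, a trained decoder.
    return np.concatenate([coherent, nuisance])

def redatum(x_target, x_reference):
    # Virtual datapoint: the physics of x_target rendered under the
    # nuisance conditions (e.g., source randomness) of x_reference.
    coherent, _ = encode(x_target)
    _, nuisance_ref = encode(x_reference)
    return decode(coherent, nuisance_ref)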
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139969</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sea Spray-Mediated Fluxes at Extreme Wind Speeds</title>
<link>https://hdl.handle.net/1721.1/139956</link>
<description>Sea Spray-Mediated Fluxes at Extreme Wind Speeds
Sroka, Sydney
Tropical cyclones are complex systems that are challenging to forecast and model. Since tropical cyclones are powered by the warm ocean surface, the accuracy of intensity forecasts depends heavily on the air-sea interaction scheme. However, at extreme wind speeds the air-sea transition layer becomes so replete with sea spray that there is no longer a well-defined interface. This means that the microphysics of sea spray plays a critical role in mediating the fluxes that control tropical cyclone intensity.&#13;
&#13;
The first part of this thesis reviews and synthesizes results from the literature on parameterizations of air-sea enthalpy and momentum fluxes in tropical cyclones, with an emphasis on work that estimated the sea spray-mediated fluxes. The second part of this thesis analyzes the microphysical equations that describe how sea spray mediates enthalpy and momentum. An analysis of an ensemble of temperature, radius, and speed time histories of evaporating drops suggests that, for sufficiently high wind speeds, the formulation for air-sea exchange can be substantially simplified. The third part of this thesis describes the results from multiphase, direct numerical simulations of the sea surface subject to a large wind stress. The preliminary results suggest that the simulated vertical transport of liquid water is comparable to the expected volume flux, which is an encouraging outcome for the prospect of being able to supplement sparse observations of sea spray with numerical simulations. Finally, the fourth part of this thesis analyzes the turbulent air-sea heat flux over ocean mesoscale eddies in reanalysis data to determine whether persistent sea surface temperature perturbations have a significant effect on the time-averaged turbulent heat flux. The findings show that the ocean mesoscale eddies have a small but detectable influence on the time-averaged turbulent heat flux in the reanalysis data.&#13;
&#13;
This thesis explores how small-scale processes can project onto large-scale dynamics. For tropical cyclones in particular, as model resolution improves, previously unresolved mechanisms will come into focus and help illuminate the workings of these complex natural phenomena.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139956</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>From Spheres to Sheets: Colloidal Hydrodynamics, Thermodynamics, and Statistical Inference</title>
<link>https://hdl.handle.net/1721.1/139891</link>
<description>From Spheres to Sheets: Colloidal Hydrodynamics, Thermodynamics, and Statistical Inference
Silmore, Kevin Stanton
This thesis involves the development of Bayesian methods for statistical inference of distributions, the construction of optimization and thermodynamic sampling algorithms, and the use of hydrodynamical simulations to better understand the physics of soft matter systems consisting of particles ranging in shape from spheres to sheets.&#13;
&#13;
In the first part of this thesis, we introduce a Bayesian method that we call Maximum A posteriori Nanoparticle Tracking Analysis (MApNTA) for estimating the size distributions of nanoparticle samples from high-throughput single-particle tracking experiments. We show that this approach infers nanoparticle size distributions with high resolution by performing extensive Brownian dynamics simulations and experiments with mono- and polydisperse solutions of gold nanoparticles as well as single-walled carbon nanotubes. We then extend the developed nonparametric Bayesian framework to infer the orientation probability distribution function (OPDF) of suspensions of rod-like particles from small-angle neutron scattering data, with a method that we call Maximum A Posteriori Scattering Inference (MAPSI).&#13;
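&#13;
The physical link that makes size inference from single-particle tracking possible is the Stokes-Einstein relation (a standard result, stated here for context):&#13;
&#13;
    D = \frac{k_{B} T}{6 \pi \eta R}
&#13;
where k_{B} is Boltzmann's constant, T the temperature, \eta the solvent viscosity, and R the hydrodynamic radius. A diffusivity D fitted to each Brownian trajectory maps to a radius, and the Bayesian machinery then infers the distribution of R across the sample from the noisy per-trajectory estimates.&#13;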
&#13;
In the second part of this thesis, we create two high-performance algorithms — one for feasible optimization and the other for accelerated thermodynamic sampling — to aid in the simulation of large-scale physical models. Drawing on the Riemannian optimization and sequential quadratic programming literature, a practical algorithm that we call Locally Feasibly Projected Sequential Quadratic Programming (LFPSQP) is constructed to conduct feasible optimization on arbitrary implicitly defined constraint manifolds. Specifically, with n (potentially bound-constrained) variables and m &lt; n nonlinear constraints, each outer optimization loop iteration involves a single O(nm^2)-flop factorization, and computationally efficient retractions are constructed that involve O(nm)-flop inner loop iterations. The second algorithm developed, called Collective Mode Brownian Dynamics (CMBD), is a method based on Brownian dynamics simulations that uses a specially constructed mobility matrix that can reduce the computational time it takes to reach equilibrium and draw decorrelated thermodynamic samples. Importantly, the method is completely agnostic to particle configuration and the specifics of interparticle forces and runs in O(N) time on graphics processing units, where N is the number of particles.&#13;
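
The feasible-projection idea can be illustrated in a few lines (a toy we wrote, not LFPSQP itself, which relies on the factorizations and efficient retractions described above): take a tangent-space gradient step, then pull the iterate back onto the constraint manifold with Newton iterations.

import numpy as np

def c(x):
    return np.array([x @ x - 1.0])        # single constraint: unit sphere

def Jc(x):
    return 2.0 * x[None, :]               # 1 x n constraint Jacobian

def retract(x, tol=1e-12, maxit=50):
    """Newton iterations pulling x back onto c(x) = 0."""
    for _ in range(maxit):
        cx, Jx = c(x), Jc(x)
        if np.abs(cx).max() > tol:
            x = x - Jx.T @ np.linalg.solve(Jx @ Jx.T, cx)
        else:
            break
    return x

a = np.array([2.0, 1.0, 0.5])             # minimize |x - a|^2 on the sphere
x = retract(np.array([1.0, 0.0, 0.0]))
for _ in range(200):
    g = 2.0 * (x - a)                     # objective gradient
    Jx = Jc(x)
    g_t = g - Jx.T @ np.linalg.solve(Jx @ Jx.T, Jx @ g)   # tangent projection
    x = retract(x - 0.05 * g_t)           # step, then pull back to c(x) = 0
print(x, a / np.linalg.norm(a))           # converges to the closest feasible point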
&#13;
In the final part of this thesis, we study the behavior of flexible 2D materials. Using the LFPSQP algorithm for feasible optimization, the minimum-energy shapes of membranes with boundaries subject to fixed area and contour lengths (relevant to 2D biological objects like kinetoplasts) are found over a range of dimensionless areas and dimensionless spontaneous curvatures. Notably, as spontaneous curvature is increased, it is found that axisymmetry is broken. The constrained normal modes of the sheets are also computed and shed light on the behavior of fluctuations. Additionally, we perform numerical simulations of "tethered" semiflexible sheets with hydrodynamic interactions in shear flow. With athermal sheets, we find buckling instabilities of different mode numbers that vary with bending stiffness and can be understood with a quasi-static model of elasticity. For different initial orientations, chaotic tumbling trajectories are observed. With thermal sheets, we observe a dynamical transition from stochastic flipping to significant crumpling and continuous tumbling consistent with the onset of chaotic dynamics found for athermal sheets. The effects of different dynamical conformations on rheological properties such as viscosity and normal stress differences are also quantified.
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139891</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear Computations under Uncertainty: New methods to infer and propagate nuclear data uncertainty across Monte Carlo simulations</title>
<link>https://hdl.handle.net/1721.1/139530</link>
<description>Nuclear Computations under Uncertainty: New methods to infer and propagate nuclear data uncertainty across Monte Carlo simulations
Ducru, Pablo
This thesis introduces new methods to efficiently infer and propagate nuclear data uncertainty across Monte Carlo simulations of nuclear technologies. The main contributions come in two areas: 1. novel statistical methods and machine learning algorithms (Embedded Monte Carlo); 2. new mathematical parametrizations of the quantum physics models of nuclear interactions and their uncertainties (Stochastic Windowed Multipole Cross Sections).&#13;
&#13;
1. Embedded Monte Carlo infers the uncertainty in nuclear code inputs (reactor geometry, nuclear data, etc.) from samples of noisy outputs (e.g. experimental observations), and in turn propagates this uncertainty back to the simulation outputs (reactor power, reaction rates, flux, multiplication factor, etc.), without ever converging any single Monte Carlo reactor simulation. Such embedding of the uncertainty within the Nested Monte Carlo computations vastly outperforms previous methods (10–100 times fewer runs), and is achieved by approximating the input parameters' Bayesian posterior via variational inference, and reconstructing the output distribution via moment estimators. We validate the Embedded Monte Carlo method on a new analytic benchmark for neutron slowdown that we derived.&#13;
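
As a stripped-down illustration of the moment-reconstruction step only (our toy, with an invented response function and noise level, not the thesis's nuclear setting): each run is left unconverged, and the known per-run statistical noise is subtracted out by an unbiased moment estimator.

import numpy as np
rng = np.random.default_rng(1)

tau = 0.05                                   # known per-run statistical noise (std)
def noisy_run(x):
    """Stand-in for an unconverged Monte Carlo run: true response plus noise."""
    return np.exp(-x) + rng.normal(0.0, tau)

x = rng.normal(1.0, 0.2, 5000)               # uncertain inputs (e.g. nuclear data)
y = np.array([noisy_run(xi) for xi in x])    # one cheap, unconverged run each
print("E[f]   ~", y.mean())                  # noise is zero-mean: plain average works
print("Var[f] ~", y.var(ddof=1) - tau ** 2)  # moment estimator removes known noise
print("exact:  ", np.exp(-x).mean(), np.exp(-x).var(ddof=1))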
&#13;
2. Stochastic Windowed Multipole Cross Sections is an alternative way to parametrize nuclear interactions and their uncertainties (equivalent to R-matrix theory), whereby one can sample on-the-fly uncertain nuclear cross sections and analytically compute their thermal Doppler broadening. This drastically reduces the memory footprint of nuclear data (at least 1,000-fold), without incurring additional computational costs.&#13;
&#13;
These contributions are documented in nine peer-reviewed journal articles (eight published and one under review) and seven conference articles (six published and one under review), constituting the core of this thesis.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139530</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mathematical and Computational Foundations to Enable Predictive Digital Twins at Scale</title>
<link>https://hdl.handle.net/1721.1/139514</link>
<description>Mathematical and Computational Foundations to Enable Predictive Digital Twins at Scale
Kapteyn, Michael G.
A digital twin is a computational model that evolves over time to persistently represent a unique physical asset. Digital twins underpin intelligent automation by enabling asset-specific analysis and data-driven decision-making. Although the promise of digital twins is well established, state-of-the-art digital twins are typically bespoke, one-off implementations that require considerable expertise and deployment resources. This thesis develops mathematical and computational foundations to support the transition from this custom implementation phase toward accessible and robust digital twins at scale.&#13;
&#13;
First, a unified mathematical foundation for digital twins is established. A mathematical abstraction of a digital twin and its associated physical asset is presented. This abstraction is then developed into a probabilistic graphical model describing the evolution of the coupled system. This model affords a unified treatment of all the aspects of a digital twin and can span the entire asset lifecycle. While mathematically rigorous, the model is flexible and extensible, enabling application across a wide range of areas.&#13;
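
A minimal discrete-state caricature of such a graphical model (ours, not the thesis abstraction): the asset's health evolves under a transition model, and the digital twin updates its belief by Bayesian filtering on noisy observations.

import numpy as np

states = ["healthy", "degraded", "damaged"]
T = np.array([[0.95, 0.04, 0.01],     # P(next state | current): slow degradation
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
E = np.array([[0.80, 0.15, 0.05],     # P(sensor bucket | state): low/med/high strain
              [0.30, 0.50, 0.20],
              [0.05, 0.25, 0.70]])

belief = np.array([1.0, 0.0, 0.0])    # start from a healthy asset
for obs in [0, 1, 1, 2, 2]:           # observed strain buckets over time
    belief = T.T @ belief             # predict: one step of asset dynamics
    belief *= E[:, obs]               # update: assimilate the observation
    belief /= belief.sum()
    print(dict(zip(states, belief.round(3))))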
&#13;
Building on this mathematical foundation, scalable computational methodologies are developed to enable asset-specific physics-based models to be incorporated into a digital twin. A central element of the proposed approach is a library of component-based reduced-order models derived from high-fidelity simulations of the asset in various states. The component-based approach scales efficiently to complex systems and provides a flexible and expressive framework for model adaptation—both critical features in the digital twin context. A methodology is proposed for combining these physics-based models with interpretable machine learning techniques in order to determine which observational data are most informative, and how these data can be fused within an interpretable classifier. This classifier can be deployed online to enable dynamic data-driven updating of the digital twin.&#13;
&#13;
The proposed methodologies are demonstrated through the creation, calibration, and deployment of a structural digital twin for a custom-built 12ft wingspan unmanned aerial vehicle. In flight, the digital twin assimilates sensor data to update its internal structural models in response to damage or degradation. The dynamically updated digital twin provides rapid computational analysis of the vehicle’s structural health, which in turn enables intelligent self-aware decision-making.
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139514</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Risk Assessment and Optimal Response Strategies for Resilience of Electric Power Infrastructure to Extreme Weather</title>
<link>https://hdl.handle.net/1721.1/139339</link>
<description>Risk Assessment and Optimal Response Strategies for Resilience of Electric Power Infrastructure to Extreme Weather
Chang, Hao-Yu Derek
Extreme weather is an increasingly critical threat to infrastructure systems. This thesis develops a stochastic modeling and decision-making framework for proactive resource allocation and response strategies to improve the resilience of electric power infrastructure in the wake of severe weather events. The framework is based on a physically-based, probabilistic risk assessment approach to estimating the weather-induced damage, and accounts for power flow constraints in designing response actions within electricity distribution networks. &#13;
&#13;
Firstly, we formulate an asymmetric hurricane wind field model that is applicable to forecasting and large-scale ensemble simulation. The hurricane wind field model incorporates low-wavenumber asymmetries, and its parameters are estimated using a Constrained Nonlinear Least Squares problem. Inclusion of asymmetries in the model improves the accuracy of wind risk assessment in the hurricane eye wall, where wind velocities are maximized. &#13;
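
The same fitting pattern in miniature (an invented idealized profile and synthetic data, not the thesis model): an axisymmetric eyewall profile modulated by a wavenumber-1 asymmetry, calibrated with bounded nonlinear least squares.

import numpy as np
from scipy.optimize import least_squares

def wind(params, r, th):
    vm, rm, a1, ph1 = params
    axisym = vm * (r / rm) * np.exp(1.0 - r / rm)     # idealized eyewall profile
    return axisym * (1.0 + a1 * np.cos(th - ph1))     # wavenumber-1 asymmetry

rng = np.random.default_rng(2)
r = rng.uniform(5e3, 200e3, 500)                      # radii of observations (m)
th = rng.uniform(0.0, 2.0 * np.pi, 500)               # azimuths
truth = np.array([60.0, 40e3, 0.25, 1.0])
obs = wind(truth, r, th) + rng.normal(0.0, 2.0, r.size)   # noisy winds (m/s)

fit = least_squares(lambda p: wind(p, r, th) - obs,
                    x0=[40.0, 30e3, 0.0, 0.0],
                    bounds=([10.0, 5e3, 0.0, -np.pi], [90.0, 150e3, 0.9, np.pi]))
print(fit.x)   # recovered [Vmax, Rmax, asymmetry amplitude, phase]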
&#13;
Secondly, the wind field forecasts are used as inputs to a probabilistic model for damage estimation in infrastructure systems. The novelty of this damage model is that it accounts for the spatial variability in damage estimates resulting from the hurricane wind field and forecast uncertainty in the hurricane’s temporal evolution. We demonstrate that our model is capable of accurately predicting outage rates resulting from damage to the electrical grid following Hurricane Michael. &#13;
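
A sketch of this probabilistic damage-estimation pattern (illustrative fragility parameters, not the calibrated model): forecast uncertainty perturbs the wind field across Monte Carlo replicates, and each replicate draws component failures from a fragility curve.

import numpy as np
from scipy.stats import lognorm

def p_fail(v, median=50.0, beta=0.25):
    """Illustrative fragility curve: P(component failure | wind speed v, m/s)."""
    return lognorm.cdf(v, s=beta, scale=median)

rng = np.random.default_rng(3)
n_runs, n_poles = 2000, 500
v_base = rng.uniform(30.0, 60.0, n_poles)        # forecast wind speed at each pole
outages = np.empty(n_runs)
for k in range(n_runs):
    v = v_base * rng.lognormal(0.0, 0.1)         # forecast error, one draw per replicate
    outages[k] = (p_fail(v) > rng.random(n_poles)).sum()
print(outages.mean(), np.percentile(outages, [5, 95]))   # outage count and spread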
&#13;
Thirdly, we develop a computational approach for optimal resource allocation and multi-step response operations. Using a two-stage stochastic mixed-integer formulation, we model the strategic deployment of distributed energy resources (DERs) ahead of a storm's landfall, and the joint operation of islanded microgrids and repair of damaged components in the post-storm stage. The failure scenarios in this formulation are drawn from our physically-based damage model. The key challenge here is that the size of the optimization problem increases super-linearly with the network size. To address this computational bottleneck, we develop three solution approaches based on L-shaped Benders decomposition. These approaches incorporate the network structure and power flow constraints to derive more effective Benders cuts. We evaluate the scalability of these approaches on benchmark networks, and show that they are useful in evaluating the resilience improvements due to optimal DER allocation and response strategies under various resource constraints.
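
The two-stage structure can be seen in a miniature deterministic-equivalent linear program (continuous variables stand in for the thesis's mixed-integer recourse, and the numbers are invented): stage one stages DER capacity before landfall, stage two sheds load in each damage scenario.

import numpy as np
from scipy.optimize import linprog

scenarios = np.array([20.0, 60.0, 120.0])   # unserved demand per scenario (MWh)
prob = np.array([0.5, 0.3, 0.2])            # scenario probabilities
c_der, c_shed = 3.0, 10.0                   # $/unit DER capacity vs. $/unit load shed

# Variables v = [x, z1, z2, z3]: minimize c_der*x + sum_s prob_s*c_shed*z_s
c = np.concatenate([[c_der], c_shed * prob])
# Coverage constraints x + z_s >= d_s, written as -x - z_s <= -d_s for linprog.
A = np.hstack([-np.ones((3, 1)), -np.eye(3)])
res = linprog(c, A_ub=A, b_ub=-scenarios, bounds=[(0, None)] * 4)
x, z = res.x[0], res.x[1:]
print(f"stage {x:.0f} units of DERs; expected load shed {prob @ z:.1f}")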
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/139339</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combining numerical simulation and machine learning - modeling coupled solid and fluid mechanics using mesh-free methods</title>
<link>https://hdl.handle.net/1721.1/138524</link>
<description>Combining numerical simulation and machine learning - modeling coupled solid and fluid mechanics using mesh-free methods
Raymond, Samuel J. (Samuel James)
The prediction and understanding of physical systems is largely divided into two camps: approaches based on data, and approaches based on numerical models. These two approaches have long been developed independently of each other. This work improves the modeling of physical systems and also presents a new way to inject data from simulations into a deep learning architecture to aid the engineering design process. In this thesis, the Material Point Method (MPM), a computational mechanics technique, is extended to model the mixed-mode failure of damage propagation and plasticity in the aggregate materials commonly found deep underground. To achieve this, the Grady-Kipp damage model and the pressure-dependent Drucker-Prager plasticity model are coupled to allow mixed-mode failure to develop in the material. This is tested against analytical results for brittle materials, as well as a series of experimental results. In addition, brittle fracture in thin silicon wafers is modeled to better understand the tolerances of manufacturing loads on these delicate objects. Finally, in a novel approach combining numerical simulation and the power of a deep neural network, biomedical device design is studied. Here, the acoustofluidics of a microchip is simulated to generate a large dataset of boundary conditions and solved pressure fields. This dataset is then used to train a neural network so that the inverse relationship between the boundary condition and the pressure field can be obtained. Once this training is complete, the network is used as a design tool for a specified pressure field, and the results are fabricated and tested.
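
The inverse-design loop in caricature (a cheap stand-in "simulator" and a scikit-learn network of our choosing, not the thesis's acoustofluidic solver): simulate (boundary condition, field) pairs, train a network on the inverse map, then query it with a desired field.

import numpy as np
from sklearn.neural_network import MLPRegressor

x_grid = np.linspace(0.0, 1.0, 64)
def simulate(bc):
    """Stand-in for the field solver: amplitude a and wavenumber k."""
    a, k = bc
    return a * np.sin(np.pi * k * x_grid)

rng = np.random.default_rng(9)
bcs = np.column_stack([rng.uniform(0.5, 2.0, 2000), rng.uniform(1.0, 4.0, 2000)])
fields = np.array([simulate(b) for b in bcs])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(fields, bcs)                         # learn the inverse map: field to bc

target = simulate([1.3, 2.5])                # a desired pressure field
print(net.predict(target[None, :]))          # recovered boundary parameters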
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 137-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138524</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancement of unconventional oil and gas production forecasting using mechanistic-statistical modeling</title>
<link>https://hdl.handle.net/1721.1/138523</link>
<description>Enhancement of unconventional oil and gas production forecasting using mechanistic-statistical modeling
Montgomery, Justin B. (Justin Bruce)
Unconventional oil and gas basins have rapidly become expansive and critical energy resource systems. However, accurately predicting highly variable well production rates remains challenging, given the typically poor subsurface characterization and complex flow behavior involved. This creates uncertainty about future resource availability, undermining reliable economic assessments and good stewardship of the resource. Production, drilling, and hydraulic fracturing datasets from thousands of wells offer insight into patterns of productivity but are noisy and incomplete. Fully exploiting this information is only possible by leveraging contextual knowledge to structure observations. This thesis provides a novel framework for combining machine learning and probabilistic modeling with domain knowledge and physics to understand and predict well productivity. Technology is a constantly evolving driver of productivity that must be captured in forecasts. This thesis shows that the immense geological heterogeneity of unconventional basins can lead to overestimating the role of technology when the best areas are increasingly targeted alongside design improvements. This conflation is remedied using spatial structure to infer geological productivity as a latent variable. A regression-kriging technique is shown to effectively disentangle technology from geology--which play roughly equal roles--and reduce error in initial well productivity predictions by more than a third compared to established methods. Long-term production dynamics for unconventional wells are unpredictable and current forecasting approaches have considerable limitations. Fitted production curve models are ill-posed and unreliable, but aggregated type-well curves ignore important differences between wells. This thesis introduces Tikhonov regularization as a way of effectively sharing information across wells, cutting error in the earliest long-term productivity forecasts in half. Additionally, a spatiotemporal hierarchical Bayesian approach is developed that incorporates physical relationships to enhance predictions and interpretability while quantifying and reducing uncertainty. Sampling from this high dimensional model is enabled by designing a unique Metropolis-Hastings within Gibbs scheme to take advantage of the model's structure. This novel mechanistic-statistical approach is able to learn and generalize physical relationships across ensembles of wells with vastly different properties--realistic scenarios where current techniques generate two to five times as much error--providing an important and practical advance in better understanding and managing these resources.
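
One way to read the Tikhonov idea in code (our illustration with exponential declines, not the thesis formulation): each well's decline parameters are pulled toward an ensemble prior, so short-history wells borrow strength from their neighbors.

import numpy as np
rng = np.random.default_rng(4)

n_wells = 30
q0 = rng.lognormal(5.0, 0.2, n_wells)            # initial rates
D_true = rng.normal(0.08, 0.02, n_wells)         # true decline rates (1/month)
months = rng.integers(3, 36, n_wells)            # histories of varying length

data = []                                        # per-well design matrix and log rates
for i in range(n_wells):
    t = np.arange(months[i], dtype=float)
    y = np.log(q0[i]) - D_true[i] * t + rng.normal(0.0, 0.3, t.size)
    data.append((np.column_stack([np.ones_like(t), -t]), y))

def fit(lam, prior):
    """Tikhonov-regularized fit pulling [log q0, D] toward a shared prior."""
    sol = [np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y + lam * prior)
           for A, y in data]
    return np.array(sol)

free = fit(0.0, np.zeros(2))                     # independent per-well fits
pooled = fit(5.0, free.mean(axis=0))             # shrink toward the ensemble mean
print(np.abs(free[:, 1] - D_true).mean(), np.abs(pooled[:, 1] - D_true).mean())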
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 107-115).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/138523</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian learning for high-dimensional nonlinear dynamical systems : methodologies, numerics and applications to fluid flows</title>
<link>https://hdl.handle.net/1721.1/132760</link>
<description>Bayesian learning for high-dimensional nonlinear dynamical systems : methodologies, numerics and applications to fluid flows
Lin, Jing, Ph. D., Massachusetts Institute of Technology.
The rapidly-growing computational power and the increasing capability of uncertainty quantification, statistical inference, and machine learning have opened up new opportunities for utilizing data to assist, identify and refine physical models. In this thesis, we focus on Bayesian learning for a particular class of models: high-dimensional nonlinear dynamical systems, which have been commonly used to predict a wide range of transient phenomena including fluid flows, heat transfer, biogeochemical dynamics, and other advection-diffusion-reaction-based transport processes. Even though such models often express the differential form of fundamental laws, they commonly contain uncertainty in their initial and boundary values, parameters, forcing and even formulation. Learning such components from sparse observation data by principled Bayesian inference is very challenging due to the systems' high-dimensionality and nonlinearity. We systematically study the theoretical and algorithmic properties of a Bayesian learning methodology built upon previous efforts in our group to address this challenge. Our systematic study breaks down into the three hierarchical components of the Bayesian learning and we develop new numerical schemes for each. The first component is on uncertainty quantification for stochastic dynamical systems and fluid flows. We study dynamic low-rank approximations using the dynamically orthogonal (DO) equations including accuracy and computational costs, and develop new numerical schemes for re-orthonormalization, adaptive subspace augmentation, residual-driven closure, and stochastic Navier-Stokes integration. The second part is on Bayesian data assimilation, where we study the properties of and connections among the different families of nonlinear and non-Gaussian filters. We derive an ensemble square-root filter based on minimal-correction second-moment matching that works especially well under the adversity of small ensemble size, sparse observations and chaotic dynamics. We also obtain a localization technique for filtering with high-dimensional systems that can be applied to nonlinear non-Gaussian inference with both brute force Monte Carlo (MC) and reduced subspace modeling in a unified way. Furthermore, we develop a mutual-information-based adaptive sampling strategy for filtering to identify the most informative observations with respect to the state variables and/or parameters, utilizing the sub-modularity of mutual information due to the conditional independence of observation noise. The third part is on active Bayesian model learning, where we have a discrete set of candidate dynamical models and we infer the model formulation that best explains the data using principled Bayesian learning. To predict the observations that are most useful to learn the model formulation, we further extend the above adaptive sampling strategy to identify the data that are expected to be most informative with respect to both state variables and the uncertain model identity. To investigate and showcase the effectiveness and efficiency of our theoretical and numerical advances for uncertainty quantification, Bayesian data assimilation, and active Bayesian learning with stochastic nonlinear high-dimensional dynamical systems, we apply our dynamic data-driven reduced subspace approach to several dynamical systems and compare our results against those of brute force MC and other existing methods. 
Specifically, we analyze our advances using several drastically different dynamical regimes modeled by the nonlinear Lorenz-96 ordinary differential equations as well as turbulent bottom gravity current dynamics modeled by the 2-D unsteady incompressible Reynolds-averaged Navier-Stokes (RANS) partial differential equations. We compare the accuracy, efficiency, and robustness of different methodologies and algorithms. With the Lorenz-96 system, we show how the performance differs under periodic, weakly chaotic, and very chaotic dynamics and under different observation layouts. With the bottom gravity current dynamics, we show how model parameters, domain geometries, initial fields, and boundary forcing formulations can be identified and how the Bayesian methodology performs when the candidate model space does not contain the true model. The results indicate that our active Bayesian learning framework can better infer the state variables and dynamical model identity with fewer observations than many alternative approaches in the literature.
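
For one flavor of the ensemble square-root family discussed above, here is a serial scalar-observation update in the style of Whitaker and Hamill (a standard textbook form, not the thesis's minimal-correction second-moment-matching scheme):

import numpy as np

def ensrf_update(X, y, H, R):
    """Serial ensemble square-root filter for one scalar observation.
    X: (n_state, n_ens) ensemble; H: (n_state,) observation row; R: obs variance."""
    xm = X.mean(axis=1)
    Xp = X - xm[:, None]                      # perturbations about the mean
    hx = H @ Xp                               # observed-space perturbations
    s2 = hx @ hx / (X.shape[1] - 1) + R       # innovation variance HPH' + R
    PHt = Xp @ hx / (X.shape[1] - 1)          # covariance term P H'
    K = PHt / s2                              # Kalman gain for the mean
    alpha = 1.0 / (1.0 + np.sqrt(R / s2))     # reduced gain (Whitaker-Hamill)
    xm = xm + K * (y - H @ xm)                # mean update
    Xp = Xp - alpha * np.outer(K, hx)         # perturbation update, no perturbed obs
    return xm[:, None] + Xp

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, (3, 20))             # 20-member ensemble, 3 state variables
H = np.array([1.0, 0.0, 0.0])                 # observe the first component
print(ensrf_update(X, y=0.8, H=H, R=0.25).mean(axis=1))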
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 553-567).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/132760</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction, analysis, and learning of advective transport in dynamic fluid flows</title>
<link>https://hdl.handle.net/1721.1/130845</link>
<description>Prediction, analysis, and learning of advective transport in dynamic fluid flows
Kulkarni, Chinmay Sameer.
Transport of any material quantity due to background fields, i.e. advective transport, in fluid dynamical systems has been a widely studied problem. It is of crucial importance in classical fluid mechanics, geophysical flows, micro- and nanofluidics, and biological flows. Even though mathematical models that thoroughly describe such transport exist, the inherent nonlinearities and the high dimensionality of complex fluid systems make it very challenging to develop the capabilities to accurately compute and characterize advective material transport. We systematically study the problems of predicting, uncovering, and learning the principal features of advective material transport in this work. The specific objectives of this thesis are to: (i) develop and apply new numerical methodologies to compute the solutions of advective transport equations with minimal errors and theoretical guarantees, (ii) propose and theoretically investigate novel criteria to detect sets of fluid parcels that remain the most coherent / incoherent throughout an extended time interval to quantify fluid mixing, and (iii) extend and develop new machine learning methods to infer and predict the transport features, given snapshot data about passive and active material transport. The first part of this work deals with the development of the PDE-based 'method of flow map composition', which is a novel methodology to compute the solutions of the partial differential equation describing classical advective and advective-diffusive-reactive transport. The method of composition yields solutions almost devoid of numerical errors, and is readily parallelizable. It can compute more accurate solutions in less time than traditional numerical methods. We also complete a comprehensive theoretical analysis and analytically obtain the value of the numerical timestep that minimizes the net error. The method of flow map composition is extensively benchmarked and its applications are demonstrated in several analytical flow fields and realistic data-assimilative ocean plume simulations. We then utilize the method of flow map composition to analyze Lagrangian material coherence in dynamic open domains. We develop new theory and schemes to efficiently predict the sets of fluid parcels that either remain the most or the least coherent over an extended amount of time. We also prove that these material sets are the ones to maximally resist advective stretching and diffusive transport. Thus, they are of significant importance in understanding the dynamics of fluid mixing and form the skeleton of material transport in unsteady fluid systems. The developed theory and numerical methods are utilized to analyze Lagrangian coherence in analytical and realistic scenarios. We emphasize realistic marine flows with multiple time-dependent inlets and outlets, and demonstrate applications in diverse dynamical regimes and several open ocean regions. The final part of this work investigates the machine inference and prediction of the principal transport features from snapshot data about the transport of some material quantity. Our goals include machine learning the underlying advective transport features, coherent / incoherent sets, and attracting and repelling manifolds, given the snapshots of advective and advective-diffusive material fields.
We also infer and predict high-resolution transport features by optimally combining coarse-resolution snapshot data with localized high-resolution trajectory data. To achieve these goals, we use and extend recurrent neural networks, including a combination of long short-term memory networks with hypernetworks. We develop methods that leverage our knowledge of the physical system in the design and architecture of the neural network and enforce the known constraints that the results must satisfy (e.g. mass conservation) in the training loss function. This allows us to train the networks only with partial supervision, without samples of the expected output fields, and still infer and predict physically consistent quantities. The developed theory, methods, and computational software are analyzed, validated, and applied to a variety of analytical and realistic fluid flows, including high-resolution ocean transports in the Western Mediterranean Sea.
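
The composition idea behind the method of flow map composition, in a toy two-dimensional version (our test velocity field and grid, not the thesis software): short-interval flow maps are computed on a grid, and the long-time map is obtained by interpolated composition rather than one long integration.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def vel(t, p):   # steady single-gyre test field; boundaries are invariant
    x, y = p[..., 0], p[..., 1]
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return np.stack([u, v], axis=-1)

def rk4_map(p, t0, t1, nsub=20):
    """Short-time flow map: advect points p from t0 to t1 with RK4."""
    h = (t1 - t0) / nsub
    t = t0
    for _ in range(nsub):
        k1 = vel(t, p); k2 = vel(t + h / 2, p + h / 2 * k1)
        k3 = vel(t + h / 2, p + h / 2 * k2); k4 = vel(t + h, p + h * k3)
        p = p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return p

xs = ys = np.linspace(0.0, 1.0, 101)
grid = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)
phi_a = rk4_map(grid, 0.0, 0.5)               # flow map over [0, 0.5]
phi_b = rk4_map(grid, 0.5, 1.0)               # flow map over [0.5, 1]

# Compose: the [0,1] map is phi_b evaluated at phi_a, via grid interpolation.
interp = RegularGridInterpolator((xs, ys), phi_b, bounds_error=False, fill_value=None)
phi_composed = interp(phi_a.reshape(-1, 2)).reshape(grid.shape)
print(np.abs(phi_composed - rk4_map(grid, 0.0, 1.0)).max())   # composition error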
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 251-282).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130845</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and optimization of shared mobility on demand : dynamic routing and dynamic pricing</title>
<link>https://hdl.handle.net/1721.1/130843</link>
<description>Design and optimization of shared mobility on demand : dynamic routing and dynamic pricing
Guan, Yue, Ph. D., Massachusetts Institute of Technology.
Mobility of people and goods has been critical to urban life ever since cities emerged thousands of years ago. With the ushering in of Cyber-Physical Systems enabled by the development of smart mobile devices, telecommunication technologies, as well as affordable, accessible and powerful computing resources, new paradigms are revolutionizing urban mobility. Among these, Shared Mobility on Demand Service (SMoDS) has changed the landscape of urban transportation, providing alternatives with a customized combination of affordability, flexibility, and carbon footprint. Dynamic routing and dynamic pricing are two central pillars of an SMoDS solution, where the former offers customized routes according to the specific passenger request and real-time traffic conditions, and the latter provides incentive signals that appropriately influence the passengers' subscription to the service. Although emerging SMoDS solutions have seen remarkable successes, further improvements are needed. In this thesis, we present an integrated SMoDS design with dynamic routing and dynamic pricing that introduces two major improvements over the state of the art: (i) enhanced optimality in travel times through dynamic routing with added spatial flexibility, and (ii) explicit accommodation of behavioral modelling of empowered passengers so as to lead to an accurate dynamic pricing strategy. The first part of this thesis focuses on the development of the dynamic routing framework with a new concept of space window. To accommodate the complexity introduced by the space window in the optimization of dynamic routes, we propose an algorithm based upon the Alternating Minimization (AltMin) paradigm, and demonstrate an order of magnitude improvement in computational efficiency compared to benchmarks provided by standard solvers. The second part of this thesis, related to dynamic pricing, is broken down into two modules, with the first related to behavioral modelling of empowered passengers based on Cumulative Prospect Theory (CPT). The CPT based behavioral model is able to capture the subjective and potentially irrational behaviors of passengers when deciding upon the SMoDS ride offer amidst uncertainties and risks associated with framing effects, loss aversion, diminishing sensitivity, and probability distortion. Key properties and the implications of the CPT based passenger behavioral model on dynamic pricing are discussed in detail. The second module of dynamic pricing determines the desired probability of acceptance from each passenger so as to optimize key performance indicators of the SMoDS such as the estimated waiting time. A Reinforcement Learning (RL) based approach combined with the problem formulation in the form of a Markov Decision Process (MDP) is used to estimate this desired probability of acceptance. The proposed RL algorithm deploys an integrated planning and learning architecture where the planning phase is carried out by a lookahead tree search, and the learning phase is achieved via value iteration using a neural network as the value function approximator. Two major challenges that arise in this context are the varying dimension of the underlying state and the arrival of information in a sequential manner where long-term dependency needs to be preserved.
These are addressed through the incorporation of Long Short-Term Memory (LSTM), convolutional, and fully-connected layers. Their judicious placement in the underlying neural network architecture allows the extraction of this information and successful estimation of the desired probability of acceptance that leads to the optimization of the SMoDS. A number of computational experiments are carried out using various datasets of large-scale problems and demonstrate the superior capability of the proposed RL algorithm.
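
The CPT ingredients discussed above are compact enough to state directly (standard Tversky-Kahneman functional forms with their published parameter estimates; the ride-offer numbers are invented):

import numpy as np

def cpt_value(x, alpha=0.88, lam=2.25):
    """Value function: diminishing sensitivity for gains, loss aversion for losses."""
    v = np.abs(x) ** alpha
    return np.where(x >= 0, v, -lam * v)

def cpt_weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Subjective utility of an SMoDS offer: save 8 minutes with probability p,
# lose 5 minutes otherwise.
p = np.linspace(0.01, 0.99, 5)
U = cpt_weight(p) * cpt_value(8.0) + cpt_weight(1 - p) * cpt_value(-5.0)
print(np.round(U, 2))   # where U crosses zero, the offer becomes acceptable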
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 181-192).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130843</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A scientific machine learning approach to learning reduced models for nonlinear partial differential equations</title>
<link>https://hdl.handle.net/1721.1/130748</link>
<description>A scientific machine learning approach to learning reduced models for nonlinear partial differential equations
Qian, Elizabeth Yi.
This thesis presents a new scientific machine learning method which learns from data a computationally inexpensive surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE), an enabling technology for many computational algorithms used in engineering settings. The proposed approach generalizes to the PDE setting an Operator Inference method previously developed for systems of ordinary differential equations (ODEs) with polynomial nonlinearities. The method draws on ideas from traditional physics-based modeling to explicitly parametrize the learned model by low-dimensional polynomial operators which reflect the known form of the governing PDE. This physics-informed parametrization is then united with tools from supervised machine learning to infer from data the reduced operators. The Lift &amp; Learn method extends Operator Inference to systems whose governing PDEs contain more general (non-polynomial) nonlinearities through the use of lifting variable transformations which expose polynomial structure in the PDE. The proposed approach achieves a number of desiderata for scientific machine learning formulations, including analyzability, interpretability, and making underlying modeling assumptions explicit and transparent. This thesis therefore provides analysis of the Operator Inference and Lift &amp; Learn methods in both the spatially continuous PDE and spatially discrete ODE settings. Results are proven regarding the mean square errors of the learned models, the impact of spatial and temporal discretization, and the recovery of traditional reduced models via the learning method. Sensitivity analysis of the operator inference problem to model misspecifications and perturbations in the data is also provided. The Lift &amp; Learn method is demonstrated on the compressible Euler equations, the FitzHugh-Nagumo reaction-diffusion equations, and a large-scale three-dimensional simulation of a rocket combustion experiment with over 18 million degrees of freedom. For the first two examples, the Lift &amp; Learn models achieve 2-3 orders of magnitude dimension reduction and match the generalization performance of traditional reduced models based on Galerkin projection of the PDE operators, predicting the system evolution with errors between 0.01% and 1% relative to the original nonlinear simulation. For the combustion application, the Lift &amp; Learn models accurately predict the amplitude and frequency of pressure oscillations as well as the large-scale structures in the flow field's temperature and chemical variables, with 5-6 orders of magnitude dimension reduction and 6-7 orders of magnitude computational savings. The demonstrated ability of the Lift &amp; Learn models to accurately approximate the system evolution with orders-of-magnitude dimension reduction and computational savings makes the learned models suitable for use in many-query computations used to support scientific discovery and engineering decision-making.
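
Operator Inference in miniature (a toy quadratic system of our making, not the thesis code): given state snapshots and their time derivatives, the low-dimensional operators are recovered by linear least squares.

import numpy as np
rng = np.random.default_rng(6)

r, m = 3, 400
A_true = -np.eye(r) + 0.1 * rng.normal(size=(r, r))
Htens = rng.normal(size=(r, r, r))
Htens = 0.5 * (Htens + Htens.transpose(0, 2, 1))     # symmetric quadratic operator
H_true = 0.05 * Htens.reshape(r, r * r)

X = rng.normal(size=(r, m))                          # state snapshots
Kron = np.einsum("im,jm->ijm", X, X).reshape(r * r, m)   # quadratic features
Xdot = A_true @ X + H_true @ Kron                    # matching time derivatives

D = np.vstack([X, Kron]).T                           # data matrix, one row per snapshot
O = np.linalg.lstsq(D, Xdot.T, rcond=None)[0].T      # recover [A, H] jointly
A_hat, H_hat = O[:, :r], O[:, r:]
print(np.abs(A_hat - A_true).max(), np.abs(H_hat - H_true).max())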
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 165-172).
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/130748</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel, asynchronous ray-tracing for scalable, 3D, full-core method of characteristics neutron transport on unstructured mesh</title>
<link>https://hdl.handle.net/1721.1/129911</link>
<description>Parallel, asynchronous ray-tracing for scalable, 3D, full-core method of characteristics neutron transport on unstructured mesh
Gaston, Derek Ray.
One important goal in nuclear reactor core simulations is the computation of detailed 3D power distributions that will enable higher confidence in licensing of next-generation reactors and lifetime extensions/power up-rates for current-generation reactors. To date, there have been only a few demonstrations of such high-fidelity deterministic neutron transport calculations. However, as computational power continues to grow, such capabilities continue to move closer to being practically realized. Predictive reactor physics needs both neutronics calculations and full-core, 3D coupled multiphysics simulations (e.g., neutronics, fuel performance, fluid mechanics, structural mechanics). Therefore, new reactor physics tools should harness supercomputers to enable full-core reactor simulations and be capable of coupling for multiphysics feedback. One candidate for full-core nuclear reactor neutronics is the method of characteristics (MOC). Recent advancements have seen a pellet-resolved 3D MOC solution for the BEAVRS benchmark. However, MOC is traditionally implemented using constructive solid geometry (CSG) that makes it difficult (if not impossible) to accurately deform material to capture physical feedback effects such as fuel pin thermal expansions, assembly bowings, or core flowering. An alternative to CSG is to use unstructured, finite-element mesh for spatial discretization of MOC. Such mesh-based geometries permit directly linking to unstructured mesh-based multiphysics tools, such as fuel performance. Utilizing unstructured mesh has been attempted in the past, but those attempts have fallen short of producing usable 3D reactor simulators. Several key issues have hindered these attempts: lack of fuel volume preservation, approximations of boundary conditions, inefficient spatial domain decompositions, excessive memory requirements, ineffective parallel load balancing, and lack of scalability on massively parallel modern computer clusters. This thesis resolves these issues by developing a massively parallel, 3D, full-core MOC code, called MOCkingbird, using unstructured meshes. Underpinning MOCkingbird is a new algorithm for parallel ray tracing: the Scalable Massively Asynchronous Ray Tracing (SMART) algorithm. This algorithm enables efficient parallel ray-tracing across the full reactor domain, alleviating issues of reduced convergence associated with standard parallel MOC algorithms. In addition, to enable full-core simulation using unstructured mesh MOC, several new algorithms are developed, including reactor mesh generation, sparse parallel communication, parallel cyclic track generation, and weighted partitioning. Within this work MOCkingbird and SMART are tested for scalability from 10 to 20,000 cores on the Lemhi supercomputer at Idaho National Laboratory. Accuracy is tested using a suite of benchmarks that ultimately culminate in a first-of-a-kind, 3D, full-core, simulation of the BEAVRS benchmark using unstructured mesh MOC.
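
The core MOC update itself is compact (a flat-source, single-track toy of ours, nothing like MOCkingbird's parallel machinery): the angular flux is attenuated segment by segment along a characteristic, and each segment's average flux is tallied into its region.

import numpy as np

sigma_t = np.array([0.5, 1.2, 0.5])      # total cross section per region (1/cm)
q       = np.array([1.0, 0.2, 1.0])      # flat isotropic source per region
phi     = np.zeros(3)                     # scalar-flux tallies

def sweep(psi, regions, lengths, weight):
    """Transport sweep along one characteristic crossing flat-source regions."""
    for r, s in zip(regions, lengths):
        att = np.exp(-sigma_t[r] * s)
        psi_out = psi * att + (q[r] / sigma_t[r]) * (1.0 - att)
        psi_avg = q[r] / sigma_t[r] + (psi - psi_out) / (sigma_t[r] * s)
        phi[r] += weight * s * psi_avg   # volume-weighted tally (normalize later)
        psi = psi_out
    return psi

# One track crossing region 0, then 1, then 2, with a vacuum boundary (psi = 0).
exit_flux = sweep(0.0, regions=[0, 1, 2], lengths=[2.0, 1.0, 2.0], weight=1.0)
print(phi, exit_flux)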
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 213-224).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129911</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Provably convergent anisotropic output-based adaptation for continuous finite element discretizations</title>
<link>https://hdl.handle.net/1721.1/129891</link>
<description>Provably convergent anisotropic output-based adaptation for continuous finite element discretizations
Carson, Hugh Alexander.
The expansion of modern computing power has seen a commensurate rise in the reliance on numerical simulations for engineering and scientific purposes. Output error estimation combined with metric-based mesh adaptivity provides a powerful means of quantifiably controlling the error in these simulations, for output quantities of interest to engineers and scientists. The Mesh Optimization via Error Sampling and Synthesis (MOESS) algorithm, developed by Yano for Discontinuous Galerkin (DG) discretization, is a highly effective method of this class. This work begins with the extension of the MOESS algorithm to Continuous Galerkin (CG) discretization, which requires fewer Degrees Of Freedom (DOF) on a given mesh compared to DG. The algorithm utilizes a vertex-based local error decomposition, and an edge-based local solve process in contrast to the element-centric construction of the original MOESS algorithm. Numerical results for linear problems in two and three dimensions demonstrate the improved DOF efficiency for CG compared to DG on adapted meshes. A proof of convergence for the new MOESS extension is then outlined, entailing the description of an abstract metric-conforming mesh generator. The framework of the proof is rooted in optimization, and its construction enables a proof of higher-order asymptotic rate of convergence irrespective of singularities. To the author's knowledge, this is the first such proof for a Metric-based Adaptive Finite Element Method in the literature. A three-dimensional Navier-Stokes simulation of a delta wing is then used to compare the new formulation to the original MOESS algorithm. The required stabilization of the CG discretization is performed using a new stabilization technique: Variational Multi-Scale with Discontinuous sub-scales (VMSD). Numerical results confirm that VMSD adapted meshes require significantly fewer DOFs to achieve a given error level when compared to DG adapted meshes; these DOF savings are shown to translate into a reduction in overall CPU time and memory usage for a given accuracy.
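
For contrast with MOESS's error-sampling approach, the classical Hessian-based recipe for an anisotropic metric fits in a few lines (our simplified version, not MOESS itself):

import numpy as np

def hessian_metric(H, eps=1e-3, h_min=1e-4, h_max=1.0):
    """Anisotropic metric from a solution Hessian: M = |H|/eps, with eigenvalues
    clipped so the implied mesh sizes stay within [h_min, h_max]."""
    w, V = np.linalg.eigh(0.5 * (H + H.T))
    lam = np.abs(w) / eps                       # metric eigenvalue scales as 1/h^2
    lam = np.clip(lam, h_max ** -2, h_min ** -2)
    return V @ np.diag(lam) @ V.T

# Boundary-layer-like solution u = exp(-y/delta): strong curvature in y only,
# so the metric requests thin elements in y and long ones in x.
delta = 1e-2
Hess = np.array([[0.0, 0.0], [0.0, 1.0 / delta ** 2]])
M = hessian_metric(Hess)
sizes = 1.0 / np.sqrt(np.linalg.eigvalsh(M))    # implied element size per direction
print(sizes)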
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages [123]-131).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129891</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and assessment of a physics-based model for subcooled flow boiling with application to CFD</title>
<link>https://hdl.handle.net/1721.1/129051</link>
<description>Development and assessment of a physics-based model for subcooled flow boiling with application to CFD
Kommajosyula, Ravikishore.
Boiling is an extremely efficient mode of heat transfer and is the preferred heat removal mechanism in power systems in general and, more recently, in electronics cooling. Physics-based models that describe boiling heat transfer, when coupled with Computational Fluid Dynamics (CFD), can be an invaluable tool to increase the performance of such systems. Existing modeling approaches do not incorporate all relevant heat transfer mechanisms at the wall, limiting their predictive capability and general applicability. These shortcomings restrict the application of CFD in the design process. For the nuclear industry, this means having to rely on expensive experimental campaigns to develop and license new reactor designs. A second-generation mechanistic heat flux partitioning framework developed in our group provides an enhanced physical description of flow boiling. It introduces several mechanisms not accounted for in previous formulations, such as 1) bubbles sliding on the heater surface, 2) interaction of nucleation sites and 3) microlayer evaporation. The framework requires describing the complete bubble ebullition cycle, including bubble nucleation, growth, and departure through closure models, which are currently lacking. This thesis extends the framework into a closed formulation by developing closure models that adequately represent the underlying physics. New models for predicting the bubble departure diameter and frequency are developed based on insights gathered from experiments and direct numerical simulations. An assessment against existing approaches to model boiling heat transfer demonstrates the model's ability to predict over 80% of the boiling curves within a 20% error, while also capturing the correct trends with flow conditions. The model implementation in commercial CFD software is demonstrated using data from the Bartolomei experiment. The extendability of the model to novel heater surfaces is further demonstrated for a sapphire heater substrate, where fewer cavities for nucleation shift the boiling curves to considerably higher wall superheats. This mechanistic representation of boiling heat transfer has the potential to support predictive design with optimal boiling heat transfer for improved system efficiency, with the specific objective to accelerate the development of novel nuclear fuel concepts.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 113-119).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129051</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A molecular dynamics study of the tribological properties of diamond like carbon</title>
<link>https://hdl.handle.net/1721.1/129046</link>
<description>A molecular dynamics study of the tribological properties of diamond like carbon
Swisher, Mathew M.
Diamond-like carbon (DLC) is an attractive choice as a coating for mechanical components, because of its excellent wear resistance and very low coefficient of friction (COF). We use molecular dynamics (MD) simulations with a reactive force field (ReaxFF) to study the friction and wear between DLC counterfaces, both in comparison to and in contact with steel counterfaces. We show that the tribological properties of DLC in dry sliding friction are heavily dependent on both the structure of the DLC as well as the passivation layer that forms on the sliding counterfaces under different environmental conditions, and that when optimizing for the lowest COF the best structure for the DLC depends on the type of passivation layer. We also find that, by preventing bonding across the counterfaces as the thin film of lubricant is squeezed out at the point of contact, the passivation layer is instrumental in the material's ability to resist scuffing and wear. Additionally, we find that the strength and hardness of DLC makes damaging the passivation layer due to contact forces unlikely under real world conditions. Finally, we use MD simulations to study in more detail the transition from lubricated to dry friction, and in particular, the role of DLC surface chemistry and the resulting passivation layer in this transition. Our work shows that the frictional force can be described quite accurately across the transition from pure slip (dry friction) to the purely hydrodynamic regime using a simple model which superposes the two effects, provided it also accounts for any immobile fluid layers at the fluid-solid interface. We show that, for water lubrication, the transition from the pure slip to the purely hydrodynamic regime occurs at smaller lengthscales in DLC counterfaces compared with steel.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 103-111).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129046</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast modeling of multi-phase mixture transport in piston/ring/liner system via GAN-augmented progressive modeling</title>
<link>https://hdl.handle.net/1721.1/129038</link>
<description>Fast modeling of multi-phase mixture transport in piston/ring/liner system via GAN-augmented progressive modeling
Zhang, Qin, Ph. D., Massachusetts Institute of Technology.
As a continued effort to advance the understanding of the power cylinder system and its design capabilities, we develop a modeling framework for multi-phase macro mixture transport that integrates all length scales, time scales and flow regimes using a hybrid approach combining deterministic modeling and machine learning. This framework considers various mechanical and physical processes including ring dynamics, gas flow, oil redistribution and multi-phase transport to paint a detailed picture of the global lubrication environment in the piston/ring/liner system. The main contributions of this thesis can be summarized as follows: 1) a modular architecture that decouples various processes to manage complex dependencies; 2) fast inference of flow separation and vortices near ring gaps by a physics-informed Generative Adversarial Network; and 3) a lower-bound estimation of oil consumption based on the "healthy system" oil distribution pattern. This thesis provides a powerful modeling methodology that can achieve fast modeling and monitoring of oil consumption and PM emissions from IC engines, which is of immediate economic, environmental and health concern.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 177-183).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129038</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling of piston pin lubrication in internal combustion engines</title>
<link>https://hdl.handle.net/1721.1/129019</link>
<description>Modeling of piston pin lubrication in internal combustion engines
Meng, Zhen, Ph. D., Massachusetts Institute of Technology.
The piston pin joins the piston and the connecting rod to transfer the linear force on the piston into rotation of the crankshaft, the eventual power output of the engine. The interfaces between the piston pin and the pin bore, as well as the connecting rod small end, are among the most heavily loaded tribo-pairs in engines. Piston pin seizure still occurs often during engine development, and the solution often comes from applying expensive coatings. Furthermore, it has been found that the friction loss associated with the pin can be a significant contributor to the total engine mechanical loss. Yet a basic understanding of the lubrication behavior of the pin interfaces is lacking. This work aims to develop a piston pin lubrication model with consideration of all the important mechanical processes. The model predicts the dynamics of the pin and the lubrication of the interfaces between the pin and the pin bore as well as the small end. The model couples the dynamics of the pin with the structural deformation of the mating parts, the hydrodynamic and boundary lubrication of all the interfaces, and oil transport. The model is successfully implemented with an efficient and robust numerical solver with second-order accuracy to compute this highly stiff system. The preliminary results of applying the model to a gasoline engine show that boundary lubrication is the predominant contributor to the total friction. As a result, the interface with more asperity contact tends to hold the pin with it. Thus, the pin friction loss comes from the interface with less contact. Solely from a friction-reduction point of view, ensuring efficient hydrodynamic lubrication in one interface is sufficient. Furthermore, as the heavy load is supported in several small areas, the mechanical and thermal deformation of all the parts is critical to load distribution, oil transport, and the generation of hydrodynamic and asperity contact pressure, underscoring the necessity of the elements integrated in the model. This work represents the first step toward establishing a more comprehensive engineering model that helps the industry understand pin lubrication and find cost-effective solutions to overcome the existing challenges.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 120-121).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/129019</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D organ property mapping using freehand ultrasound scans</title>
<link>https://hdl.handle.net/1721.1/128989</link>
<description>3D organ property mapping using freehand ultrasound scans
Benjamin, Alex (Alex Robert)
3D organ property mapping has gained a considerable amount of interest in recent years because of its diagnostic and clinical significance. Existing methods for 3D property mapping include computed tomography (CT), magnetic resonance imaging (MRI), and 3D ultrasound (3DUS). These methods, while capable of producing 3D maps, suffer from one or more of the following drawbacks: high cost, long scan times, computational complexity, use of ionizing radiation, lack of portability, and the need for bulky equipment. We propose the development of a framework that allows for the creation of 3D property maps (specifically structure and speed of sound) at the point of care. A fusion of multiple low-cost sensors in a Bayesian framework localizes a conventional 1D-ultrasound probe with respect to the room or the patient's body; localizing the probe relative to the body is achieved by using the patient's superficial vasculature as a natural encoding system. Segmented 2D ultrasound images and quantitative 2D speed of sound maps obtained using numeric inversion are stitched together to create 3D property maps. A further advantage of this framework is that it provides clinicians with dynamic feedback during freehand scans; specifically, it dynamically updates the underlying structural or property map to reflect high and low uncertainty regions. This allows clinicians to repopulate regions with additional scans. Lastly, the method also allows for the registration and comparison of longitudinally acquired 3D property/structural maps.
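
The sensor-fusion component in its simplest form (a one-dimensional constant-velocity Kalman filter we wrote for illustration; all noise levels are invented): inertial-style prediction fused with intermittent vision-based position fixes localizes the probe.

import numpy as np

dt = 0.02                                   # 50 Hz update rate
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity motion model
Q = np.diag([1e-6, 1e-4])                   # process noise (drift)
Hobs = np.array([[1.0, 0.0]])               # a vision fix observes position only
R = np.array([[1e-4]])

x = np.zeros(2)                             # state: [position, velocity]
P = np.eye(2) * 1e-2
rng = np.random.default_rng(7)
true_pos = 0.0
for k in range(200):
    true_pos += 0.05 * dt                   # probe sliding at 5 cm/s
    x, P = F @ x, F @ P @ F.T + Q           # predict from the motion model
    if k % 25 == 0:                         # a vision-based fix every 0.5 s
        z = true_pos + rng.normal(0.0, 1e-2)
        S = Hobs @ P @ Hobs.T + R
        K = P @ Hobs.T @ np.linalg.inv(S)
        x = x + (K @ (z - Hobs @ x)).ravel()
        P = (np.eye(2) - K @ Hobs) @ P
print(x, true_pos)                          # estimated vs. true probe position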
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 141-151).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/128989</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Covariance estimation on matrix manifolds</title>
<link>https://hdl.handle.net/1721.1/127063</link>
<description>Covariance estimation on matrix manifolds
Musolas Otaño, Antoni M. (Antoni Maria)
The estimation of covariance matrices is a fundamental problem in multivariate analysis and uncertainty quantification. Covariance matrices are an essential modeling tool in climatology, econometrics, model reduction, biostatistics, signal processing, and geostatistics, among other applications. In practice, covariances often must be estimated from samples. While the sample covariance matrix is a consistent estimator, it performs poorly when the relative number of samples is small; improved estimators that impose structure must be considered. Yet standard parametric covariance families can be insufficiently flexible for many applications, and non-parametric approaches may not easily allow certain kinds of prior knowledge to be incorporated. In this thesis, we harness the structure of the manifold of symmetric positive-(semi)definite matrices to build families of covariance matrices out of geodesic curves. These covariance families offer more flexibility for problem-specific tailoring than classical parametric families, and are preferable to simple convex combinations. Moreover, the proposed families can be interpretable: the internal parameters may serve as explicative variables for the problem of interest. Once a covariance family has been chosen, one typically needs to select a representative member by solving an optimization problem, e.g., by maximizing the likelihood associated with a data set. Consistent with the construction of the covariance family, we propose a differential geometric interpretation of this problem: minimizing the natural distance on the covariance manifold. Our approach does not require assuming a particular probability distribution for the data. Within this framework, we explore two different estimation settings. First, we consider problems where representative "anchor" covariance matrices are available; these matrices may result from offline empirical observations or computational simulations of the relevant spatiotemporal process at related conditions. We connect multiple anchors to build multi-parametric covariance families, and then project new observations onto this family--for instance, in online estimation with limited data. We explore this problem in the full-rank and low-rank settings. In the former, we show that the proposed natural distance-minimizing projection and maximum likelihood are locally equivalent up to second order. In the latter, we devise covariance families and minimization schemes based on generalizations of multi-linear and Bézier interpolation to the appropriate manifold. Second, for problems where anchor matrices are unavailable, we propose a geodesic reformulation of the classical shrinkage estimator: that is, we construct a geodesic family that connects the identity (or any other target) matrix to the sample covariance matrix and minimize the expected natural distance to the true covariance. The proposed estimator inherits the properties of the geodesic distance, for instance, invariance to inversion. Leveraging previous results, we propose a solution heuristic that compares favorably with recent non-linear shrinkage estimators. We demonstrate these covariance families and estimation approaches in a range of synthetic examples, and in applications including wind field modeling and groundwater hydrology.
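
The geodesic construction is concrete (a two-anchor sketch under the standard affine-invariant metric; the formulas are textbook ones, the matrices are our toys):

import numpy as np
from scipy.linalg import sqrtm, logm, fractional_matrix_power

def geodesic(A, B, t):
    """Affine-invariant geodesic on the SPD manifold, A at t=0, B at t=1."""
    As = sqrtm(A)
    Ais = np.linalg.inv(As)
    return np.real(As @ fractional_matrix_power(Ais @ B @ Ais, t) @ As)

def dist(A, B):
    """Natural distance; invariant under congruence and matrix inversion."""
    Ais = np.linalg.inv(sqrtm(A))
    return np.linalg.norm(logm(Ais @ B @ Ais), "fro")

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # two "anchor" covariances
B = np.array([[1.0, -0.3], [-0.3, 3.0]])
mid = geodesic(A, B, 0.5)                # a member of the one-parameter family
print(dist(A, mid), dist(mid, B))        # equal halves of dist(A, B)
print(dist(A, B), dist(np.linalg.inv(A), np.linalg.inv(B)))   # inversion invariance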
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 135-150).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127063</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing internet of things experience in augmented reality environments</title>
<link>https://hdl.handle.net/1721.1/127062</link>
<description>Enhancing internet of things experience in augmented reality environments
Sun, Yongbin,Ph. D.Massachusetts Institute of Technology.
Seamless perception of objects' physical properties, such as temperature, is key to improving the way we live and work. Thanks to the rapid development of sensor technology, the Internet of Things (IoT) is shaping our world by expanding digital connectivity to real objects. In this way, physical properties of objects can be effectively collected, processed, transmitted and shared. Yet only being able to sense the surrounding environment is not enough: a user-friendly way to visualize the information is also required. Today, Augmented Reality (AR), which overlays digital information onto physical objects, is growing fast and has been adopted successfully in many fields. This thesis focuses on fusing the advantages of these technologies to create a better IoT experience in AR environments.; First, we describe an integrated system that enhances users' IoT experience in AR environments: users can directly visualize objects' physical properties and control IoT devices in an immersive manner. The system localizes in-view target objects from their natural appearances, without fiducial markers such as QR codes, enabling a more seamless user experience. Second, existing handcrafted computer vision methods can estimate objects' poses only in simple cases (i.e., textured patterns or simple shapes) and usually fail in complex ones. Recently, deep learning has shown promise in handling a variety of tasks in a data-driven manner. In this thesis, 3D deep learning models are explored to estimate objects' pose parameters more accurately, providing the robustness and accuracy needed to support IoT-AR applications.; Third, the standard deep learning training pipeline for object pose estimation is supervised, requiring ground-truth pose parameters to be known. Manually obtaining such data is time-consuming and expensive, making it hard to scale. As the last contribution, methods using synthetic data are studied to train object pose estimation models automatically, without human labelling.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 111-125).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127062</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-constrained machine learning strategies for turbulent flows and bubble dynamics</title>
<link>https://hdl.handle.net/1721.1/127061</link>
<description>Physics-constrained machine learning strategies for turbulent flows and bubble dynamics
Wan, Zhong Yi,Ph. D.Massachusetts Institute of Technology.
Machine learning (ML) has in recent years become a pervasive trend in almost every science and engineering discipline. It enables scientists and engineers to make decisions or draw conclusions directly from information extracted from data, bypassing the need to unravel the delicate inner workings of the underlying phenomena. This, however, comes at the expense of having to search through an immense space of potential architectures and parameters for an optimized model that not only provides the best description of the available data but also applies to unseen cases. To cope with such difficulties, it is imperative that ample constraints be imposed on the architecture and parameter space, in order to facilitate efficient and generalizable learning. For physical systems, first-principles knowledge makes up a natural set of constraints that should be integrated into the ML system.; Past research efforts have mainly emphasized utilizing physical knowledge for cases where the system states are perfectly defined. In reduced-order settings, this condition is not satisfied; consequently, the existing physical knowledge is often incomplete and/or compromised in accuracy. As a result, it remains a challenge to effectively leverage such knowledge in the design and implementation of ML frameworks. The objective of this thesis is to address these gaps and present physics-constrained ML strategies that work specifically in reduced-order spaces. The first part of this thesis focuses on using ML to improve imperfect reduced-order models obtained from two common methods: (i) orthogonal subspace projection and (ii) slow-manifold reduction. The first case is particularly important for the modeling of rare extreme events such as those found in the geophysical sciences.; In these problems, there is typically little data available to train a data-driven model, while reduced-order models of the full equations also fail to capture the relevant dynamics. We present a modeling strategy that allows the ML and physical models to complement each other. Specifically, physics-based equations are projected onto a subspace that contains critical dynamical components associated with rare events and then combined with a data-driven model. In this way, the projected equations assist in modeling these events, which appear less frequently in data streams. On the other hand, observations are often plentiful in other regions of the state space, allowing ML to capture dynamics unaccounted for by the projected equations. The effectiveness of this strategy is demonstrated through the prediction and modeling of extreme dissipation events in turbulent fluid flows.; Next we present a strategy for improving slow-manifold reduced-order models, suitable for systems with separated time scales. Our strategy employs ML to model the 'fast variables' of the system in terms of the 'slow variables', which can then be integrated with the equation-based dynamics of the latter to provide a complete description of the system evolution. In this way, we constrain ML to a specific part of the dynamics that cannot be easily derived or expressed analytically. We demonstrate the strategy through the modeling of finite-size (inertial) particle dynamics in generic fluid flows, a problem of critical importance for modeling bubbles in multi-phase flows, aerosols in the atmosphere, and ocean drifters.
We first utilize training data obtained from the classical Maxey-Riley equation of motion and then from high-fidelity multi-phase direct numerical simulations. In both cases, we show that the kinematics, i.e., the relationship between position (slow variable) and velocity (fast variable), can be effectively learned from limited trajectories and directly utilized to model interactions with complex, turbulent flows. We carefully study the transferability of the ML models to different fluid flows and different parameters. To deal with the problem of limited data, the ML approach is complemented with a data-augmentation technique that enforces physical symmetries of the problem, such as isotropy of the particle dynamics. In the second part of the thesis, we consider a complementary problem related to the parameterization of the unmodeled variables of a reduced-order model with respect to the modeled, dynamically resolved reduced-order states. This problem is particularly meaningful when explicit dynamical modeling of the target variables is prohibitive.; We present a formulation of this problem in the context of atmospheric modeling, where the spatially small-scale features are much harder to measure and model than the corresponding large scales, due to their high intrinsic dimensionality and the lack of predictability resulting from instabilities. To address these issues, we introduce the Stochastic Machine-Learning (SMaL) parameterization framework, which decomposes the time series of the small scales into a deterministic (predictable) and a stochastic (unpredictable) component. The deterministic component is directly captured with ML in terms of the large-scale time series. The local-in-time statistics of the stochastic components, on the other hand, are estimated and learned with separate models. We then construct a non-stationary Gaussian process that enables efficiently drawing an ensemble of small-scale trajectories consistent with the large scales.; The SMaL framework is illustrated on a realistic application, where small-scale vorticity fields are parameterized in terms of large-scale vorticity and temperature data from reanalysis over Europe. We show that the small-scale random samples exhibit realistic characteristics in terms of the spatial spectrum, single-point probability density functions, and temporal spectral content.
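A toy sketch of the slow-manifold closure strategy (the fast-slow system, the manifold y = x^2, and the use of plain least squares in place of an ML regressor are all invented for illustration):

import numpy as np

# Fast variable y relaxes quickly onto a slow manifold y = g(x); we learn
# g from trajectory samples, then close the slow equation dx/dt = -y.
rng = np.random.default_rng(1)
x_data = rng.uniform(-2.0, 2.0, 400)
y_data = x_data**2 + 0.01 * rng.standard_normal(400)   # noisy manifold samples

Phi = np.vander(x_data, 3, increasing=True)            # features 1, x, x^2
coef, *_ = np.linalg.lstsq(Phi, y_data, rcond=None)    # least-squares 'training'
g_hat = lambda x: coef[0] + coef[1] * x + coef[2] * x * x

x, dt = 1.5, 1e-3
for _ in range(2000):                                  # forward Euler on the
    x = x + dt * (-g_hat(x))                           # closure-completed slow ODE
print("slow state after closure-based integration:", x)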
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 155-163).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127061</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>New overlapping finite elements and their application in the AMORE paradigm</title>
<link>https://hdl.handle.net/1721.1/127051</link>
<description>New overlapping finite elements and their application in the AMORE paradigm
Huang, Junbin,Ph. D.Massachusetts Institute of Technology.
The finite element method has become a fundamental analysis tool for modern sciences and engineering. Despite the great improvement in theory and application over the past decades, the need for regular conforming meshes in finite element analysis still requires much human effort in engineering practice. In this thesis we focus on designing novel finite element procedures to reduce the meshing effort expended on constructing a finite element model for solids and structures. The new meshing paradigm of "automatic meshing with overlapping and regular elements", the AMORE paradigm, has recently been formulated. In this paradigm, the finite elements interior to the domain of interest are undistorted traditional elements and overlapping of elements is used for the discretization near the boundaries. The overlapping of elements gives much freedom to the meshing procedure and results in a much reduced meshing effort. Two types of overlapping are investigated.; In the first case we consider the overlapping of individual polygonal elements and propose new quadrilateral overlapping finite elements. The new formulation combines advantageous aspects from both traditional finite elements and meshless methods. The new overlapping finite elements, being insensitive to mesh distortions and giving high-order accuracy, are used to mesh the boundary regions. Such use leads to an effective meshing procedure as desired. In the second case we study the overlapping of conforming finite element meshes. Each individual mesh is spanned over a regular subdomain and is allowed to overlap with other meshes in any geometric form. Local fields on individual meshes are then assembled using a partition of unity to give the global compatible field. This new scheme allows very convenient local meshing and enriching so that the meshes can be easily adapted to various geometric features and solution gradients with a reasonable computational expense.; We formulate new schemes, analyze their convergence properties, and demonstrate their performance and their use in AMORE in the solution of various problems.
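The partition-of-unity assembly used in the second scheme can be illustrated in one dimension (the meshes, the overlap region [0.4, 0.6], and the ramp weights below are invented; this is a sketch of the blending idea, not the thesis formulation):

import numpy as np

# Two overlapping 1D meshes each carry a local interpolant of the same field;
# weights with w1 + w2 = 1 blend them into one compatible global field.
x = np.linspace(0.0, 1.0, 201)
exact = np.sin(np.pi * x)

nodes1 = np.linspace(0.0, 1.0, 13)                  # coarse mesh 1
nodes2 = np.linspace(0.0, 1.0, 41)                  # finer mesh 2
u1 = np.interp(x, nodes1, np.sin(np.pi * nodes1))   # local field on mesh 1
u2 = np.interp(x, nodes2, np.sin(np.pi * nodes2))   # local field on mesh 2

w1 = 1.0 - np.clip((x - 0.4) / 0.2, 0.0, 1.0)       # ramp across the overlap
u_global = w1 * u1 + (1.0 - w1) * u2                # weights sum to one
print("max error of blended field:", np.abs(u_global - exact).max())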
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 129-134).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/127051</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A two-step port-reduced reduced-basis component method for time domain elastodynamic PDE with application to structural health monitoring</title>
<link>https://hdl.handle.net/1721.1/125483</link>
<description>A two-step port-reduced reduced-basis component method for time domain elastodynamic PDE with application to structural health monitoring
Bhouri, Mohamed Aziz.
We present a two-step parameterized Model Order Reduction (pMOR) technique for elastodynamic Partial Differential Equations (PDE). pMOR techniques for parameterized time domain PDEs offer opportunities for faster solution estimation. However, due to the curse of dimensionality, basic pMOR techniques fail to provide sufficiently accurate approximations when applied to large geometric domains with multiple localized excitations. Moreover, considering the time domain PDE for the construction of the reduced basis greatly increases the computational cost of the offline stage, and the treatment of hyperbolic PDEs suffers from pessimistic error bounds. Therefore, within the context of linear time domain PDEs for large domains with localized sources, it is of great interest to develop a pMOR approach that provides relatively low-dimensional spaces and guarantees sufficiently accurate approximations.; Towards that end, we develop a two-step Port-Reduced Reduced-Basis Component approach (PR-RBC) for linear time domain PDEs. First, our approach takes advantage of the domain decomposition technique to develop reduced bases for subdomains, which, when assembled, form the domain of interest. This reduces the effective dimensionality of the parameter spaces and mitigates the curse of dimensionality. Moreover, the time domain solution is the inverse Laplace transform of a frequency domain function. Therefore, we can approximate the time domain solution as a linear combination of the PR-RBC solutions to the frequency domain PDE. Hence, we first apply the PR-RBC method to the elliptic frequency domain PDE. Second, we use the resulting approximations to form a reduced space for the time solver. We apply our two-step PR-RBC approach to a Simulation-Based Classification task for Structural Health Monitoring of deployed mechanical structures such as bridges.; For this task, we consider random ambient-local excitation with probabilistic nuisance parameters. We build time-domain cross-correlation-based features and apply several state-of-the-art machine learning algorithms to perform damage detection on the structure. In our many-query context, the quality of the classification task is enhanced by the sufficiently large synthetic training dataset and the accuracy of the numerical solutions, both obtained thanks to the two-step PR-RBC approach, which reduces the computational burden associated with the construction of such a dataset.
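The reduced-basis step that the PR-RBC approach builds on (snapshot compression followed by Galerkin projection) can be sketched on a toy parameterized system; the operator A(w) = K + w*M below is an invented stand-in for a component's frequency-domain operator, not the thesis formulation:

import numpy as np

n = 200
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
M = np.eye(n)
f = np.zeros(n)
f[n // 2] = 1.0                                          # localized source

train = np.linspace(0.1, 1.0, 15)                        # training parameters
snaps = np.column_stack([np.linalg.solve(K + w * M, f) for w in train])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
V = U[:, :5]                                             # 5-dim reduced basis

w_new = 0.55                                             # unseen parameter
A = K + w_new * M
u_rb = V @ np.linalg.solve(V.T @ A @ V, V.T @ f)         # Galerkin-reduced solve
u_full = np.linalg.solve(A, f)
print("relative RB error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))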
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 245-250).
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125483</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A container-based lightweight fault tolerance framework for high performance computing workloads</title>
<link>https://hdl.handle.net/1721.1/124188</link>
<description>A container-based lightweight fault tolerance framework for high performance computing workloads
Sindi, Mohamad(Mohamad Othman)
According to the latest TOP500 list of the world's fastest supercomputers, ~90% of the top High Performance Computing (HPC) systems are based on commodity hardware clusters, which are typically designed for performance rather than reliability. The Mean Time Between Failures (MTBF) for some current petascale systems has been reported to be several days, while studies estimate it may be less than 60 minutes for future exascale systems. One of the largest studies on HPC system failures showed that more than 50% of failures were due to hardware, and that failure rates grew with system size. Hence, running extended workloads on such systems is becoming more challenging as system sizes grow. In this work, we design and implement a lightweight fault tolerance framework to improve the sustainability of running workloads on HPC clusters. The framework mainly includes a fault prediction component and a remedy component.; The fault prediction component is implemented using a parallel algorithm that proactively predicts hardware issues with no overhead. This allows remedial actions to be taken before failures impact workloads. The algorithm uses machine learning applied to supercomputer system logs. We test it on actual logs from systems at Sandia National Laboratories (SNL). The massive logs come from three supercomputers and consist of ~750 million entries (~86 GB of data). The algorithm is also tested online on our test cluster. We demonstrate the algorithm's high accuracy and performance in predicting cluster nodes with potential issues. The remedy component is implemented using Linux container technology. Container technology has proven its success in the microservices domain. We adapt it to HPC workloads to make use of its resilience potential.; By running workloads inside containers, we are able to migrate workloads from nodes predicted to have hardware issues to healthy nodes while the workloads are running. This does not introduce any major interruption or performance overhead to the workload, nor does it require application modification. We test with multiple real HPC applications that use the Message Passing Interface (MPI) standard. Tests are performed on various cluster platforms using different MPI types. Results demonstrate successful migration of HPC workloads while maintaining the integrity of the results produced.
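The flavor of the log-based prediction component can be sketched with a bag-of-words classifier (the log lines and labels below are invented, not drawn from the SNL datasets, and scikit-learn stands in for the thesis's parallel algorithm):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

logs = [
    "kernel: EDAC MC0: correctable ECC error on DIMM",
    "kernel: EXT4-fs error device sda1 journal aborted",
    "slurmd: job 1234 completed successfully",
    "systemd: started session for user hpcuser",
    "mce: hardware error machine check event logged",
    "sshd: accepted publickey for hpcuser",
]
labels = np.array([1, 1, 0, 0, 1, 0])    # 1 = precursor of node trouble

vec = CountVectorizer()
X = vec.fit_transform(logs)              # token-count features per log line
clf = LogisticRegression().fit(X, labels)

new = ["kernel: EDAC MC1: correctable ECC error on DIMM"]
print("fault probability:", clf.predict_proba(vec.transform(new))[0, 1])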
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 122-130).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/124188</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An inverse problem framework for reconstruction of phonon properties using solutions of the Boltzmann transport equation</title>
<link>https://hdl.handle.net/1721.1/123770</link>
<description>An inverse problem framework for reconstruction of phonon properties using solutions of the Boltzmann transport equation
Forghani, Mojtaba.
A methodology for reconstructing phonon properties in a solid material, such as the frequency-dependent relaxation time distribution, from thermal spectroscopy experimental results is proposed and extensively validated. The reconstruction is formulated as a non-convex optimization problem whose goal is to minimize the difference between the experimental results and those calculated by a Boltzmann transport equation (BTE)-based model of the experimental process, with the desired material property treated as the unknown in the optimization. Crucially, the proposed approach makes no assumption of an underlying Fourier behavior, thus avoiding all approximations associated with that assumption. The proposed method combines a derivative-free optimization method, the Nelder-Mead algorithm, with a graduated (multi-stage) optimization framework.; Our results show that, compared to other reconstruction methods, the proposed method is less sensitive to scarcity of data in a specific transport regime (such as submicron length scales). The method is also very versatile in incorporating known information into the optimization, such as the known value of the material's thermal conductivity or the solid-solid interface conductance if a material interface is present; adding this information improves the quality of the optimization. In the presence of a material interface of unknown conductance, we show that simultaneous reconstruction of both the solid-solid interface frequency-dependent transmissivity function and the relaxation time function is possible. The optimization algorithm is validated using both synthetically generated temperature profiles (generated by solving the BTE) and experimentally measured signals.; In the case of synthetic input data, the reconstructed properties are compared to the material models used to create the input data. In the case of experimental data, we compare the reconstructed phonon properties with corresponding benchmark values obtained either from theoretical predictions, such as relaxation times from density functional theory, or from experimental measurements, such as measured interface transmissivities. The interface transmissivity reconstruction is also validated on the 2D-dots geometry in the presence of an Al-Si interface. Our results show good accuracy in all cases. The reliability and uniqueness of the optimized solution, as well as its statistical properties in the presence of noise, are studied using a number of statistical techniques.; Our analysis provides strong evidence that the formulated optimization problem has a unique solution; furthermore, the proposed optimization-based framework is capable of finding that solution with good accuracy.
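A compact sketch of the graduated Nelder-Mead idea (the smoothed objective below is an invented stand-in for the BTE-based misfit; scipy's optimizer is real, everything else is illustrative): each stage minimizes a smoother version of the landscape and warm-starts the next, rougher stage.

import numpy as np
from scipy.optimize import minimize

def objective(p, sigma):
    # Oscillatory misfit surrogate; larger sigma damps the oscillations,
    # playing the role of the graduation (multi-stage smoothing) schedule.
    osc = np.exp(-sigma) * np.cos(5.0 * p).sum()
    return np.dot(p - 1.0, p - 1.0) - osc

p = np.array([3.0, -2.5])                  # initial parameter guess
for sigma in (2.0, 1.0, 0.3, 0.0):         # smooth landscape first, exact last
    res = minimize(objective, p, args=(sigma,), method="Nelder-Mead")
    p = res.x                              # warm-start the next stage
print("reconstructed parameters:", p, "final misfit:", res.fun)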
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 137-144).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123770</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Strategic and analytics-driven inspection operations for critical infrastructure resilience</title>
<link>https://hdl.handle.net/1721.1/123226</link>
<description>Strategic and analytics-driven inspection operations for critical infrastructure resilience
Dahan, Mathieu.
Resilience of infrastructure networks is a key requirement for a functioning modern society. These networks work continuously to enable the delivery of critical services such as water, natural gas, and transportation. However, recent natural disasters and cyber-physical security attacks have demonstrated that the lack of effective failure detection and identification capabilities is one of the main contributors to the economic losses and safety risks faced by service utilities. This thesis focuses on both strategic and operational aspects of inspection processes for large-scale infrastructure networks, with the goal of improving their resilience to reliability and security failures. We address three combinatorial problems: (i) Strategic inspection for detecting adversarial failures; (ii) Strategic interdiction of malicious network flows; (iii) Analytics-driven inspection for localizing post-disaster failures.; We exploit the structural properties of these problems to develop new and practically relevant solutions for the inspection of large-scale networks, along with approximation guarantees. Firstly, we address the question of determining a randomized inspection strategy with the minimum number of detectors that ensures a target detection performance against multiple adversarial failures in the network. This question can be formulated as a mathematical program with constraints involving the Nash equilibria of a large strategic game. We solve this inspection problem with a novel approach that relies on the submodularity of the detection model and on solutions of minimum set cover and maximum set packing problems. Secondly, we consider a generic network security game between a routing entity that sends its flow through the network and an interdictor who simultaneously interdicts multiple edges.; By proving the existence of a probability distribution on a partially ordered set that satisfies a set of constraints, we show that the equilibrium properties of the game can be described using primal and dual solutions of a minimum-cost circulation problem. Our analysis provides a new characterization of the critical network components in strategic flow interdiction problems. Finally, we develop an analytics-driven approach for localizing failures under uncertainty. We utilize the information provided by failure prediction models to calibrate the generic formulation of a team orienteering problem with stochastic rewards and service times. We derive a compact mixed-integer programming formulation of the problem that computes an optimal a priori routing of the inspection teams. Using data collected by a major gas utility after an earthquake, we demonstrate the value of predictive analytics for improving response operations.
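The set-cover flavor of the detector-placement step can be sketched with the classical greedy heuristic (the candidate locations and coverage sets below are hypothetical):

# Greedy minimum set cover: repeatedly pick the candidate detector location
# that monitors the most still-uncovered components; this is the classical
# logarithmic-factor approximation underlying set-cover-based placement.
coverage = {
    "loc_A": {1, 2, 3},
    "loc_B": {3, 4},
    "loc_C": {4, 5, 6},
    "loc_D": {1, 6},
}
uncovered = set().union(*coverage.values())
chosen = []
while uncovered:
    best = max(coverage, key=lambda loc: len(coverage[loc].intersection(uncovered)))
    chosen.append(best)
    uncovered.difference_update(coverage[best])
print("detector locations:", chosen)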
Thesis: Ph. D. in Civil Engineering and Computation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 213-221).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123226</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards stable principles of collective intelligence under an environment-dependent framework</title>
<link>https://hdl.handle.net/1721.1/123223</link>
<description>Towards stable principles of collective intelligence under an environment-dependent framework
Almaatouq, Abdullah Mohammed.
A large body of work has shown that a group of individuals can often achieve higher levels of intelligence than the group members working alone. Despite these expectations of group advantage, many examples of collective failure have been documented--from market crashes to the spread of false and harmful rumors. To reconcile these results, a major effort in the study of collective decision making has been focused on understanding the role of group composition and communication patterns in promoting the "wisdom of the crowd" or, conversely, leading to the "madness of the mob." In the past decades, much of this effort has been devoted to inferring the importance of a particular attribute, in isolation, by its capacity to explain the accuracy of collective judgments. In this thesis, we argue that such a perspective can lead to inconsistent conclusions: an 'incoherency problem.' We assert that the importance of an individual-level or structural attribute may change as a function of the environment in which the group is situated. Hence, we propose a research agenda to investigate the relative importance of the group composition and the structure of interaction networks under an environment-dependent framework. We show that under such a framework, we can reconcile previously conflicting claims from the collective intelligence literature and motivate a future research program to identify stable principles of collective performance. Although implementing such a program is logistically challenging, "virtual lab" experiments of the sort discussed in this thesis, in combination with emerging "open science" practices such as pre-registration, data availability, open code, and "many-labs" collaborations, offer a promising route forward.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 135-152).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123223</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The swept rule for breaking the latency barrier in time-advancing PDEs</title>
<link>https://hdl.handle.net/1721.1/123222</link>
<description>The swept rule for breaking the latency barrier in time-advancing PDEs
Alhubail, Maitham Makki(Maitham Makki Hussain)
This thesis describes a method to accelerate the parallel, explicit time integration of unsteady PDEs. The method is motivated by our observation that network latency, not bandwidth or computing power, often limits how fast PDEs can be solved in parallel. The method is called the swept rule of space-time domain decomposition. Compared to conventional, space-only domain decomposition, it communicates a similar amount of data, but in fewer messages. The swept rule achieves this by decomposing space and time among computing nodes in ways that exploit the domains of influence and dependence, making it possible to communicate once every many time steps with no redundant computation. By communicating less often, the swept rule effectively breaks the latency barrier, advancing on average more than one time step per round-trip latency of the network. The thesis describes the algorithms, presents a simple theoretical analysis of the performance of the swept rule, and supports the analysis with numerical experiments.
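A back-of-the-envelope cost model makes the latency argument concrete (all numbers below are assumed for illustration, not measurements from the thesis):

# Per-time-step communication cost for one subdomain exchange:
#   conventional: one halo message every time step
#   swept rule:   one larger message every k time steps, amortized
latency = 5e-6            # seconds per message round trip (assumed)
bandwidth = 1e9           # bytes per second (assumed)
msg_bytes = 8 * 100       # halo of 100 double-precision values
k = 32                    # time steps advanced per swept communication

t_conventional = latency + msg_bytes / bandwidth
t_swept = (latency + k * msg_bytes / bandwidth) / k
print(f"per-step comm: conventional {t_conventional:.2e} s, "
      f"swept {t_swept:.2e} s, speedup {t_conventional / t_swept:.1f}x")

Because the latency term is paid once per k steps rather than once per step, the speedup approaches the ratio of latency to per-step transfer time as k grows.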
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-104).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123222</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Resilient operations of smart electricity networks under security and reliability failures</title>
<link>https://hdl.handle.net/1721.1/123207</link>
<description>Resilient operations of smart electricity networks under security and reliability failures
Shelar, Devendra(Devendra Anil)
Blackouts (or cascading failures) in Electricity Networks (ENs) can result in severe consequences for economic activity, human safety and national security. Recent incidents suggest that the risk of blackouts due to cyber-security attacks and extreme weather events is steadily increasing in many regions of the world.; This thesis develops a systematic approach to evaluate and improve the resilience of ENs by addressing the following questions: (a) How can security and reliability failures be modeled, and their impact on ENs assessed? (b) What strategies can EN operators implement to plan for and quickly respond to such failures and minimize their overall impact? (c) How can the operational flexibility of "smart" ENs be leveraged to implement these strategies in a structured manner and provide guarantees against worst-case failure scenarios? We focus on three classes of cyber-physical failures: (i) Inefficient or unsafe economic dispatch decisions induced by an external hacker who exploits the vulnerabilities of control center software; (ii) Simultaneous disruption of a large number of customer-side components (loads and/or distributed generators) by a strategic remote adversary; (iii) Correlated failures of power system components caused by storm events (or hurricanes) with high-intensity wind fields.; We develop new network models to capture the impact of these failures, while accounting for a broad range of operator response actions. These actions include: partial load control, pre-emptive disconnection of non-critical loads, active and reactive power supply by Distributed Energy Resources (DERs) capable of providing grid-forming services, and formation of microgrid islands. We develop practically relevant operational strategies to improve the ENs' resilience to failure classes (i) and (ii) (resp. (iii)) based on solutions of bilevel mixed integer programming (resp. two-stage stochastic optimization) formulations. Our bilevel mixed integer programming formulations capture the worst-case impacts of attacks on radial distribution networks operating under grid-connected or microgrid configurations.; For the case when the operator response can be modeled with continuous decision variables, we provide a greedy heuristic that exploits the radial network structure and provides near-optimal solutions. For the more general case of mixed-binary decision variables, we develop a computationally tractable solution approach based on the Benders Decomposition method. This approach can be used to evaluate the value of timely response actions in reducing the network operator's losses during attacker-induced contingencies. We provide guidelines on improving network resilience by proactively allocating contingency resources and strategically securing network components. Furthermore, under reasonable assumptions, we show that myopically reconnecting the disrupted components can be effective in restoring network operation to nominal conditions.; Our two-stage stochastic optimization formulation is motivated by the need for a decision-theoretic framework for allocating DERs and other contingency resources in ENs facing the risk of multiple failures due to high-intensity storm events. The stochastic model in this formulation captures the dependence of probabilistic failure rates on spatio-temporal wind intensities.
Importantly, the formulation allows for the formation of microgrid islands (powered by the allocated DERs), and considers joint DER dispatch and component repair decisions over a multi-period restoration time horizon. We present computational results based on the classical sample average approximation method, with Benders Decomposition applied to solve the mixed-binary programs associated with the restoration stage. Finally, we compare the optimal repair decisions with a simpler greedy scheduling strategy that satisfies soft-precedence constraints.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 265-276).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123207</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>On traffic disruptions : event detection from visual data and Bayesian congestion games</title>
<link>https://hdl.handle.net/1721.1/123189</link>
<description>On traffic disruptions : event detection from visual data and Bayesian congestion games
Liu, Jeffrey,Ph.D.Massachusetts Institute of Technology.
Road traffic is often subject to random disturbances due to weather, incidents, or special events. Effectively detecting and disseminating information about disturbances is a key goal of modern, "smart" infrastructure. Toward this end, this dissertation investigates two related questions. First, how can traffic managers better utilize existing traffic cameras to automatically identify traffic disturbances? Second, how can we model different aspects of information, such as human misperception or ignorance of others' information, and their effects on travelers' route choices? Part I addresses analyzing unstructured, sequential image data, such as traffic CCTV footage, with a novel, semantics-oriented approach based on natural language and semantic features. The approach extracts structured, human-interpretable "topic signals" from distributions of common object labels, which correspond to physical processes depicted in the footage.; Changes and anomalies in these topic signals are used to identify notable events in weather conditions and traffic congestion. This is demonstrated on a new, real-world dataset collected from Boston freeway CCTV footage. In notable event detection, the use of the topic signal representation outperforms the use of any individual label signal. Part II addresses game-theoretic modeling of informational effects on travelers' route choices. It considers both the access and accuracy of information about the network state, as well as the perception of others' information. It introduces the Subjective Bayesian Congestion Game (BCG), which models a broader set of player beliefs than those allowed by the conventional common prior assumption (Objective BCG). This enables modeling of uncertainty about others' information, such as when one population is unaware of information services.; Analytical solutions are provided for a stylized configuration of the Subjective BCG, and a numerical solver is provided for more general configurations. Compared to the Objective BCG, the Subjective BCG has qualitatively distinct solutions and costs, indicating that the perception of others' information significantly affects equilibrium route choices.
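A schematic sketch of the topic-signal idea (labels, counts, and the scoring rule below are invented stand-ins for the thesis pipeline): per-frame object-label counts are normalized into distributions and scored against a baseline.

import numpy as np

# Rows: frames; columns: counts for ["car", "truck", "umbrella", "person"]
# (hypothetical detector output).
counts = np.array([
    [30, 5, 0, 2],
    [28, 6, 1, 3],
    [31, 4, 0, 2],
    [12, 2, 9, 8],   # sudden umbrellas and pedestrians: weather/incident cue
])
dist = counts / counts.sum(axis=1, keepdims=True)   # per-frame topic signal

baseline = dist[:3].mean(axis=0)                    # running 'normal' profile
for i, d in enumerate(dist):
    score = np.abs(d - baseline).sum()              # L1 shift from baseline
    print(f"frame {i}: shift score {score:.2f}")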
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 123-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123189</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Entropy-stable hybridized discontinuous Galerkin methods for large-eddy simulation of transitional and turbulent flows</title>
<link>https://hdl.handle.net/1721.1/122496</link>
<description>Entropy-stable hybridized discontinuous Galerkin methods for large-eddy simulation of transitional and turbulent flows
Fernández, Pablo.
The use of computational fluid dynamics (CFD) in the aerospace industry is limited by the inability to accurately and reliably predict complex transitional and turbulent flows. This has become a major barrier to further reducing the costs, times and risks of the design process, further optimizing designs, and further reducing fuel consumption and toxic emissions. Large-eddy simulation (LES) is currently the most promising simulation technique to accurately predict transitional and turbulent flows. LES, however, remains computationally expensive and often suffers from accuracy and robustness issues to the extent that it is still not practical for most applications of interest. In this thesis, we develop a series of methods and techniques to improve the efficiency, accuracy and robustness of large-eddy simulations, with the goal of making CFD a more powerful tool in the aerospace industry.; First, we introduce a new class of high-order discretization schemes for the Euler and Navier-Stokes equations, referred to as the entropy-stable hybridized discontinuous Galerkin (DG) methods. As hybridized methods, they are amenable to static condensation and hence to more efficient implementations than standard DG methods. As entropy-stable methods, they are superior to conventional (non-entropy-stable) methods for LES of compressible flows in terms of stability, robustness and accuracy. Second, we develop parallel iterative methods to efficiently and scalably solve the nonlinear system of equations arising from the discretization. The combination of hybridized DG methods with the proposed solution method provides excellent parallel scalability up to petascale and, for moderately high accuracy orders, leads to about one order of magnitude speedup with respect to standard DG methods.; Third, we introduce a non-modal analysis theory that characterizes the numerical dissipation of high-order discretization schemes, including hybridized DG methods. Non-modal analysis provides critical guidelines on how to define the polynomial approximation space and the Riemann solver to improve accuracy and robustness in LES. Fourth, we investigate how to best account for the effect of the subgrid scales (SGS) that, by definition, exist in LES. Numerical and theoretical results show that the Riemann solver in the DG scheme plays the role of an implicit SGS model. More importantly, a change in the current best practices for SGS modeling is required in the context of high-order DG methods. Fifth, we present a physics-based shock capturing method for LES of high-Mach-number and high-Reynolds-number flows. The shock capturing method performs robustly from transonic to hypersonic regimes, provides sharp shock profiles, and has a small impact on the resolved turbulent structures.; These are all critical ingredients to advance the state of the art of high-order methods for LES, both in terms of methodology and in understanding the relationship between the physics and the numerics.
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 109-212).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122496</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An adaptive space-time discontinuous Galerkin method for reservoir flows</title>
<link>https://hdl.handle.net/1721.1/122471</link>
<description>An adaptive space-time discontinuous Galerkin method for reservoir flows
Jayasinghe, Yashod Savithru.
Numerical simulation has become a vital tool for predicting engineering quantities of interest in reservoir flows. However, the general lack of autonomy and reliability prevents most numerical methods from being used to their full potential in engineering analysis. This thesis presents work towards the development of an efficient and robust numerical framework for solving reservoir flow problems in a fully-automated manner. In particular, a space-time discontinuous Galerkin (DG) finite element method is used to achieve a high-order discretization on a fully unstructured space-time mesh, instead of a conventional time-marching approach. Anisotropic mesh adaptation is performed to reduce the error of a specified output of interest, by using a posteriori error estimates from the dual weighted residual method to drive a metric-based mesh optimization algorithm.; An analysis of the adjoint equations, boundary conditions and solutions of the Buckley-Leverett and two-phase flow equations is presented, with the objective of developing a theoretical understanding of the adjoint behaviors of porous media models. The intuition developed from this analysis is useful for understanding mesh adaptation behaviors in more complex flow problems. This work also presents a new bottom-hole pressure well model for reservoir simulation, which relates the volumetric flow rate of the well to the reservoir pressure through a distributed source term that is independent of the discretization. Unlike Peaceman-type models which require the definition of an equivalent well-bore radius dependent on local grid length scales, this distributed well model is directly applicable to general discretizations on unstructured meshes.; We show that a standard DG diffusive flux discretization of the two-phase flow equations in mass conservation form results in an unstable semi-discrete system in the advection-dominant limit, and hence propose modifications to linearly stabilize the discretization. Further, an artificial viscosity method is presented for the Buckley-Leverett and two-phase flow equations, as a means of mitigating Gibbs oscillations in high-order discretizations and ensuring convergence to physical solutions. Finally, the proposed adaptive solution framework is demonstrated on compressible two-phase flow problems in homogeneous and heterogeneous reservoirs. Comparisons with conventional time-marching methods show that the adaptive space-time DG method is significantly more efficient at predicting output quantities of interest, in terms of degrees-of-freedom required, execution time and parallel scalability.
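For background, the standard Buckley-Leverett fractional-flow function with quadratic relative permeabilities (a common textbook choice, not the thesis discretization) can be tabulated directly; its S-shaped profile is what produces the shock-plus-rarefaction solutions, and the Gibbs oscillations, that the artificial viscosity targets:

import numpy as np

def frac_flow(S, M=0.5):
    # f(S) = S^2 / (S^2 + M (1 - S)^2), with M the viscosity ratio mu_w / mu_o
    return S * S / (S * S + M * (1.0 - S) * (1.0 - S))

S = np.linspace(0.0, 1.0, 11)
dfdS = np.gradient(frac_flow(S), S)     # wave-speed profile f'(S)
for s, f, v in zip(S, frac_flow(S), dfdS):
    print(f"S={s:.1f}  f={f:.3f}  f'={v:.3f}")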
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 205-216).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122471</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>CFD simulation of long slender offshore structures at high Reynolds number</title>
<link>https://hdl.handle.net/1721.1/122262</link>
<description>CFD simulation of long slender offshore structures at high Reynolds number
Olaoye, Abiodun Timothy.
Slender cylindrical structures are common in many offshore engineering applications such as floating wind turbines and subsea risers. These structures are vulnerable to flow-induced vibrations under certain environmental conditions, which impacts their useful life. Flow-induced vibrations have been widely studied both experimentally and numerically, especially at low Reynolds number. However, many questions remain unanswered regarding the detailed effects of high Re on structural responses and on fluid-structure interaction (FSI) phenomena such as lock-in for different design configurations. Furthermore, under realistic environmental conditions, the oncoming flow velocity profile may not be uniform. In such scenarios, the effects of large changes in Re along the span on the nature of the structural responses may be significant.; This research project is focused on computational fluid dynamics (CFD) simulation of slender structures under realistic oncoming ocean currents at relatively high Reynolds number (Re &gt;= 10,000) compared to the existing literature. Computational methods for investigating FSI phenomena are limited by high Reynolds number, complex flow profiles, low mass ratio and large aspect ratio of structures. Despite these challenges, the numerical approach potentially offers more detailed analysis and easier parameter tuning for investigating unique cases too expensive to conduct in experiments. Therefore, advances in research are increasingly supported by numerical modeling. In the framework of the Fourier spectral/hp element method implemented in the NEKTAR code, an entropy-based viscosity method (EVM) was employed to account for turbulence effects not captured by the numerical grid, and the fictitious added-mass method was utilized in the structure solver to handle low-mass-ratio problems.; In addition to the techniques already stated, the mapping-enabled smoothed profile method (SPM) was used to simulate cases involving buoyancy modules. A thorough verification and validation of the current algorithms was carried out for stationary cylinders with uniform cross-sections, flexibly-mounted rigid cylinders and flexible cylinders. Major contributions include EVM-enabled simulations of the dynamic responses of flexibly-mounted rigid cylinders with low mass ratio in uniform flows at higher Reynolds number (Re = 140,000) than the existing literature, yielding novel numerical response maps. The new results provide more insight into the role of Re in amplitude responses and in the FSI phenomena associated with vortex-induced vibrations in practical applications. Another major contribution is the detailed investigation of complex flows past a flexible cylinder at Re_max up to ~11,000, which is higher than in the existing literature (Re_max ~2,000).; The relatively large change in Re along the span revealed new fluid-structure energy transfer behavior in linearly and exponentially sheared flows.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-131).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122262</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Visual and auditory scene parsing</title>
<link>https://hdl.handle.net/1721.1/122101</link>
<description>Visual and auditory scene parsing
Zhao, Hang,Ph.D.Massachusetts Institute of Technology.
Scene parsing is a fundamental topic in computer vision and computational audition, where computational approaches are developed to match the human perceptual system's ability to understand scenes, e.g., grouping the visual regions of an image into objects and segregating sound components in a noisy environment. This thesis investigates fully-supervised and self-supervised machine learning approaches to parse visual and auditory signals, including images, videos, and audio. Visual scene parsing refers to the dense grouping and labeling of image regions into object concepts. First, I build the MIT scene parsing benchmark based on ADE20K, a large-scale, densely annotated dataset. This benchmark, together with the state-of-the-art models we open source, offers a powerful tool for the research community to solve semantic and instance segmentation tasks. Then I investigate the challenge of parsing a large number of object categories in the wild. An open-vocabulary scene parsing model that combines a convolutional neural network with a structured knowledge graph is proposed to address the challenge. Auditory scene parsing refers to recognizing and decomposing sound components in complex auditory environments. I propose a general audio-visual self-supervised learning framework that learns from a large amount of unlabeled internet videos. The learning process discovers the natural synchronization of vision and sound without human annotation. The learned model achieves the capability to localize sound sources in videos and separate them from the mixture. Furthermore, I demonstrate that motion cues in videos are tightly associated with sounds, which helps in solving sound localization and separation problems.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 121-132).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122101</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Atomistic engineering of fluid structure at the fluid-solid interface</title>
<link>https://hdl.handle.net/1721.1/121850</link>
<description>Atomistic engineering of fluid structure at the fluid-solid interface
Wang, Gerald J.(Gerald Jonathan)
Under extreme confinement, fluids exhibit a number of remarkable effects that cannot be predicted using macroscopic fluid mechanics. These phenomena are especially pronounced when the confining length scale is comparable to the fluid's internal (molecular) length scale. Elucidating the physical principles governing nanoconfined fluids is critical for many pursuits in nanoscale engineering. In this thesis, we present several theoretical and computational results on the structure and transport properties of nanoconfined fluids. We begin by discussing the phenomenon of fluid layering at a solid interface. Using molecular-mechanics principles and molecular-dynamics (MD) simulations, we develop several models to characterize density inhomogeneities in the interfacial region. Along the way, we introduce a non-dimensional number that predicts the extent of fluid layering by comparing the effects of fluid-solid interaction to thermal energy.; We also present evidence for a universal scaling relation that relates the density enhancement of the layered fluid to the non-dimensional temperature, valid for dense-fluid systems. We then apply these models of fluid layering to the problem of anomalous fluid diffusion under nanoconfinement. We show that anomalous diffusion is controlled by the degree of interfacial fluid layering; in particular, layered fluid exhibits restricted diffusive dynamics, an effect whose origins can be traced to the (quasi-)two-dimensionality and density enhancement of the fluid layer. We construct models for the restricted diffusivity of interfacial fluid, which enable accurate prediction of the overall diffusivity anomaly as a function of confinement length scale. Finally, we use these earlier developments to tackle the notorious problem of dense fluid slip at a solid interface.; We propose a molecular-kinetic theory that formulates slip as a series of thermally activated hops performed by interfacial fluid molecules, under the influence of the bulk fluid shear stress, within the corrugated energy landscape generated by the solid. This theory linearizes to the Navier slip condition in the limit of low shear rate, captures the central features of existing models, and demonstrates excellent agreement with MD simulations as well as experiments.
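The molecular-kinetic slip picture admits a compact numerical illustration (the activation volume and hop prefactor below are invented values, not fitted parameters from the thesis):

import numpy as np

# Interfacial molecules hop between corrugation minima; biasing the hop
# rate by the shear stress tau gives v_slip = v0 * sinh(tau*Vact/(kB*T)),
# which linearizes to the Navier slip condition for small tau.
kB, T = 1.380649e-23, 300.0      # Boltzmann constant (J/K), temperature (K)
Vact = 1e-27                     # activation volume, ~1 nm^3 (assumed)
v0 = 0.05                        # hop-rate prefactor (m/s, assumed)

for tau in (1e4, 1e5, 1e6, 1e7):                  # shear stress in Pa
    v_slip = v0 * np.sinh(tau * Vact / (kB * T))
    linear = v0 * tau * Vact / (kB * T)           # Navier-slip linearization
    print(f"tau={tau:.0e} Pa: slip {v_slip:.3e} m/s (linearized {linear:.3e})")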
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 131-141).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121850</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles approaches for accurate predictions of nanostructured materials</title>
<link>https://hdl.handle.net/1721.1/121849</link>
<description>First-principles approaches for accurate predictions of nanostructured materials
Zhao, Qing,Ph. D.Massachusetts Institute of Technology.
Nanostructured materials have attracted increasing interest in recent years due to their unusual mechanical, electrical, electronic and optical properties. First-principles electronic structure calculations (e.g., with density functional theory or DFT) provide unique insights into the structure-property relationships of nanostructured materials that can enable further design and engineering. The favorable balance between efficiency and accuracy of DFT has led to its wide application in chemistry, solid-state physics and biology. However, DFT still has limitations and suffers from large, pervasive errors in its predicted properties. For small systems, more accurate methods are available, but challenges remain for studying nm-scale materials. In the solid state, unique challenges arise from both the strong sensitivity of correlated transition metal oxides to approximations in DFT and the periodic boundary condition.; Therefore, a greater understanding of the approximations inherent in DFT is needed for nanostructured materials. In this thesis, we study nanostructured semiconducting materials, where conventional DFT can be expected to perform well. We develop methods for sampling amorphous materials, rationalize periodic-table trends in material stability to aid the discovery of ordered materials, and bring a surface-reactivity perspective to understanding growth processes during materials synthesis. Within the challenging cases of transition metal oxides, we explore how common approximations (e.g., DFT+U and hybrids) affect key nanoscale properties, such as the nature of density localization, and as a result, key observables such as surface stability and surface reactivity. The observation of divergent behavior between these two methods highlights the limited interchangeability of DFT+U and hybrids in the solid-state community.; Finally, leveraging the understanding developed in the first two parts of the thesis, we employ a multiscale approach to systematically tailor the DFT functional choice for challenging condensed-phase systems using accurate reference data from higher-level methods. The combination of large-scale electronic structure modeling with state-of-the-art methodology will provide important, predictive insight into tailoring the nanoscale properties of useful materials, and guide further development of approximate DFT.
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis. "February 2019."; Includes bibliographical references (pages 154-180).
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121849</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction under uncertainty : from models for marine-terminating glaciers to Bayesian computation</title>
<link>https://hdl.handle.net/1721.1/121812</link>
<description>Prediction under uncertainty : from models for marine-terminating glaciers to Bayesian computation
Davis, Andrew D. (Andrew Donaldson)
The polar ice sheets have enormous potential impact on future global mean sea level rise. Recent observations suggest they are losing mass to the ocean at an accelerated rate. Skillful prediction of the ice sheets' future mass loss remains difficult, however; observations of key variables are insufficient and physical processes are poorly understood. Even when a relatively accurate dynamical model is available, computational limitations make it difficult to characterize uncertainties associated with the model's predictions. To address this prediction challenge, this thesis presents complementary developments in glaciology and in Bayesian computation. In particular, (i) we develop new models of marine-terminating glaciers whose dynamics are controlled by an extended set of physical processes and geometric constraints; and (ii) we develop new sampling algorithms to efficiently characterize selected marginals of a high-dimensional probability distribution describing uncertain parameters. The latter algorithms have broader utility in Bayesian modeling and inference with computationally intensive models. We begin by studying laterally confined ice streams that terminate in the ocean, where they may form floating ice shelves. Such marine-terminating outlet glaciers are the main conduits by which Greenland and Antarctica drain their ice mass into the ocean. Ice shelves play an important role in buttressing the grounded inland ice. The seaward ice flow is typically accompanied by acceleration and thinning. Increased thinning eventually leads to flotation of the ice, supported by buoyant forces from the ocean. The transition region from grounded to floating ice is referred to as the grounding line (or zone), and the mass transport across the grounding line as the output flux. Previous work by Weertman (1974) and Schoof (2007) considers laterally unconfined ice streams, showing that their output flux is a monotonically increasing function of the bedrock depth at the grounding line. This scenario leads to the marine ice sheet instability (MISI): retreating into deeper water increases the output flux, and retreat accelerates. Therefore, stable steady states cannot exist on downward-sloping beds. We extend this analysis to laterally confined glaciers and investigate when side-wall drag is sufficient to stabilize glaciers on downward-sloping beds. Additionally, we include a parameterization of sub-shelf melt. We find that, whereas lateral drag can stabilize glaciers that would otherwise be subject to the MISI, sub-shelf melt can destabilize them. Our ultimate goal is to predict future ice sheet volume and to quantify its uncertainty. We do so in the Bayesian statistical setting, conditioning our prediction on available observations. Yet characterizing a posterior distribution (using, for example, Markov chain Monte Carlo, or MCMC) involves repeated evaluations of an ice stream model, which are prohibitively expensive. Furthermore, the model parameters that need to be inferred are high dimensional, even though we are primarily interested in a low-dimensional quantity: the future ice volume. We address this computational challenge by developing new structure-exploiting Monte Carlo methods that combine marginalization with surrogate modeling. Given a high-dimensional (posterior) distribution on the model parameters, whose density evaluations are computationally intensive, we construct an MCMC chain that directly targets a particular low-dimensional marginal of interest.
In general, the marginal density is not available analytically. Instead, we can compute unbiased noisy estimates of this density. Our MCMC algorithm incrementally constructs a local regression approximation of the target marginal density using these estimates. Continual refinement of the approximation, as MCMC sampling proceeds, leads to an asymptotically exact characterization of the desired marginal distribution. Analysis of the bias-variance tradeoff guides an ideal refinement strategy that balances the decay rates of different components of the error. Our approach exploits regularity in the marginal density to significantly reduce computational expense relative to both full-dimensional and pseudo-marginal MCMC.
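The pseudo-marginal baseline mentioned above, against which the thesis's surrogate-based approach is compared, can be sketched compactly. The following is a minimal illustration only: the toy joint density, the estimator, and all parameters are placeholders, not the thesis's implementation.

    import numpy as np

    def pseudo_marginal_mh(noisy_density, x0, n_steps, step=0.5, seed=0):
        # Metropolis-Hastings in which the target marginal density is known
        # only through unbiased, noisy estimates. Reusing the stored estimate
        # at the current point (rather than re-estimating it) is what keeps
        # the chain exact for the intended marginal.
        rng = np.random.default_rng(seed)
        x, px = x0, noisy_density(x0, rng)
        chain = np.empty(n_steps)
        for k in range(n_steps):
            xp = x + step * rng.standard_normal()   # random-walk proposal
            pxp = noisy_density(xp, rng)            # fresh noisy estimate
            if pxp > px * rng.uniform():            # accept w.p. min(1, pxp/px)
                x, px = xp, pxp
            chain[k] = x
        return chain

    def noisy_density(x1, rng, m=64):
        # Unbiased importance-sampling estimate of the x1-marginal of a toy
        # correlated-Gaussian joint, integrating out x2 with a standard-normal
        # proposal q(x2).
        x2 = rng.standard_normal(m)
        joint = np.exp(-0.5 * (x1**2 + x2**2 + x1 * x2))
        q = np.exp(-0.5 * x2**2) / np.sqrt(2.0 * np.pi)
        return np.mean(joint / q)

    samples = pseudo_marginal_mh(noisy_density, x0=0.0, n_steps=5000)
    print(samples.mean(), samples.std())

The thesis's method replaces the fresh per-step estimates with an incrementally refined local regression of the marginal density, trading estimator noise for a controlled, asymptotically vanishing approximation error.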
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 255-266).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/121812</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-agent real-time decision making in water resources systems</title>
<link>https://hdl.handle.net/1721.1/120636</link>
<description>Multi-agent real-time decision making in water resources systems
Sahu, Reetik Kumar
Optimal utilization of natural resources such as water, wind and land over extended periods of time requires a carefully designed framework coupling decision making and a mathematical abstraction of the physical system. On one hand, the choice of decision strategy can bound the maximum benefit that can be extracted from the physical system. On the other hand, the mathematical formulation of the physical system determines the limitations of such strategies when applied to real physical systems. The nuances of decision making and abstraction of the physical system are illustrated with two classical water resource problems: optimal hydropower reservoir operation and competition for a common-pool groundwater source. Reservoir operation is modeled as a single-agent stochastic optimal control problem where the operator (agent) negotiates a firm power contract before operations begin and adjusts the reservoir release during operations. A probabilistic analysis shows that predictive decision strategies such as stochastic dynamic programming and model predictive control give better performance than standard deterministic operating rules. Groundwater competition is modeled as a multi-agent dynamic game where each farmer (agent) aims to maximize his/her personal benefit. The game analysis shows that uncooperative competition for the resource reduces economic efficiency somewhat relative to cooperative, socially optimal behavior. However, the efficiency reduction is relatively small compared to what might be expected from incorrect assumptions about uncertain factors such as future energy and crop prices. Spatially lumped and distributed models of the groundwater system give similar pictures of the inefficiencies that result from uncooperative behavior. The spatially distributed model also reveals the important roles of the geometry and density of the pumping well network. Overall, the game analysis provides useful insight about the factors that make cooperative groundwater management beneficial in particular situations.
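The stochastic dynamic programming strategy mentioned above admits a one-line schematic statement (notation illustrative, not the thesis's): the release policy solves the Bellman recursion

    V_t(s_t) = \max_{r_t} E_{q_t}[ B(r_t, s_t) + V_{t+1}(s_t + q_t - r_t) ],

where s_t is reservoir storage, r_t the release decision, q_t the random inflow, and B the stage benefit (e.g., hydropower revenue and firm-power contract terms). Model predictive control instead re-optimizes a finite-horizon version of this problem at each step using updated forecasts, applying only the first decision.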
Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 77-83).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120636</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Viscosity stabilized adjoint method for unsteady compressible Navier-Stokes equations</title>
<link>https://hdl.handle.net/1721.1/120425</link>
<description>Viscosity stabilized adjoint method for unsteady compressible Navier-Stokes equations
Talnikar, Chaitanya Anil.
Design optimization methods are a popular tool in computational fluid dynamics for designing components or finalizing the flow parameters of a system. The adjoint method accelerates the design process by providing gradients of the design objective with respect to the system parameters. Typically, however, adjoint-based design optimization methods have used low-fidelity simulations such as Reynolds-averaged Navier-Stokes (RANS). To reliably capture the complex flow phenomena involved in high Reynolds number flows, such as turbulent boundary layers, turbulent wakes and fluid separation, high-fidelity simulations such as large eddy simulation (LES) are required. Unfortunately, due to the chaotic dynamics of turbulence, the adjoint method for LES diverges and produces incorrect gradients. In this thesis, the adjoint method for unsteady flow equations is modified by adding artificial viscosity to the adjoint equations. The additional viscosity stabilizes the adjoint solution and maintains reasonable accuracy of the gradients obtained from it. The accuracy of the method is assessed on multiple turbulent flow problems, including subsonic flow over a cylinder and transonic flow over a gas turbine vane. The utility of the method is then tested by performing shape optimization of the trailing edge of a transonic turbine vane. The optimal design, found using a modified gradient-based Bayesian optimization algorithm, shows approximately 15% better aero-thermal performance than the baseline design. Such design optimizations are possible due to the availability of massively parallel supercomputers. Designing high-performance fluid flow solvers for the next generation of supercomputers is a challenging task. In this thesis, a two-level computational graph method for writing optimized distributed flow solvers on heterogeneous architectures is presented. A checkpoint-based automatic differentiation method is used to derive the corresponding adjoint flow solver in this framework.
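The stabilization can be summarized schematically (a sketch, not the thesis's exact discretization). For a semi-discrete flow model du/dt = f(u) with objective J, the standard unsteady adjoint

    -d\hat{u}/dt = (\partial f / \partial u)^T \hat{u} + \partial J / \partial u

is marched backward in time; for chaotic LES its solution grows exponentially because of the butterfly effect. Adding an artificial dissipation term of the form \nabla \cdot (\nu_a \nabla \hat{u}) to the right-hand side damps this growth, with \nu_a chosen large enough to keep the adjoint bounded but small enough to retain useful gradient accuracy.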
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: Ph. D. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 187-195).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120425</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing cities' complex socioeconomic networks using computational science and machine learning</title>
<link>https://hdl.handle.net/1721.1/119325</link>
<description>Analyzing cities' complex socioeconomic networks using computational science and machine learning
Alabdulkareem, Ahmad
By 2050, it is expected that 66% of the world population will be living in cities. The urban growth explosion in recent decades has raised many questions concerning the evolutionary advantages of urbanism, with several theories delving into the multitude of benefits of such efficient systems. This thesis focuses on one important aspect of cities: their social dimension, and in particular, the social aspect of their complex socioeconomic fabric (e.g. labor markets and social networks). Economic inequality is one of the greatest challenges facing society today, in tandem with the imminent impact of automation, which can exacerbate this issue. The social dimension plays a significant role in both, with many hypothesizing that social skills will be the last bastion of differentiation between humans and machines, and thus, jobs will become mostly dominated by social skills. Using data-driven tools from network science, machine learning, and computational science, the first question I aim to answer is the following: what role do social skills play in today's labor markets on both a micro and macro scale (e.g. individuals and cities)? Second, how could the effects of automation lead to various labor dynamics, and what role would social skills play in combating those effects? Specifically, what is social skills' relation to career mobility? Answering these questions would inform strategies to mitigate the negative effects of automation and off-shoring on employment. Third, given the importance of the social dimension in cities, what theoretical model can explain such results, and what are its consequences? Finally, given the potential for invading individuals' privacy, as demonstrated in previous chapters, how does highlighting those results affect people's interest in privacy preservation, and what are some possible solutions to combat this issue?
Thesis: Ph. D. in Computational Science &amp; Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 133-141).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119325</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling Bayesian optimization for engineering design : lookahead approaches and multifidelity dimension reduction</title>
<link>https://hdl.handle.net/1721.1/119289</link>
<description>Scaling Bayesian optimization for engineering design : lookahead approaches and multifidelity dimension reduction
Lam, Remi Roger Alain Paul
The objective functions and constraints that arise in engineering design problems are often non-convex, multi-modal and do not have closed-form expressions. Evaluation of these functions can be expensive, requiring a time-consuming computation (e.g., solving a set of partial differential equations) or a costly experiment (e.g., conducting wind-tunnel measurements). Accordingly, whether the task is formal optimization or just design space exploration, there is often a finite budget specifying the maximum number of evaluations of the objectives and constraints allowed. Bayesian optimization (BO) has become a popular global optimization technique for solving problems governed by such expensive functions. BO iteratively updates a statistical model and uses it to quantify the expected benefits of evaluating a given design under consideration. The next design to evaluate can be selected in order to maximize such benefits. Most existing BO algorithms are greedy strategies, making decisions to maximize the immediate benefits, without planning over several steps. This is typically a suboptimal approach. In the first part of this thesis, we develop a novel BO algorithm with planning capabilities. This algorithm selects the next design to evaluate in order to maximize the long-term expected benefit obtained at the end of the optimization. This lookahead approach requires tools to quantify the effects a decision has over several steps in the future. To do so, we use Gaussian processes as generative models and combine them with dynamic programming to formulate the optimal planning strategy. We first illustrate the proposed algorithm on unconstrained optimization problems. In the second part, we demonstrate how the proposed lookahead BO algorithm can be extended to handle non-linear expensive inequality constraints, a ubiquitous situation in engineering design. We illustrate the proposed lookahead constrained BO algorithm on a reacting flow optimization problem. In the last part of this thesis, we develop techniques to scale BO to high dimension by exploiting a special structure arising when the objective function varies only in a low-dimensional subspace. Such a subspace can be detected using the (randomized) method of Active Subspaces. We propose a multifidelity active subspace algorithm that reduces the computational cost by leveraging a cheap-to-evaluate approximation of the objective function. We analyze the number of evaluations sufficient to control the error incurred, both in expectation and with high probability. We illustrate the proposed algorithm on an ONERA M6 wing shape-optimization problem.
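The lookahead idea admits a compact dynamic-programming statement (a schematic with assumed notation, not taken verbatim from the thesis): with k evaluations remaining and data set D, the value of the optimal policy satisfies

    V_k(D) = \max_x E_{y \sim p(y | x, D)}[ u(x, y; D) + V_{k-1}(D \cup \{(x, y)\}) ],    V_0 \equiv 0,

where the expectation is taken under the Gaussian-process posterior and u is the one-step utility (e.g., improvement over the incumbent best value). Greedy BO corresponds to truncating the recursion at k = 1; the lookahead algorithm approximates the full recursion to plan several evaluations ahead.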
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 105-111).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119289</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of the random ray method of neutral particle transport for high-fidelity nuclear reactor simulation</title>
<link>https://hdl.handle.net/1721.1/119038</link>
<description>Development of the random ray method of neutral particle transport for high-fidelity nuclear reactor simulation
Tramm, John Robert
A central goal in computational nuclear engineering is the high-fidelity simulation of a full nuclear reactor core by way of a general simulation method. General full core simulations can potentially reduce design and construction costs, increase reactor performance and safety, reduce the amount of nuclear waste generated, and allow for much more complex and novel designs. To date, however, the time to solution and memory requirements for a general full core high-fidelity 3D simulation have rendered such calculations impractical, even using leadership-class supercomputers. Reactor designers have instead relied on calibrated methods that are accurate only within a narrow design space, greatly limiting the exploration of innovative concepts. One numerical simulation approach, the Method of Characteristics (MOC), has the potential for fast and efficient performance on a variety of next-generation computing systems, including CPU, GPU, and Intel Xeon Phi architectures. While 2D MOC has long been used in reactor design and engineering as an efficient simulation method for smaller problems, the transition to 3D has only begun recently, and to our knowledge no 3D MOC-based codes are currently used in industry. The delayed arrival of full 3D MOC codes can be attributed to the impossibility of "naively" scaling current 2D codes into 3D due to prohibitively high memory requirements. To facilitate the transition of MOC-based methods to 3D, we have developed a fundamentally new computational algorithm. This new algorithm, known as the Random Ray Method (TRRM), can be viewed as a hybrid between the Monte Carlo (MC) and MOC methods. Its three largest advantages compared to MOC are that it can handle arbitrary 3D geometries, it offers extreme improvements in memory efficiency, and it allows for significant reductions in algorithmic complexity on some simulation problems. It also offers a much lower time to solution compared to MC methods. In this thesis, we introduce the TRRM algorithm and a parallel implementation of it known as the Advanced Random Ray Code (ARRC). We then evaluate its capabilities using a series of benchmark problems and compare the results to traditional deterministic MOC methods. A full core simulation is run to assess the performance characteristics of the algorithm at massive scale. We also discuss the various methods to parallelize the algorithm, including domain decomposition, and investigate the new method's scaling characteristics on two current supercomputers, the IBM Blue Gene/Q Mira and the Cray XC40 Theta. The results of these studies show that TRRM is capable of breakthrough performance and accuracy gains compared to existing methods, enabling general, full core 3D high-fidelity simulations that were previously out of reach.
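Both MOC and TRRM rest on the same per-segment transport solution (this is the standard textbook form, summarized here for context rather than quoted from the thesis): along a characteristic segment of length \ell through a flat-source region with total cross-section \Sigma_t and source Q, the angular flux updates as

    \psi_{out} = \psi_{in} e^{-\Sigma_t \ell} + (Q / \Sigma_t)(1 - e^{-\Sigma_t \ell}).

The essential difference is that TRRM samples ray starting points and directions randomly at each iteration rather than sweeping a fixed, precomputed track laydown, which is what removes the need to store the enormous 3D track data that makes naive 3D MOC memory-prohibitive.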
Thesis: Ph. D. in Computational Nuclear Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 177-188).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119038</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Full core 3D neutron transport simulation using the method of characteristics with linear sources</title>
<link>https://hdl.handle.net/1721.1/119030</link>
<description>Full core 3D neutron transport simulation using the method of characteristics with linear sources
Gunow, Geoffrey Alexander
The development of high-fidelity multi-group neutron transport-based simulation tools for full core Light Water Reactor (LWR) analysis has been a long-standing goal of the reactor physics community. While direct transport simulations have previously been far too computationally expensive, advances in computer hardware have allowed large-scale simulations to become feasible. Therefore, many have focused on developing full core neutron transport solvers that do not incorporate the approximations and assumptions of traditional nodal diffusion solvers. Due to the computational expense of direct full core 3D deterministic neutron transport methods, much effort has focused on 2D/1D methods, which solve 3D problems as a coupled system of radial and axial transport problems. However, the coupling of radial and axial problems also introduces approximations. Instead, the work in this thesis focuses on explicitly solving the 3D deterministic neutron transport equations with the Method of Characteristics (MOC). MOC has been widely used for 2D lattice physics calculations due to its ability to accurately and efficiently simulate reactor physics problems with explicit geometric detail. The work in this thesis strives to overcome the significant computational cost of solving the 3D MOC equations by implementing efficient track generation, axially extruded ray tracing, Coarse Mesh Finite Difference (CMFD) acceleration, linear track-based source approximations, and scalable domain decomposition. Transport-corrected cross-sections are used to account for anisotropic scattering without needing to store angular-dependent sources. Additionally, significant attention has been given to complications that arise in full core simulations with transport-corrected cross-sections. The convergence behavior of transport methods is analyzed, leading to a new strategy for stabilizing the source iteration scheme for neutron transport simulations. The methods are incorporated into the OpenMOC reactor physics code and simulation results are presented for the full core BEAVRS LWR benchmark. Parameter refinement studies and comparisons with reference OpenMC Monte Carlo solutions show that converged full core 3D MOC simulations are feasible on modern supercomputers for the first time.
Thesis: Ph. D. in Computational Nuclear Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 269-274).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119030</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic regional ocean predictions : stochastic fields and optimal planning</title>
<link>https://hdl.handle.net/1721.1/115733</link>
<description>Probabilistic regional ocean predictions : stochastic fields and optimal planning
Narayanan Subramani, Deepak
The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex, with multiple spatial and temporal scales and nonstationary heterogeneous statistics. Due to the limited measurements, there are multiple sources of uncertainties, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. To reduce uncertainties and allow long-duration measurements, the energy consumption of ocean observing platforms needs to be optimized. Predicting the distributions of reachable regions, time-optimal paths, and risk-optimal paths in uncertain, strong and dynamic flows is also essential for their optimal and safe operations. Motivated by the above needs, the objectives of this thesis are to develop and apply the theory, schemes, and computational systems for: (i) Dynamically Orthogonal ocean primitive-equations with a nonlinear free-surface, in order to quantify uncertainties and predict probabilities for four-dimensional (time and 3-d in space) coastal ocean states, respecting their nonlinear governing equations and non-Gaussian statistics; (ii) Stochastic Dynamically Orthogonal level-set optimization to rigorously incorporate realistic ocean flow forecasts and plan energy-optimal paths of autonomous agents in coastal regions; (iii) Probabilistic predictions of reachability, time-optimal paths and risk-optimal paths in uncertain, strong and dynamic flows. For the first objective, we further develop and implement our Dynamically Orthogonal (DO) numerical schemes for idealized and realistic ocean primitive equations with a nonlinear free-surface. The theoretical extensions necessary for the free-surface are completed. DO schemes are researched and DO terms, functions, and operations are implemented, focusing on: state variable choices; DO norms; the DO condition for flows with a dynamic free-surface; diagnostic DO equations for pressure, barotropic velocities and density terms; non-polynomial nonlinearities; semi-implicit time-stepping schemes; and re-orthonormalization consistent with leap-frog time marching. We apply the new DO schemes, as well as their theoretical extensions and efficient serial implementation, to forecast idealized-to-realistic stochastic coastal ocean dynamics. For the realistic simulations, probabilistic predictions for the Middle Atlantic Bight region, Northwest Atlantic, and northern Indian Ocean are showcased. For the second objective, we integrate data-driven ocean modeling with our stochastic DO level-set optimization to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle Atlantic Bight region. We compute the energy-optimal paths from among exact time-optimal paths. For ocean currents, we utilize a data-assimilative multiscale re-analysis, combining observations with implicit two-way nested multi-resolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. We solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. For each arrival time, we then select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle speed and total energy. The corresponding energy-optimal paths and headings can then be obtained through a particle backtracking equation.
For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. For the third objective, we develop and apply stochastic level-set PDEs that govern the stochastic time-optimal reachability fronts and paths for vehicles in uncertain, strong, and dynamic flow fields. To solve these equations efficiently, we again employ their dynamically orthogonal reduced-order projections. We develop the theory and schemes for risk-optimal planning by combining decision theory with our stochastic time-optimal planning equations. The risk-optimal planning proceeds in three steps: (i) obtain predictions of the probability distribution of environmental flows, (ii) obtain predictions of the distribution of exact time-optimal paths for the forecast flow distribution, and (iii) compute and minimize the risk of following these uncertain time-optimal paths. We utilize the new equations to complete stochastic reachability, time-optimal and risk-optimal path planning in varied stochastic quasi-geostrophic flows. The effects of the flow uncertainty on the reachability fronts and time-optimal paths are explained. The risk of following each exact time-optimal path is evaluated, and risk-optimal paths are computed for different risk tolerance measures. Key properties of the risk-optimal planning are finally discussed. Theoretically, the present methodologies are PDE-based and compute stochastic ocean fields and optimal path predictions without heuristics. Computationally, they are several orders of magnitude faster than direct Monte Carlo. Such technologies have several commercial and societal applications. Specifically, the probabilistic ocean predictions can be input to a technical decision aid for a sustainable fisheries co-management program in India, which has the potential to provide environmentally friendly livelihoods to millions of marginal fishermen. The risk-optimal path planning equations can be employed in real time for efficient ship routing to reduce greenhouse gas emissions and save operational costs.
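The time-optimal reachability-front evolution underlying the planning above can be written as a Hamilton-Jacobi level-set PDE (shown schematically here; the thesis develops its stochastic DO reduction): the front is the zero level set of a scalar field \phi governed by

    \partial \phi / \partial t + F |\nabla \phi| + v(x, t) \cdot \nabla \phi = 0,

where F is the vehicle speed and v the ocean flow field. The first arrival time at a point is the time at which \phi there first crosses zero, and optimal paths are recovered by backtracking trajectories from the target through the evolving fronts.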
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.; Cataloged from PDF version of thesis. "Submitted to the Department of Mechanical Engineering and Center for Computational Engineering."; Includes bibliographical references (pages 253-268).
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/115733</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct and adaptive quantification schemes for extreme event statistics in complex dynamical systems</title>
<link>https://hdl.handle.net/1721.1/113542</link>
<description>Direct and adaptive quantification schemes for extreme event statistics in complex dynamical systems
Mohamad, Mustafa A
Quantifying extreme events is a central issue for many technological processes and natural phenomena. As extreme events, we consider transient responses that push the system away from its statistical steady state and that correspond to large excursions. Complex systems exhibiting extreme events include dynamical systems found in nature, such as the occurrence of anomalous weather and climate events, turbulence, the formation of freak waves in the ocean and in optics, and dynamical systems in engineering applications, including mechanical components under environmental loads, ship rolling and capsizing, critical events in power grids, as well as chemical reactions and conformational changes in molecules. It has been recognized that extreme events occur more frequently than Gaussian statistics suggest and thus occur often enough that they have practical consequences, and sometimes catastrophic outcomes, that are important to understand and predict. A hallmark characteristic of extreme events in complex dynamical systems is non-Gaussian statistics (e.g. heavy tails) in the probability density function (pdf) describing the response of their observables. For engineers and applied mathematicians, a central issue is how to efficiently and accurately describe this non-Gaussian behavior. For random dynamical systems with inherently nonlinear dynamics, expressed through intermittent events, nonlinear energy transfers, broad energy spectra, and large intrinsic dimensionality, it is largely the case that we are limited to (direct) Monte Carlo sampling, which is too expensive to apply in real-world applications. To address these challenges, we present both direct and adaptive (sampling-based) strategies designed to quantify the probabilistic aspects of extreme events in complex dynamical systems, effectively and efficiently. Specifically, we first develop a direct quantification framework that involves a probabilistic decomposition that separately considers intermittent, extreme events and the background stochastic attractor of the dynamical system. This decomposition requires knowledge of the dynamical mechanisms that are responsible for extreme events and partitions the phase space accordingly. We then apply different uncertainty quantification schemes to the two decomposed dynamical regimes: the background attractor and the intermittent, extreme-event component. The background component, describing the 'core' of the pdf, although potentially very high-dimensional, can be efficiently described by uncertainty quantification schemes that resolve low-order statistics. On the other hand, the intermittent component, related to the tails, can be described in terms of a low-dimensional representation by a small number of modes through a reduced-order model of the extreme events. The probabilistic information from these two regimes is then synthesized according to a total probability law argument, to effectively approximate the heavy-tailed, non-Gaussian probability density function for quantities of interest. The method is demonstrated through numerous applications and examples, including the analytical and semi-analytical quantification of the heavy-tailed statistics in mechanical systems under random impulsive excitations (modeling slamming events in high-speed craft motion), oscillators undergoing transient parametric resonances and instabilities (modeling ship rolling in irregular seas and beam bending), and extreme events in nonlinear Schrödinger-based equations (modeling rogue waves in the deep ocean).
The proposed algorithm is shown to accurately describe tail statistics in all of these examples and is demonstrated to be many orders of magnitude faster than direct Monte Carlo simulations. The second part of this thesis involves the development of adaptive, sampling-based strategies that aim to accurately estimate the probability distribution and extreme response statistics of a scalar observable, or quantity of interest, through a minimum number of experiments (numerical simulations). These schemes do not require specialized knowledge of the dynamics, nor understanding of the mechanisms that cause or trigger extreme responses. For numerous complex systems it may not be possible, or may be very challenging, to analyze and quantify conditions that lead to extreme responses, or even to obtain an accurate description of the dynamics of all the processes that are significant. To address this important class of problems, we develop a sequential algorithm that provides the next-best design point (set of experimental parameters) that leads to the largest reduction in the error of the probability density function estimate for the scalar quantity of interest when the adaptively predicted design point is evaluated. The proposed algorithm utilizes Gaussian process regression to infer dynamical properties of the quantity of interest, which is then used to estimate the desired pdf along with uncertainty bounds. We iteratively determine new design points through an optimization procedure that finds the optimal point in parameter space that maximally reduces uncertainty between the estimated bounds of the posterior pdf estimate of the observable. We provide theorems that guarantee convergence of the algorithm and analyze its asymptotic behavior. The adaptive sampling method is illustrated with an example in ocean engineering. We apply the algorithm to estimate the non-Gaussian statistics describing the loads on an offshore platform in irregular seas. The response of the platform is quantified through three-dimensional smoothed particle hydrodynamics simulations. Because of the extreme computational cost of these numerical models, quantification of the extreme event statistics for such systems has been a formidable challenge. We demonstrate that the adaptive algorithm accurately quantifies the extreme event statistics of the loads on the structure through a small number of numerical experiments, showcasing that the proposed algorithm can realistically account for extreme events in the design and optimization processes for large-scale engineering systems.
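The total-probability synthesis used in the first part can be summarized in one line (schematic notation): writing B for the background regime and E for the intermittent, extreme-event regime, the response pdf is assembled as

    f_q(q) = f_{q|B}(q) P(B) + f_{q|E}(q) P(E),

with the core f_{q|B} resolved by low-order uncertainty quantification of the attractor and the tail f_{q|E} supplied by the reduced-order model of the extreme events.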
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 171-183).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113542</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulation methods for plasmonic structures</title>
<link>https://hdl.handle.net/1721.1/112460</link>
<description>Simulation methods for plasmonic structures
Vidal-Codina, Ferran
In recent years there has been a growing interest in studying electromagnetic wave propagation at the nanoscale. The interaction of light with metallic nanostructures produces a collective excitation of conduction electrons at the metal surface, also known as surface plasmons. These plasmonic resonances enable an unprecedented control of light by confining the electromagnetic field to regions well beyond the diffraction limit, thereby leading to near-field enhancements of the incident wave of several orders of magnitude. These remarkable properties have motivated the application of plasmonic devices in sensing, nano-resolution imaging, energy harvesting, nanoscale electronics and cancer treatment. Although state-of-the-art nanofabrication techniques are used to realize plasmonic devices, their performance is severely impacted by fabrication uncertainties arising from extreme manufacturing constraints. Mathematical modeling and numerical simulation are therefore essential to accurately predict the response of the physical system, and must be incorporated in the design process. Nonetheless, plasmonic simulations present notable challenges. From the physical perspective, the realistic behavior of conduction electrons in metallic nanostructures is not captured by Maxwell's equations, thus requiring additional modeling. From the simulation perspective, the disparity in length scales stemming from the extreme field localization exceeds the capabilities of most numerical simulation schemes. In addition, relevant data such as optical constants or geometry specifications are typically subject to measurement and manufacturing errors, hence simulations need to accommodate uncertainty in the data. In this thesis we present a collection of numerical methods to efficiently simulate electromagnetic wave propagation through metallic nanostructures. Firstly, we develop the hybridizable discontinuous Galerkin (HDG) method for Maxwell's equations augmented with the hydrodynamic model for metals, which accounts for the nonlocal interactions between electrons that become predominant at nanometric regimes. Secondly, we develop a reduced order modeling (ROM) framework for Maxwell's equations with the HDG method, enabling the incorporation of material and geometric uncertainties in the simulations. The result is a family of surrogate models that produces accurate yet inexpensive simulations of plasmonic devices. Finally, we apply these approaches to the study of periodic annular nanogaps, and present parametric analyses, verification with experimental data, and the design of novel structures.
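For orientation, one common linearized frequency-domain form of the hydrodynamic model couples the free-electron current density J to the electric field E through a nonlocal pressure term (quoted here as a standard reference form; the thesis's precise formulation and conventions may differ):

    \beta^2 \nabla(\nabla \cdot J) + \omega(\omega + i\gamma) J = i \omega \varepsilon_0 \omega_p^2 E,

where \omega_p is the plasma frequency, \gamma the damping rate, and \beta the nonlocal parameter proportional to the Fermi velocity; setting \beta = 0 recovers the local Drude model.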
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 129-148).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/112460</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating fluid-solid interaction using smoothed particle hydrodynamics method</title>
<link>https://hdl.handle.net/1721.1/109642</link>
<description>Simulating fluid-solid interaction using smoothed particle hydrodynamics method
Pan, Kai, Ph. D. Massachusetts Institute of Technology
Fluid-solid interaction (FSI) is a challenging process for numerical models, since it requires accounting for the interactions of deformable materials that are governed by different equations of state. It calls for the modeling of large deformation, geometrical discontinuity, material failure, including crack propagation, and the computation of flow-induced loads on evolving fluid-solid interfaces. Using particle methods with no prescribed geometric linkages allows high deformations to be dealt with easily in cases where grid-based methods would introduce difficulties. The Smoothed Particle Hydrodynamics (SPH) method is one of the oldest mesh-free methods, and it has gained popularity over the last decades for simulating fluids initially and, more recently, solids. This dissertation is focused on developing a general numerical modeling framework based on SPH to model the coupled problem, with application to wave impact on floating offshore structures, and the hydraulic fracturing of rocks induced by fluid pressure. An accurate estimate of forces exerted by waves on offshore structures is vital to assess potential risks to structural integrity. The dissertation first explores a weakly compressible SPH method to simulate the wave impact on rigid-body floating structures. Model predictions are validated against two sets of experimental data, namely the dam-break fluid impact on a fixed structure, and the wave-induced motion of a floating cube. Following validation, this framework is applied to simulation of the impact of large waves on an offshore structure. A new numerical technique is proposed for generating multi-modal and multi-directional sea waves with SPH. The waves are generated by moving the side boundaries of the fluid domain according to the sum of Fourier modes, each with its own direction, amplitude and wave frequency. By carefully selecting the amplitudes and the frequencies, the ensemble of wave modes can be chosen to satisfy a real sea wave spectrum. The method is used to simulate an extreme wave event, with generally good agreement between the simulated waves and the recorded real-life data. The second application is the modeling of hydro-fracture initiation and propagation in rocks. A new general SPH numerical coupling method is developed to model the interaction between fluids and solids, which includes non-linear deformation and dynamic fracture initiation and propagation. A Grady-Kipp damage model is employed to model the tensile failure of the solid and a Drucker-Prager plasticity model is used to predict material shear failures. These models are coupled together so that both shear and tensile failures can be simulated within the same scheme. Fluid and solid are treated as a single system for the entire domain, and are computed using the same stress representation within a uniform SPH framework. Two new stress coupling approaches are proposed to maintain the stress continuity at the fluid-solid interface, namely, a continuum approach and a stress-boundary-condition approach. A corrected form of the density continuity equation is implemented to handle the density discontinuity of the two phases at the interface. The method is validated against analytic solutions for a hydrostatic problem and for a pressurized borehole in the presence of in-situ stresses. The simulation of hydro-fracture initiation and propagation in the presence of in-situ stresses is also presented.
The good agreement of these results demonstrates that SPH has the potential to accurately simulate the hydraulic-fracturing phenomenon in rocks.
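For readers unfamiliar with SPH, its defining kernel interpolation is easy to sketch. The following is a generic illustration of summation density with the standard cubic-spline kernel, independent of this thesis's specific models; the particle counts and masses are arbitrary toy values.

    import numpy as np

    def cubic_spline_w(r, h):
        # Standard cubic-spline SPH kernel in 3-D, normalization 1/(pi h^3).
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        w = np.where(q > 2.0, 0.0,
            np.where(q > 1.0,
                     0.25 * (2.0 - q)**3,
                     1.0 - 1.5 * q**2 + 0.75 * q**3))
        return sigma * w

    def sph_density(pos, mass, h):
        # Summation density: rho_i = sum_j m_j W(|r_i - r_j|, h).
        # O(n^2) for clarity; production codes use neighbor lists.
        rho = np.zeros(len(pos))
        for i in range(len(pos)):
            r = np.linalg.norm(pos - pos[i], axis=1)
            rho[i] = np.sum(mass * cubic_spline_w(r, h))
        return rho

    # usage: 1000 particles of total mass 1 kg filling a 10 cm cube,
    # so the interior density should come out near 1000 kg/m^3.
    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 0.1, size=(1000, 3))   # positions [m]
    mass = np.full(1000, 1.0e-3)                  # particle masses [kg]
    print(sph_density(pos, mass, h=0.01))

Every field (velocity, stress, pressure) is interpolated the same way, which is what lets fluid and solid be advanced within one uniform particle framework.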
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 97-102).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/109642</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental study and modeling analysis of ion transport membranes for methane partial oxidation and oxyfuel combustion</title>
<link>https://hdl.handle.net/1721.1/108949</link>
<description>Experimental study and modeling analysis of ion transport membranes for methane partial oxidation and oxyfuel combustion
Dimitrakopoulos, Georgios T
The atmospheric concentration of CO2 has recently exceeded 400 ppm (up from 285 ppm in 1850), largely because of the burning of fossil fuels. Despite the growth of alternatives, these fuels will continue to play a major role in the energy sector for many decades. In accordance with international agreements, action to curtail CO2 emissions is necessary, including carbon capture, reuse and storage. For this purpose, some of the leading technologies are oxy-combustion for power generation and partial oxidation for syngas production. Both require significant quantities of oxygen, whose production can impose considerable energy and economic penalties. Alternative technologies, such as intermediate-temperature ceramic membranes operating under reactive conditions, promise to ameliorate both penalties. Challenges include the long-term stability of the material, reactor design and integration into the overall system. The goal of this thesis is to develop a framework for the thermochemical and electrochemical modeling of oxygen-conducting membranes that can be used in reactor design, based on experimental measurements and detailed surface exchange kinetics and charged species transport. La0.9Ca0.1FeO3-δ (LCF) perovskite membranes have been used because of their long-term stability in a reducing environment. Using experimental measurements, we examine the impact of hydrogen, carbon monoxide and methane on oxygen permeation and defect chemistry. While LCF exhibits low flux under non-reactive conditions, in the presence of fuel, oxygen permeation increases by more than one order of magnitude. Our experiments confirm that hydrogen surface oxidation is faster compared to that of carbon monoxide. With methane, syngas production is slow and oxygen permeation is limited by surface exchange on the permeate side. Adding CO2 to the fuel stream doubles the oxygen flux and increases syngas production by an order of magnitude. Our modeling analysis shows that different oxidation states of Fe participate in the electron transfer process. To account for this dependency, oxygen transport is modeled using a multi-step (fuel-dependent) surface reaction mechanism that preserves thermodynamic consistency and conserves site balance and electroneutrality. Charged species diffusion is modeled using the dilute-limit Poisson-Nernst-Planck formulation, which accounts for transport due to the concentration gradient as well as electromigration. We use the experimental data to extract the kinetic parameters of the model. We couple the aforementioned model with CFD of the gas-phase transport and thermochemistry in an effort to develop a numerical tool that allows the design of membrane reactors that exhibit high oxygen permeation and fuel conversion.
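The dilute-limit Poisson-Nernst-Planck description referenced above combines a Nernst-Planck flux for each charged defect species with Poisson's equation for the electrostatic potential (standard form, summarized here for orientation):

    J_k = -D_k ( \nabla c_k + (z_k F / R T) c_k \nabla \phi ),        -\nabla \cdot (\varepsilon \nabla \phi) = F \sum_k z_k c_k,

where c_k, z_k, and D_k are the concentration, charge number, and diffusivity of species k (e.g., oxygen vacancies, electrons, holes), so transport is driven by both the concentration gradient and electromigration.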
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 211-223).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108949</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Model order reduction methods for data assimilation : state estimation and structural health monitoring</title>
<link>https://hdl.handle.net/1721.1/108942</link>
<description>Model order reduction methods for data assimilation : state estimation and structural health monitoring
Taddei, Tommaso
The objective of this thesis is to develop and analyze model order reduction approaches for the efficient integration of parametrized mathematical models and experimental measurements. Model Order Reduction (MOR) techniques for parametrized Partial Differential Equations (PDEs) offer new opportunities for the integration of models and experimental data. First, MOR techniques speed up computations, allowing better explorations of the parameter space. Second, MOR provides actionable tools to compress our prior knowledge about the system coming from the parametrized best-knowledge model into low-dimensional and more manageable forms. In this thesis, we demonstrate how to take advantage of MOR to design computational methods for two classes of problems in data assimilation. In the first part of the thesis, we discuss and extend the Parametrized-Background Data-Weak (PBDW) approach for state estimation. PBDW combines a parametrized best-knowledge mathematical model and experimental data to rapidly estimate the system state over the domain of interest using a small number of local measurements. The approach relies on projection-by-data, and exploits model reduction techniques to encode the knowledge of the parametrized model into a linear space appropriate for real-time evaluation. In this work, we extend the PBDW formulation in three ways. First, we develop an experimental a posteriori estimator for the error in the state. Second, we develop computational procedures to construct local approximation spaces in subregions of the computational domain in which the best-knowledge model is defined. Third, we present an adaptive strategy to handle experimental noise in the observations. We apply our approach to a companion heat transfer experiment to prove the effectiveness of our technique. In the second part of the thesis, we present a model-order reduction approach to simulation-based classification, with particular application to Structural Health Monitoring (SHM). The approach exploits (i) synthetic results obtained by repeated solution of a parametrized PDE for different values of the parameters, (ii) machine-learning algorithms to generate a classifier that monitors the state of damage of the system, and (iii) a reduced basis method to reduce the computational burden associated with the model evaluations. The approach is based on an offline/online computational decomposition. In the offline stage, the fields associated with many different system configurations, corresponding to different states of damage, are computed and then employed to train a classifier. Model reduction techniques, ideal for this many-query context, are employed to reduce the computational burden associated with the parameter exploration. In the online stage, the classifier is used to associate measured data to the relevant diagnostic class. In developing our approach for SHM, we focus on two specific aspects. First, we develop a mathematical formulation which properly integrates the parametrized PDE model within the classification problem. Second, we present a sensitivity analysis to take into account the error in the model. We illustrate our method and demonstrate its effectiveness through the vehicle of a particular companion experiment, a harmonically excited microtruss.
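In schematic form (notation assumed here for illustration), the PBDW estimate splits the state as u* = z* + \eta*, where z* lies in an N-dimensional background space Z_N distilled from the parametrized model and \eta* is a minimum-norm update determined by the M local measurements:

    \min_{z \in Z_N, \eta} \|\eta\|^2    subject to    \ell_m(z + \eta) = y_m,    m = 1, ..., M,

so the model supplies the bulk of the state estimate and the data correct precisely what the model cannot represent.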
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 243-258).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108942</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of macroscopic nanoporous graphene membranes for gas separation</title>
<link>https://hdl.handle.net/1721.1/108931</link>
<description>Development of macroscopic nanoporous graphene membranes for gas separation
Boutilier, Michael S. H
Separating components of a gas from a mixture is a critical step in several important industrial processes, including natural gas purification, hydrogen production, carbon dioxide sequestration, and oxy-combustion. For such applications, gas separation membranes are attractive because they offer relatively low energy costs, but they can be limited by low flow rates and low selectivities. Nanoporous graphene membranes have the potential to exceed the permeance and selectivity limits of existing gas separation membranes. This is made possible by the atomic thickness of the material, which can support sub-nanometer pores that enable molecular sieving while presenting low resistance to permeate flow. The feasibility of gas separation by graphene nanopores has been demonstrated experimentally on micron-scale areas of graphene. However, scaling up to macroscopic sizes presents significant challenges, including graphene imperfections and control of the selective nanopore size distribution across large areas. The overall objective of this thesis research is to develop macroscopic graphene membranes for gas separation. Investigation reveals that the inherent permeance of large areas of graphene results from the presence of micron-scale tears and nanometer-scale intrinsic defects. Stacking multiple graphene layers is shown to reduce leakage exponentially. A model is developed for the inherent permeance of multi-layer graphene and shown to accurately explain measured flow rates. Applying this model to membranes with created selective pores, it is predicted that, by proper choice of the support membrane beneath the graphene or adequate leakage sealing, it should be possible to construct a selectively permeable graphene membrane despite the presence of defects. Interfacial polymerization and atomic layer deposition steps during membrane fabrication are shown to effectively seal micron-scale tears and nanometer-scale defects in graphene. The support membrane is designed to isolate intrinsic defects and reduce leakage through tears. Methods of creating a high density of selectively permeable nanopores are explored. Knudsen selectivity is achieved using macroscopic three-layer graphene membranes on polymer supports by high-density ion bombardment. Separation ratios exceeding the Knudsen effusion limit are achieved with single-layer graphene on optimized supports by low-density ion bombardment followed by oxygen plasma etching, providing evidence of molecular-sieving-based gas separation through centimeter-scale graphene membranes.
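The exponential leakage reduction from stacking has a simple illustrative reading (an assumption-laden sketch, not the thesis's transport model): if each independently placed layer leaves an uncovered-defect area fraction p, an open path survives n layers with probability of order

    p^n,    e.g. p = 10^{-2} and n = 3 give 10^{-6},

which is why even a few stacked layers can suppress defect leakage dramatically.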
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.; Page 230 blank. Cataloged from PDF version of thesis.; Includes bibliographical references (pages 221-229).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108931</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation</title>
<link>https://hdl.handle.net/1721.1/108918</link>
<description>Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation
Gorodetsky, Alex Arkady
Optimal decision making under uncertainty is critical for control and optimization of complex systems. However, many techniques for solving problems such as stochastic optimal control and data assimilation encounter the curse of dimensionality when too many state variables are involved. In this thesis, we propose a framework for computing with high-dimensional functions that mitigates this exponential growth in complexity for problems with separable structure. Our framework tightly integrates two emerging areas: tensor decompositions and continuous computation. Tensor decompositions are able to effectively compress and operate with low-rank multidimensional arrays. Continuous computation is a paradigm for computing with functions instead of arrays, and it is best realized by Chebfun, a MATLAB package for computing with functions of up to three dimensions. Continuous computation provides a natural framework for building numerical algorithms that effectively, naturally, and automatically adapt to problem structure. The first part of this thesis describes a compressed continuous computation framework centered around a continuous analogue to the (discrete) tensor-train decomposition called the function-train decomposition. Computation with the function-train requires continuous matrix factorizations and continuous numerical linear algebra. Continuous analogues are presented for performing cross approximation; rounding; multilinear algebra operations such as addition, multiplication, integration, and differentiation; and continuous, rank-revealing, alternating least squares. Advantages of the function-train over the tensor-train include the ability to adaptively approximate functions and the ability to compute with functions that are parameterized differently. For example, while elementwise multiplication between tensors of different sizes is undefined, functions in FT format can be readily multiplied together. Next, we develop compressed versions of value iteration, policy iteration, and multilevel algorithms for solving dynamic programming problems arising in stochastic optimal control. These techniques enable computing global solutions to a broader set of problems, for example those with non-affine control inputs, than previously possible. Examples are presented for motion planning with robotic systems that have up to seven states. Finally, we use the FT to extend integration-based Gaussian filtering to larger state spaces than previously considered. Examples are presented for dynamical systems with up to twenty states.
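The function-train format parallels the tensor-train exactly (shown schematically, with notation assumed): a d-variate function is represented as

    f(x_1, ..., x_d) = \sum_{\alpha_1=1}^{r_1} \cdots \sum_{\alpha_{d-1}=1}^{r_{d-1}} f_1^{(1,\alpha_1)}(x_1) f_2^{(\alpha_1,\alpha_2)}(x_2) \cdots f_d^{(\alpha_{d-1},1)}(x_d),

a product of matrix-valued univariate functions whose dimensions are the ranks r_k. Storage and arithmetic then scale linearly in the dimension d rather than exponentially, which is what makes the dynamic programming and filtering applications above tractable.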
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 205-214).
</description>
<pubDate>Sun, 01 Jan 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/108918</guid>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A study of Taylor bubbles in vertical and inclined slug flow using multiphase CFD with level set</title>
<link>https://hdl.handle.net/1721.1/104232</link>
<description>A study of Taylor bubbles in vertical and inclined slug flow using multiphase CFD with level set
Lizarraga-García, Enrique
Slug flow commonly occurs in gas and oil systems. Current predictive methods are based on mechanistic models, which require the use of closure relations to complement the conservation equations to predict integral flow parameters such as liquid holdup (or void fraction) and pressure gradient. These closure relations are typically developed either empirically or from semi-empirical models assuming idealized geometry of the interface, and thus they carry the highest uncertainties in the mechanistic models. In this work, sensitivity analysis has determined that Taylor bubble velocity in slug flow is one such closure relation that significantly affects the calculation of these parameters. The main objective is to develop a unified higher-fidelity closure relation for Taylor bubble velocity. Here, we employ a novel approach to overcome the experimental limitations: validated 3D Computational Multiphase Fluid Dynamics (CMFD) with Interface Tracking Methods (ITMs), in which the interface is tracked with a level-set method implemented in the commercial code TransAT®. In the literature, the Taylor bubble velocity is modeled based on two different contributions: (i) the drift velocity, i.e., the velocity of propagation of a Taylor bubble in stagnant liquid, and (ii) the liquid flow contribution. Here, we first analyze the dynamics of Taylor bubbles in stagnant liquid by generating a large numerical database that covers the broadest range of fluid properties and pipe inclination angles explored to date (Eo ∈ [10, 700], Mo ∈ [1×10⁻⁶, 5×10³], and θ ∈ [0°, 90°]). A unified Taylor bubble velocity correlation, proposed for use as a slug flow closure relation in the mechanistic model, is derived from that database. The new correlation predicts the numerical database with 8.6% absolute average relative error and a coefficient of determination R² = 0.97, and other available experimental data with 13.0% absolute average relative error and R² = 0.84. By comparison, the second-best correlation reports absolute average relative errors of 120% and 37%, and R² = 0.40 and 0.17, respectively. Furthermore, two key assumptions made in the CMFD simulations are justified with simulations and experiments: (i) the lubricating liquid film formed above the bubble as the pipe inclines with respect to the horizontal does not break up, i.e., the gas phase never touches the pipe wall and a triple line is not formed; and (ii) the Taylor bubble length does not affect its dynamics in inclined pipes. To verify the robustness of the first assumption, the gravity-induced film drainage is analytically modeled and experimentally validated. From this model, a criterion to avoid film breakup is obtained, which holds in the simulations performed. The second assumption is validated with both experiments and simulations. Finally, simulations of Taylor bubbles in upward and downward fluid flow in vertical and inclined pipes are performed, from which it is concluded that an improvement of the current velocity prediction models is needed. In particular, Taylor bubbles in vertical downward flow, where the bubble becomes non-axisymmetric at high enough liquid flow rates, are remarkably ill-predicted by current correlations.
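For reference, the dimensionless groups spanned by the database are computed from fluid properties in the standard way; a minimal sketch (illustrative, not from the thesis; the air-water values are assumed examples):

    def eotvos(rho_liquid, rho_gas, g, diameter, sigma):
        """Eo: ratio of buoyancy to surface-tension forces."""
        return (rho_liquid - rho_gas) * g * diameter**2 / sigma

    def morton(rho_liquid, rho_gas, g, mu_liquid, sigma):
        """Mo: groups fluid properties only (no length or velocity scale)."""
        return g * mu_liquid**4 * (rho_liquid - rho_gas) / (rho_liquid**2 * sigma**3)

    # Air-water in a 50 mm pipe (SI units): Eo is about 340, Mo about 2.6e-11
    print(eotvos(998.0, 1.2, 9.81, 0.05, 0.072))
    print(morton(998.0, 1.2, 9.81, 1.0e-3, 0.072))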
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 203-220).
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/104232</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Path planning and adaptive sampling in the coastal ocean</title>
<link>https://hdl.handle.net/1721.1/103438</link>
<description>Path planning and adaptive sampling in the coastal ocean
Lolla, Sri Venkata Tapovan
When humans or robots operate in complex dynamic environments, the planning of paths and the collection of observations are basic, indispensable problems. In the oceanic and atmospheric environments, the concurrent use of multiple mobile sensing platforms in unmanned missions is growing very rapidly. Opportunities for a paradigm shift in the science of autonomy involve the development of fundamental theories to optimally collect information, learn, collaborate and make decisions under uncertainty while persistently adapting to and utilizing the dynamic environment. To address such pressing needs, this thesis derives governing equations and develops rigorous methodologies for optimal path planning and optimal sampling using collaborative swarms of autonomous mobile platforms. The application focus is the coastal ocean where currents can be much larger than platform speeds, but the fundamental results also apply to other dynamic environments. We first undertake a theoretical synthesis of minimum-time control of vehicles operating in general dynamic flows. Using various ideas rooted in non-smooth calculus, we prove that an unsteady Hamilton-Jacobi equation governs the forward reachable sets in any type of Lipschitz-continuous flow. Next, we show that with a suitable modification to the Hamiltonian, the results can be rigorously generalized to perform time-optimal path planning with anisotropic motion constraints and with moving obstacles and unsafe 'forbidden' regions. We then derive a level-set methodology for distance-based coordination of swarms of vehicles operating in minimum time within strong and dynamic ocean currents. The results are illustrated for varied fluid and ocean flow simulations. Finally, the new path planning system is applied to swarms of vehicles operating in the complex geometry of the Philippine Archipelago, utilizing realistic multi-scale current predictions from a data-assimilative ocean modeling system. In the second part of the thesis, we derive a theory for adaptive sampling that exploits the governing nonlinear dynamics of the system and captures the non-Gaussian structure of the random state fields. Optimal observation locations are determined by maximizing the mutual information between the candidate observations and the variables of interest. We develop a novel Bayesian smoother for high-dimensional continuous stochastic fields governed by general nonlinear dynamics. This smoother combines the adaptive reduced-order Dynamically-Orthogonal equations with Gaussian Mixture Models, extending linearized Gaussian backward pass updates to a nonlinear, non-Gaussian setting. The Bayesian information transfer, both forward and backward in time, is efficiently carried out in the evolving dominant stochastic subspace. Building on the foundations of the smoother, we then derive an efficient technique to quantify the spatially and temporally varying mutual information field in general nonlinear dynamical systems. The globally optimal sequence of future sampling locations is rigorously determined by a novel dynamic programming approach that combines this computation of mutual information fields with the predictions of the forward reachable set. All the results are exemplified and their performance is quantitatively assessed using a variety of simulated fluid and ocean flows. The above novel theories and schemes are integrated so as to provide real-time computational intelligence for collaborative swarms of autonomous sensing vehicles. 
The integrated system guides groups of vehicles along predicted optimal trajectories and continuously improves field estimates as the observations predicted to be most informative are collected and assimilated. The optimal sampling locations and optimal trajectories are continuously forecast, all in an autonomous and coordinated fashion.
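The forward-reachable-set computation can be illustrated with a first-order level-set evolution of the governing Hamilton-Jacobi front. A minimal sketch follows (illustrative only, with an assumed uniform current and periodic boundaries for brevity; not the thesis's numerical scheme):

    import numpy as np

    # Evolve phi_t + V . grad(phi) + F |grad(phi)| = 0 by first-order upwinding;
    # the zero level set of phi is the minimum-time reachable front.
    n, h, F, dt = 101, 0.02, 1.0, 0.005          # grid size, spacing, vehicle speed, time step
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = np.hypot(X + 0.5, Y) - 0.05            # signed distance to start point (-0.5, 0)
    U, V = 0.6 * np.ones_like(X), np.zeros_like(X)   # steady current toward +x

    for _ in range(80):
        pxm = (phi - np.roll(phi, 1, 0)) / h     # backward differences
        pxp = (np.roll(phi, -1, 0) - phi) / h    # forward differences
        pym = (phi - np.roll(phi, 1, 1)) / h
        pyp = (np.roll(phi, -1, 1) - phi) / h
        # upwind advection by the current
        adv = U * np.where(U > 0, pxm, pxp) + V * np.where(V > 0, pym, pyp)
        # Godunov-type approximation of |grad(phi)| for outward front motion
        gx = np.maximum(np.maximum(pxm, 0.0) ** 2, np.minimum(pxp, 0.0) ** 2)
        gy = np.maximum(np.maximum(pym, 0.0) ** 2, np.minimum(pyp, 0.0) ** 2)
        phi = phi - dt * (adv + F * np.sqrt(gx + gy))

    # Points where phi is non-positive are reachable by time t = 80 * dt.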
Thesis: Ph. D. in Mechanical Engineering and Computation, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 299-315).
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103438</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of fuel and oxidizer composition on premixed flame stabilization in turbulent swirling flows : dynamics and scaling</title>
<link>https://hdl.handle.net/1721.1/103437</link>
<description>Impact of fuel and oxidizer composition on premixed flame stabilization in turbulent swirling flows : dynamics and scaling
Taamallah, Soufien
The world relies on fossil fuels as its main energy source (86.7% in 1973, 81.7% in 2012). Several factors, including the abundance of resources and the existing infrastructure, suggest that this is likely to continue in the near future (potentially 75% in 2040). Meanwhile, climate change continues to be a pressing concern that calls for the development of low-CO2 energy systems. Among the most promising approaches are pre-combustion capture technologies, e.g., coal gasification and natural gas reforming, which produce hydrogen-rich fuels. Another approach is oxy-combustion, in which air is replaced by a mixture of O2/CO2/H2O as the oxidizer stream. However, modern gas turbines have been optimized to operate on methane-air combustion, and several challenges, notably thermo-acoustic instability, arise when using other fuels or oxidizers because of their different thermochemical and transport properties. While these phenomena constitute a major challenge under conventional operation, using hydrogen-rich fuels or a CO2-rich oxidizer exacerbates the problem by modifying the combustor stability map in ways that are not well understood. In this thesis, we identify the combustion modes most prone to dynamics, predict the onset of thermo-acoustic instability over a wide range of fuel and oxidizer compositions, and define parameters that can scale the data. To this end, a combination of experimental and numerical tools was deployed. We carried out a series of experiments in an optically accessible laboratory-scale swirl-stabilized combustor typical of those found in modern gas turbines, using high-speed chemiluminescence to examine the flame macrostructure, and high-speed Particle Image Velocimetry (PIV) and OH Planar Laser-Induced Fluorescence (PLIF) to probe the flow and flame microstructure. Numerical simulations were used to complement the experiments and examine the complex three-dimensional two-way interaction between the flame and the turbulent swirling flow. Experimental data were used to construct the stability maps for different CH4-H2 mixtures and to analyze the dynamic flame macrostructures and their transitions. A comparison with acoustically uncoupled combustion shows that the onset of thermo-acoustic instability is concomitant with a specific transition associated with the intermittent appearance of the flame in the outer recirculation zone (ORZ) and stabilization along the outer shear layer (OSL, forming between the swirling jet and the ORZ, as revealed by the PIV-PLIF data). The sudden onset of large-amplitude limit cycle oscillations and the observed hysteresis suggest the existence of a sub-critical Hopf bifurcation, typically characterized by a bistable or "triggering" zone; the flame intermittency in the ORZ can potentially provide the disturbance required to trigger these oscillations. Using a dual-camera method to track chemiluminescence in space and time, this flame transition was found to originate from a reacting kernel that detaches from the inner shear layer flame (forming between the jet and the vortex breakdown zone), reaches the ORZ, and spins at a specific frequency; its characteristic Strouhal number is independent of the Reynolds number and the fuel/oxidizer composition, and is a function only of the swirl strength. We propose a new Karlovitz-number-based criterion that defines the transition in a flow-time versus flame-time space, the former being the inverse of the spinning frequency and the latter the inverse of the flame extinction strain rate.
According to this scaling, the flame survives in the ORZ if and when it can overcome the region's bulk strain rate. This criterion is valid over a wide range of operating conditions and fuel and oxidizer compositions, covering fast to slow chemistry scenarios. Given the role of this flame transition in triggering the instability, the same criterion is applicable to predicting the onset of thermo-acoustic instability. The interaction of the turbulent swirling flow with the flame is further examined using large eddy simulations. The simulations show that the experimentally observed large-scale flame structures along the inner shear layer are due to a helical vortex core that originates at the swirler's centerbody. This vortical structure stays aligned with the centerline in the upstream section of the combustor, but bends and reaches the inner-shear-layer-stabilized flame around the sudden expansion, where it causes the flame wrinkling. We propose that the flame kernel igniting the ORZ/OSL observed in the experiment may be related to the interaction between the helical vortical structure and the outer shear layer.
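The proposed criterion compares a flow time with a flame time; the following minimal sketch computes the two dimensionless groups as described in the abstract (illustrative only, with hypothetical numbers, not values or formulas taken from the thesis):

    def strouhal(f_spin, diameter, bulk_velocity):
        """St = f D / U; the abstract reports St depends only on swirl strength."""
        return f_spin * diameter / bulk_velocity

    def karlovitz_like(f_spin, kappa_ext):
        """Ratio of flame time (1/extinction strain rate) to flow time (1/f_spin)."""
        return f_spin / kappa_ext

    # Hypothetical numbers: a kernel spinning at 40 Hz against a 500 1/s extinction
    # strain rate gives a ratio well below 1, so the flame would survive the ORZ.
    print(karlovitz_like(40.0, 500.0))
    print(strouhal(40.0, 0.038, 20.0))  # assumed combustor diameter and bulk velocity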
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 205-214).
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/103437</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient multiscale methods for micro/nanoscale solid state heat transfer</title>
<link>https://hdl.handle.net/1721.1/101537</link>
<description>Efficient multiscale methods for micro/nanoscale solid state heat transfer
Péraud, Jean-Philippe M. (Jean-Philippe Michel)
In this thesis, we develop methods for solving the linearized Boltzmann transport equation (BTE) in the relaxation-time approximation for describing small-scale solid-state heat transfer. We first discuss a Monte Carlo (MC) solution method that builds upon the deviational energy-based Monte Carlo method presented in [J.-P. Péraud and N.G. Hadjiconstantinou, Physical Review B, 84(20), p. 205331, 2011]. By linearizing the deviational Boltzmann equation we formulate a kinetic-type algorithm in which each computational particle is treated independently; this feature is shown to be a consequence of the energy-based formulation and the linearity of the governing equation, and results in an "event-driven" algorithm that requires no time discretization. In addition to a much simpler and more accurate algorithm (no time discretization error), this formulation leads to considerable speedup and memory savings, as well as the ability to efficiently treat materials with wide ranges of phonon relaxation times, such as silicon. A second, complementary, simulation method developed in this thesis is based on the adjoint formulation of the linearized BTE, also derived here. The adjoint formulation describes the dynamics of phonons travelling backward in time, that is, being emitted from the "detectors" and detected by the "sources" of the original problem. By switching the detector with the source in cases where the former is small, that is, when high accuracy is needed in small regions of phase-space, the adjoint formulation provides significant computational savings and in some cases renders previously intractable problems tractable. We also develop an asymptotic theory for solving the BTE at small Knudsen numbers, namely at scales where Monte Carlo methods or other existing computational methods become inefficient. The asymptotic approach, which is based on a Hilbert expansion of the distribution function, shows that the macroscopic equation governing heat transport for non-zero but small Knudsen numbers is the heat equation, albeit supplemented with jump-type boundary conditions. Specifically, we show that the traditional no-jump boundary condition is only applicable in the macroscopic limit where the Knudsen number approaches zero. Kinetic effects, always present at the boundaries, become increasingly important as the Knudsen number increases, and manifest themselves in the form of temperature jumps that enter as boundary conditions to the heat equation, as well as local corrections in the form of kinetic boundary layers that need to be superposed to the heat equation solution. We present techniques for efficiently calculating the associated jump coefficients and boundary layers for different material models when analytical results are not available. All results are validated using deviational Monte Carlo methods primarily developed in this thesis. We finally demonstrate that the asymptotic solution method developed here can be used for calculating the Kapitza conductance (and temperature jump) associated with the interface between materials.
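The event-driven character of the algorithm, i.e., sampling free-flight times between scattering events rather than marching on a time grid, can be illustrated with a toy one-dimensional sketch (illustrative only, not the thesis algorithm; parameter values are assumed):

    import math, random

    def free_flight_to_boundary(x0, vx, tau, L):
        """Advance one particle by exponentially distributed free flights
        (event-driven: no time-step grid), resampling the direction at each
        scattering event, until it leaves the slab [0, L]. Returns the exit
        time (for brevity, the final partial flight is not truncated)."""
        x, t = x0, 0.0
        while x > 0.0 and L - x > 0.0:
            dt = -tau * math.log(random.random())      # time to next scattering event
            x += vx * dt                               # ballistic advection between events
            t += dt
            vx = abs(vx) * random.choice([-1.0, 1.0])  # toy isotropic rescattering in 1D
        return t

    # Example: phonon-like carrier, 10 ps relaxation time, 6 km/s speed, 1-micron slab
    print(free_flight_to_boundary(5e-7, 6e3, 1e-11, 1e-6))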
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 193-199).
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/101537</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical approaches for sequential Bayesian optimal experimental design</title>
<link>https://hdl.handle.net/1721.1/101442</link>
<description>Numerical approaches for sequential Bayesian optimal experimental design
Huan, Xun
Experimental data play a crucial role in developing and refining models of physical systems. Some experiments can be more valuable than others, however. Well-chosen experiments can save substantial resources, and hence optimal experimental design (OED) seeks to quantify and maximize the value of experimental data. Common current practice for designing a sequence of experiments uses suboptimal approaches: batch (open-loop) design that chooses all experiments simultaneously with no feedback of information, or greedy (myopic) design that optimally selects the next experiment without accounting for future observations and dynamics. In contrast, sequential optimal experimental design (sOED) is free of these limitations. With the goal of acquiring experimental data that are optimal for model parameter inference, we develop a rigorous Bayesian formulation for OED using an objective that incorporates a measure of information gain. This framework is first demonstrated in a batch design setting, and then extended to sOED using a dynamic programming (DP) formulation. We also develop new numerical tools for sOED to accommodate nonlinear models with continuous (and often unbounded) parameter, design, and observation spaces. Two major techniques are employed to make solution of the DP problem computationally feasible. First, the optimal policy is sought using a one-step lookahead representation combined with approximate value iteration. This approximate dynamic programming method couples backward induction and regression to construct value function approximations. It also iteratively generates trajectories via exploration and exploitation to further improve approximation accuracy in frequently visited regions of the state space. Second, transport maps are used to represent belief states, which reflect the intermediate posteriors within the sequential design process. Transport maps offer a finite-dimensional representation of these generally non-Gaussian random variables, and also enable fast approximate Bayesian inference, which must be performed millions of times under nested combinations of optimization and Monte Carlo sampling. The overall sOED algorithm is demonstrated and verified against analytic solutions on a simple linear-Gaussian model. Its advantages over batch and greedy designs are then shown via a nonlinear application of optimal sequential sensing: inferring contaminant source location from a sensor in a time-dependent convection-diffusion system. Finally, the capability of the algorithm is tested for multidimensional parameter and design spaces in a more complex setting of the source inversion problem.
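The information-gain objective underlying this formulation is commonly estimated by nested Monte Carlo. A minimal sketch follows (illustrative of the batch-design objective only, not the thesis's approximate dynamic programming machinery; the linear-Gaussian toy model is an assumption):

    import numpy as np

    def expected_information_gain(prior_sample, simulate_y, log_likelihood, n=500, m=500):
        """Nested Monte Carlo estimate of the expected information gain (EIG):
        E[ log p(y|theta) - log p(y) ], with p(y) estimated by an inner prior average."""
        rng = np.random.default_rng(0)
        total = 0.0
        for _ in range(n):
            theta = prior_sample(rng)
            y = simulate_y(theta, rng)
            inner = np.mean([np.exp(log_likelihood(y, prior_sample(rng))) for _ in range(m)])
            total += log_likelihood(y, theta) - np.log(inner)
        return total / n

    # Toy linear-Gaussian experiment: y = d * theta + noise; EIG grows with |d|
    ll = lambda y, th, d=2.0, s=0.1: -0.5 * ((y - d * th) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    eig = expected_information_gain(
        lambda rng: rng.normal(), lambda th, rng, d=2.0, s=0.1: d * th + s * rng.normal(), ll)
    print(eig)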
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 175-186).
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/101442</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Finite element solution of interface and free surface three-dimensional fluid flow problems using flow-condition-based interpolation</title>
<link>https://hdl.handle.net/1721.1/97845</link>
<description>Finite element solution of interface and free surface three-dimensional fluid flow problems using flow-condition-based interpolation
You, Soyoung, Ph. D. Massachusetts Institute of Technology
Highly accurate simulation schemes for free-surface flows are needed in various industrial and scientific applications. To obtain an accurate response prediction, mass conservation must be satisfied. Because the fluid domain moves continuously, however, it is a challenge to maintain the volume of the fluid while calculating the dynamic response of free surfaces, especially when seeking solutions over long time durations. This thesis describes how the difficulty can be overcome by proper employment of an Arbitrary Lagrangian-Eulerian (ALE) method derived from the Reynolds transport theorem to compute unsteady Newtonian flows including fluid interfaces and free surfaces. The proposed method conserves mass very accurately and obtains stable and accurate results with very large solution steps and even coarse meshes. The continuum mechanics equations are formulated, and the Navier-Stokes equations are solved using a 'flow-condition-based interpolation' (FCBI) scheme. The FCBI method uses exponential interpolations derived from the analytical solution of the one-dimensional advection-diffusion equation. The thesis revisits the two-dimensional FCBI method with special focus on application to flow problems in highly nonlinear moving domains with interfaces and free surfaces, and develops an effective three-dimensional FCBI tetrahedral element for such applications. The newly developed 3-D FCBI solution scheme can solve a wide range of flow problems, since it can handle highly nonlinear and unsteady flow conditions even when large mesh distortions occur. Various example solutions are given to show the effectiveness of the developed solution schemes.
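The one-dimensional ingredient of the FCBI scheme can be written down directly from the analytical advection-diffusion solution. A minimal sketch (illustrative; the thesis element embeds such interpolations within multi-dimensional tetrahedral elements):

    import math

    def advection_diffusion_profile(x, peclet):
        """Analytical solution of the steady 1-D advection-diffusion equation on
        [0, 1] with end values 0 and 1, the basis of flow-condition-based
        (exponential) interpolation. Recovers linear interpolation as Pe tends to 0."""
        if abs(peclet) > 1e-12:
            return math.expm1(peclet * x) / math.expm1(peclet)
        return x  # diffusion-dominated limit

    def fcbi_interp(phi_left, phi_right, x, peclet):
        """Interpolate between nodal values with weights set by the local flow condition."""
        w = advection_diffusion_profile(x, peclet)
        return (1.0 - w) * phi_left + w * phi_right

    print(fcbi_interp(0.0, 1.0, 0.5, 10.0))  # ~0.0067: strongly upwinded
    print(fcbi_interp(0.0, 1.0, 0.5, 0.0))   # 0.5: central/linear limit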
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 103-106).
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97845</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport maps for accelerated Bayesian computation</title>
<link>https://hdl.handle.net/1721.1/97263</link>
<description>Transport maps for accelerated Bayesian computation
Parno, Matthew David
Bayesian inference provides a probabilistic framework for combining prior knowledge with mathematical models and observational data. Characterizing a Bayesian posterior probability distribution can be a computationally challenging undertaking, however, particularly when evaluations of the posterior density are expensive and when the posterior has complex non-Gaussian structure. This thesis addresses these challenges by developing new approaches for both exact and approximate posterior sampling. In particular, we make use of deterministic couplings between random variables, i.e., transport maps, to accelerate posterior exploration. Transport maps are deterministic transformations between (probability) measures. We introduce new algorithms that exploit these transformations as a fundamental tool for Bayesian inference. At the core of our approach is an efficient method for constructing transport maps using only samples of a target distribution, via the solution of a convex optimization problem. We first demonstrate the computational efficiency and accuracy of this method, exploring various parameterizations of the transport map, on target distributions of low-to-moderate dimension. Then we introduce an approach that composes sparsely parameterized transport maps with rotations of the parameter space, and demonstrate successful scaling to much higher dimensional target distributions. With these building blocks in place, we introduce three new posterior sampling algorithms. First is an adaptive Markov chain Monte Carlo (MCMC) algorithm that uses a transport map to define an efficient proposal mechanism. We prove that this algorithm is ergodic for the exact target distribution and demonstrate it on a range of parameter inference problems, showing multiple order-of-magnitude speedups over current state-of-the-art MCMC techniques, as measured by the number of effectively independent samples produced per model evaluation and per unit of wall clock time. Second, we introduce an algorithm for inference in large-scale inverse problems with multiscale structure. Multiscale structure is expressed as a conditional independence relationship that is naturally induced by many multiscale methods for the solution of partial differential equations, such as the multiscale finite element method (MsFEM). Our algorithm exploits the offline construction of transport maps that represent the joint distribution of coarse- and fine-scale parameters. We evaluate the accuracy of our approach via comparison to single-scale MCMC on a 100-dimensional problem, then demonstrate the algorithm on an inverse problem from flow in porous media that has over 10⁵ spatially distributed parameters. Our last algorithm uses offline computation to construct a transport map representation of the joint data-parameter distribution that allows for efficient conditioning on data. The resulting algorithm has two key attributes: first, it can be viewed as a "likelihood-free" approximate Bayesian computation (ABC) approach, in that it only requires samples, rather than evaluations, of the likelihood function. Second, it is designed for approximate inference in near-real-time. We evaluate the efficiency and accuracy of the method, with demonstration on a nonlinear parameter inference problem where excellent posterior approximations can be obtained in two orders of magnitude less online time than with a standard MCMC sampler.
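The idea of building a map from samples alone via convex optimization can be illustrated in one dimension. A minimal sketch follows (illustrative only: the cubic parameterization and the sample-average objective are assumptions for this toy, not the thesis's parameterization):

    import numpy as np
    from scipy.optimize import minimize

    # Build a monotone map S (target to standard normal) from samples alone by
    # minimizing the sample objective mean( S(x)^2 / 2 - log S'(x) ), which is
    # convex in the coefficients for this linear-in-coefficients parameterization.
    rng = np.random.default_rng(1)
    x = rng.normal(size=2000) ** 3          # skewed, heavy-tailed target samples

    def objective(coef):
        a, b, c = coef
        s = a + b * x + c * x**3
        ds = b + 3.0 * c * x**2             # S'(x) stays positive when b, c are positive
        return np.mean(0.5 * s**2 - np.log(ds))

    res = minimize(objective, [0.0, 1.0, 0.1],
                   bounds=[(None, None), (1e-6, None), (1e-6, None)])
    a, b, c = res.x
    z = a + b * x + c * x**3                # mapped samples: approximately N(0, 1)
    print(z.mean(), z.std())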
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.; This electronic version was submitted by the student author.  The certified thesis is available in the Institute Archives and Special Collections.; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 167-174).
</description>
<pubDate>Thu, 01 Jan 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/97263</guid>
<dc:date>2015-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A deviational Monte Carlo formulation of ab initio phonon transport and its application to the study of kinetic effects in graphene ribbons</title>
<link>https://hdl.handle.net/1721.1/92161</link>
<description>A deviational Monte Carlo formulation of ab initio phonon transport and its application to the study of kinetic effects in graphene ribbons
Landon, Colin Donald
We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are obtained from density functional theory calculations. The ab initio scattering operator replaces the commonly used relaxation-time approximation, which is known to neglect, among other things, coupling between out-of-equilibrium states. The latter is particularly important in two-dimensional materials such as graphene, which is the subject of this thesis. One important ingredient of the method presented here is an energy-conserving, stochastic particle algorithm for simulating the linearized form of the ab initio scattering operator. This scheme is incorporated within the recently developed deviational, energy-based formulation of the Boltzmann equation to obtain, for the first time, low-variance Monte Carlo solutions of this model for time- and spatially-dependent problems. The deviational formulation ensures that simulations are computationally feasible for arbitrarily small temperature differences, while the stochastic treatment of the scattering operator is both efficient (in the limit of a large number of states, it outperforms the more traditional direct evaluation methods used in solutions of the homogeneous Boltzmann equation) and free of timestep error. We use the method to study heat transport in graphene ribbons, a geometry used to experimentally measure the thermal conductivity of graphene. Our results show that the effective thermal conductivity of ribbons decreases monotonically as either the length or the width of the ribbon decreases. We also show that at room temperature the error introduced by modeling the effect of transverse diffuse boundaries using a homogeneous scattering approximation is on the order of 10% and can be as high as 30%. A simple parametric model for the effective thermal conductivity, depending only on the Knudsen number, is presented that outperforms the homogeneous scattering-rate approximation in accuracy. Spatially resolved temperature and heat flux profiles are also obtained and analyzed for the first time in graphene ribbons using the linearized ab initio scattering term.
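Why the deviational formulation remains feasible for arbitrarily small temperature differences can be seen in a toy estimator comparison (illustrative only; the T⁴ "energy" and the 1% noise model are assumptions for this sketch, not thesis physics):

    import numpy as np

    # Estimate the net flux q = e(T_hot) - e(T_cold) with e(T) = T**4.
    rng = np.random.default_rng(2)
    T_cold, dT, N = 300.0, 0.001, 100_000
    e = lambda T: T**4

    # Direct MC: sample both absolute populations; the tiny signal hides in the noise.
    hot = e(T_cold + dT) * (1 + 0.01 * rng.standard_normal(N))   # 1% stochastic noise
    cold = e(T_cold) * (1 + 0.01 * rng.standard_normal(N))
    direct = hot.mean() - cold.mean()

    # Deviational MC: simulate only the deviation from the T_cold equilibrium, so the
    # statistical noise scales with the signal itself, not with the absolute energy.
    dev = (e(T_cold + dT) - e(T_cold)) * (1 + 0.01 * rng.standard_normal(N))
    print(direct, dev.mean(), e(T_cold + dT) - e(T_cold))  # deviational tracks the truth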
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-151).
</description>
<pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/92161</guid>
<dc:date>2014-01-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
