<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>MIT Open Access Articles</title>
<link>https://hdl.handle.net/1721.1/49433</link>
<description/>
<pubDate>Mon, 13 Apr 2026 17:16:30 GMT</pubDate>
<dc:date>2026-04-13T17:16:30Z</dc:date>
<item>
<title>Drive-by Environmental Sensing Strategy to Reach Optimal and Continuous Spatio-Temporal Coverage Using Local Transit Network</title>
<link>https://hdl.handle.net/1721.1/165407</link>
<description>Drive-by Environmental Sensing Strategy to Reach Optimal and Continuous Spatio-Temporal Coverage Using Local Transit Network
Ariss, Mayar; Wang, An; Sabouri, Sadegh; Duarte, Fabio; Ratti, Carlo
Monitoring environmental features, such as air pollution, carbon dioxide emissions, noise, and heat, gives cities key data-driven insights to inform sustainable policies and city design. However, given the high variability of the environmental data, achieving good spatio-temporal resolution and coverage remains a major challenge. Even in well-monitored cities, such as Amsterdam, environmental sensors are usually placed in very few fixed locations, implying limited spatial coverage and an inability to adapt to changes in the urban environment. As cities evolve, they experience shifts in pollution sources, and fixed sensors might not adequately capture these changes without a costly and time-consuming reconfiguration process. To monitor the environmental qualities of Amsterdam’s roads, we present a “drive-by” sensing solution for a structured network of vehicles, meaning that sensors are designed to be deployed on buses and tramways, the trajectories and schedules of which are known. We propose a deployment strategy that combines the available fleets to reach optimal spatio-temporal coverage for different environmental features. For example, by optimizing the deployment of sensors on public transit vehicles, our proposal significantly enhances the monitoring of pollution-sensitive areas in Amsterdam. Depending on the desired spatio-temporal granularity, and noting that each vehicle hosts only one sensor, the required number of sensors to be deployed on the structured network varies between 43 and 142, with the latter achieving the finest possible resolution.
</description>
<pubDate>Thu, 23 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165407</guid>
<dc:date>2024-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Seismic assessment of unreinforced masonry façades from images using macroelement-based modeling</title>
<link>https://hdl.handle.net/1721.1/165406</link>
<description>Seismic assessment of unreinforced masonry façades from images using macroelement-based modeling
Ariss, Mayar; Pantoja-Rosero, Bryan German; Duarte, Fabio; Klimenka, Mikita; Ratti, Carlo
Despite the variability of urban infrastructure, unreinforced masonry buildings remain globally prevalent. Constructed from brick, hollow concrete blocks, stone, or other masonry materials, these structures account for a significant proportion of fatalities during seismic events—particularly in regions with limited access to early warning systems. Due to the complex behavior of masonry, accurately assessing structural vulnerabilities is highly dependent on the chosen modeling strategy. Yet, scalable, cost-effective approaches based on simple RGB imagery can still offer valuable insights. In this context, building on a previously developed digitalization methodology, this study proposes an automated, image-based framework for the rapid, non-invasive seismic evaluation of façades, addressing important research gaps in disaster resilience. The framework links image data with structural simulation by extracting visual and geometric features and translating them into consistent macroelement models using computer vision techniques, enabling nonlinear analyses under in-plane cyclic loading. The adopted numerical strategy has been extensively validated in prior work, with predictions closely aligning with experimental results. While the outcomes are predictive rather than diagnostic, future integration with publicly accessible urban imagery may enable the development of real-time, cross-border seismic risk maps.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165406</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Mars 2020 Perseverance SuperCam Perspective on the Igneous Nature of the Máaz Formation at Jezero Crater and Link With Séítah, Mars</title>
<link>https://hdl.handle.net/1721.1/165405</link>
<description>A Mars 2020 Perseverance SuperCam Perspective on the Igneous Nature of the Máaz Formation at Jezero Crater and Link With Séítah, Mars
The Máaz formation consists of the first lithologies in Jezero crater analyzed by the Mars 2020 Perseverance rover. This formation, investigated from Sols (Martian days) 1 to 201 and from Sols 343 to 382, overlies the Séítah formation (previously described as an olivine-rich cumulate) and was initially suggested to represent an igneous crater floor unit based on orbital analyses. Using SuperCam data, we conducted detailed textural, chemical, and mineralogical analyses of the Máaz formation and the Content member of the Séítah formation. We conclude that the Máaz formation and the Content member are igneous and consist of different lava flows and/or possibly pyroclastic flows with complex textures, including vesicular and non-vesicular rocks with different grain sizes. The Máaz formation rocks exhibit some of the lowest Mg# (= molar 100 × MgO/(MgO + FeO)) of all Martian igneous rocks analyzed so far (including meteorites and surface rocks) and show similar basaltic to basaltic-andesitic compositions. Their mineralogy is dominated by Fe-rich augite to possibly ferrosilite and plagioclase, with minor phases such as Fe-Ti oxides and Si-rich phases. They show a broad diversity of both compositions and textures when compared to Martian meteorites and other surface rocks. The different Máaz and Content lava or pyroclastic flows all originate from the same parental magma and/or the same magmatic system, but are not petrogenetically linked to the Séítah formation. The study of returned Máaz samples in Earth-based laboratories will help constrain the formation of these rocks, calibrate Martian crater counting, and, overall, improve our understanding of magmatism on Mars.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165405</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Compositions and Interior Structures of the Large Moons of Uranus and Implications for Future Spacecraft Observations</title>
<link>https://hdl.handle.net/1721.1/165404</link>
<description>Compositions and Interior Structures of the Large Moons of Uranus and Implications for Future Spacecraft Observations
Castillo‐Rogez, Julie; Weiss, Benjamin; Beddingfield, Chloe; Biersteker, John; Cartwright, Richard; Goode, Allison; Melwani Daswani, Mohit; Neveu, Marc
The five large moons of Uranus are important targets for future spacecraft missions. To motivate and inform the exploration of these moons, we model their internal evolution, present-day physical structures, and geochemical and geophysical signatures that may be measured by spacecraft. We predict that if the moons preserved liquid until present, it is likely in the form of residual oceans less than 30 km thick in Ariel and Umbriel, and less than 50 km thick in Titania and Oberon. The preservation of liquid strongly depends on material properties and, potentially, on dynamical circumstances that are presently unknown. Miranda is unlikely to host liquid at present unless it experienced tidal heating a few tens of millions of years ago. We find that since the thin residual layers may be hypersaline, their induced magnetic fields could be detectable by future spacecraft-based magnetometers. However, if the ocean is maintained primarily by ammonia, and thus well below the water freezing point, then its electrical conductivity may be too small to be detectable by spacecraft. Lastly, our calculated tidal Love number (k2) and dissipation factor (Q) are consistent with the Q/k2 values previously inferred from dynamical evolution models. In particular, we find that the low Q/k2 estimated for Titania supports the hypothesis that Titania currently holds an ocean.
</description>
<pubDate>Thu, 22 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165404</guid>
<dc:date>2022-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Intermodel Spread in Walker Circulation Responses Linked to Spread in Moist Stability and Radiation Responses</title>
<link>https://hdl.handle.net/1721.1/165403</link>
<description>Intermodel Spread in Walker Circulation Responses Linked to Spread in Moist Stability and Radiation Responses
Duffy, Margaret L; O’Gorman, Paul A
The response of the Pacific Walker circulation (WC) to long‐term warming remains uncertain. Here, we diagnose contributions to the WC response in comprehensive and idealized general circulation model (GCM) simulations. We find that the spread in WC response is substantial across both the Coupled Model Intercomparison Project phase 6 (CMIP6) and the Atmospheric Model Intercomparison Project (AMIP) models, implicating differences in atmospheric models in the spread in projected WC strength. Using a moist static energy (MSE) budget, we evaluate the contributions to changes in the WC strength related to changes in gross moist stability (GMS), horizontal MSE advection, radiation, and surface fluxes. We find that the multimodel mean WC weakening is mostly related to changes in GMS and radiation. Furthermore, the spread in WC response is related to the spread in GMS and radiation responses. The GMS response is potentially sensitive to parameterized convective entrainment, which can affect lapse rates and the depth of convection. We thus investigate the role of entrainment in setting the GMS response by varying the entrainment rate in an idealized GCM. The idealized GCM is run with a simplified Betts‐Miller convection scheme, modified to represent entrainment. The weakening of the WC with warming in the idealized GCM is damped when higher entrainment rates are used. However, the spread in GMS responses due to differing entrainment rates is much smaller than the spread in GMS responses across CMIP6 models. Therefore, further work is needed to understand the large spread in GMS responses across CMIP6 and AMIP models.
</description>
<pubDate>Tue, 17 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165403</guid>
<dc:date>2023-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial Variability of Movement, Structure, and Formation of Warm Core Rings in the Northwest Atlantic Slope Sea</title>
<link>https://hdl.handle.net/1721.1/165402</link>
<description>Spatial Variability of Movement, Structure, and Formation of Warm Core Rings in the Northwest Atlantic Slope Sea
Silver, Adrienne; Gangopadhyay, Avijit; Gawarkiewicz, Glen; Andres, Magdalena; Flierl, Glenn; Clark, Jenifer
Gulf Stream Warm Core Rings (WCRs) have important influences on the New England Shelf and marine ecosystems. A 10‐year (2011–2020) WCR dataset that tracks weekly WCR locations and surface areas is used here to identify the rings' path and characterize their movement between 55 and 75°W. The WCR dataset reveals a very narrow band between 66 and 71°W along which rings travel almost due west along ∼39°N across isobaths – the “Ring Corridor.” Then, west of the corridor, the mean path turns southwestward, paralleling the shelfbreak. The average ring translation speed along the mean path is 5.9 cm s⁻¹. Long‐lived rings (lifespan &gt;150 days) tend to occupy the region west of the New England Seamount Chain (NESC), whereas short‐lived rings (lifespan &lt;150 days) tend to be more broadly distributed. WCR vertical structures, analyzed using available Argo float profiles, indicate that rings formed to the west of the NESC have shallower thermoclines than those formed to the east. This tendency may be due to different WCR formation processes that are observed to occur along different sections of the Gulf Stream. WCRs formed to the east of the NESC tend to form from a pinch‐off mechanism incorporating cores of Sargasso Sea water and a perimeter of Gulf Stream water. WCRs that form to the west of the NESC form from a process called an aneurysm. WCRs formed through aneurysms comprise water mostly from the northern half of the Gulf Stream and are smaller than the classic pinch‐off rings.
</description>
<pubDate>Tue, 16 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165402</guid>
<dc:date>2022-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Paleomagnetic Constraint on the Age of the Shyok Suture Zone</title>
<link>https://hdl.handle.net/1721.1/165401</link>
<description>Paleomagnetic Constraint on the Age of the Shyok Suture Zone
Martin, Craig R; Jagoutz, Oliver; Upadhyay, Rajeev; van Tongeren, Jill A; Mueller, Paul A; Weiss, Benjamin P
The India‐Eurasia collision is a key case study for understanding the influence of plate tectonic processes on Earth's crust, atmosphere, hydrosphere, and biosphere. However, the timing of the final India‐Eurasia continental collision is debated due to significant uncertainty in the age of the collision between the Kohistan‐Ladakh arc (KLA) and Eurasia along the Shyok suture zone. Here we present paleomagnetic results that constrain the Karakoram terrane in northwest India to a paleolatitude of 19.9 ± 8.9°N between 93 and 75 million years ago (Ma). Our results show that the Karakoram terrane was situated on the southern margin of Eurasia in the Late Cretaceous. They also indicate that the KLA and the Eurasian continent had not converged until &lt;61.6 Ma, placing a Paleocene older limit on the age of final closure of the Shyok suture zone. This suggests that the final India‐Eurasia collision in northwestern India most likely occurred after the closure of the oceanic basin between the KLA and Eurasia. The Paleocene collision event affecting India that has been widely interpreted to represent final India‐Eurasia collision instead records the arc‐continent collision between the KLA and the northern edge of India prior to final India‐Eurasia collision.
</description>
<pubDate>Mon, 16 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165401</guid>
<dc:date>2023-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>High‐Pressure Mechanical Properties of Talc: Implications for Fault Strength and Slip Processes</title>
<link>https://hdl.handle.net/1721.1/165400</link>
<description>High‐Pressure Mechanical Properties of Talc: Implications for Fault Strength and Slip Processes
Boneh, Y; Pec, M; Hirth, G
The hydrous mineral talc is stable over a relatively large P‐T field and can form due to fluid migration and metamorphic reactions in mafic and ultramafic rocks and in faults along plate boundary interfaces. Talc is known to be one of the weakest minerals, making it potentially important for the deformation dynamics and seismic characteristics of faults. However, little is known about talc's mechanical properties at high temperatures under confining pressures greater than 0.5 GPa. We present results of deformation experiments on natural talc cylinders exploring talc rheology at 0.5–1.5 GPa and 400–700°C, P‐T conditions simulating those at deep faults and the subducted slab interface. At these pressures, the strength of talc is highly temperature dependent, with thermal weakening associated with an increased tendency for localization. The friction coefficient of talc inferred from Mohr circle analysis ranges from ∼0.13 at 400°C to ∼0.01 at 700°C. Strength comparison with other phyllosilicates highlights talc as the weakest mineral, a factor of ∼3–4 weaker than antigorite and a factor of ∼2 weaker than chlorite. The observed friction coefficients for talc are consistent with those inferred for subducted slabs and the San Andreas fault. We conclude that the presence of talc may explain the low strength of faults and of the subducted slab interface at depths where transient slow slip events occur.
</description>
<pubDate>Wed, 15 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165400</guid>
<dc:date>2023-03-15T00:00:00Z</dc:date>
</item>
<item>
<title>Microseismic Constraints on the Mechanical State of the North Anatolian Fault Zone 13 Years After the 1999 M7.4 Izmit Earthquake</title>
<link>https://hdl.handle.net/1721.1/165399</link>
<description>Microseismic Constraints on the Mechanical State of the North Anatolian Fault Zone 13 Years After the 1999 M7.4 Izmit Earthquake
Beaucé, Eric; van der Hilst, Robert D; Campillo, Michel
The 17 August 1999 Mw7.4 Izmit earthquake ruptured the western section of the North Anatolian Fault Zone and strongly altered the fault zone properties and stress field. Consequences of the co- and post-seismic stress changes were seen in the spatio-temporal evolution of the seismicity and in the surface slip rates. Thirteen years after the Izmit earthquake, in 2012, the seismic network Dense Array for North Anatolia (DANA) was deployed for 1.5 years. We built a new catalog of microseismicity (M &lt; 2) by applying our automated detection and location method to the DANA data set. Our method combines a systematic backprojection of the seismic wavefield and template matching. We analyzed the statistical properties of the catalog by computing the Gutenberg-Richter b-value and by quantifying the amount of temporal clustering in groups of nearby earthquakes. We found that the microseismicity mainly occurs off the main fault and that the most active regions are the Lake Sapanca step-over and near the Akyazi fault. Based on previous studies, we interpreted the b-values and temporal clustering (a) as indicating that the Akyazi seismicity is occurring in high background stresses and is driven by the Izmit earthquake residual stresses, and (b) as suggesting that an intricate combination of seismic and aseismic slip was taking place on heterogeneous faults at the eastern Lake Sapanca, near the brittle-ductile transition. Combined with geodetic evidence for enhanced north-south extension around Lake Sapanca following the Izmit earthquake, the seismicity supports the possibility of slow slip at depth in the step-over.
</description>
<pubDate>Wed, 07 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165399</guid>
<dc:date>2022-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>Higher‐Resolution Tropopause Folding Accounts for More Stratospheric Ozone Intrusions</title>
<link>https://hdl.handle.net/1721.1/165398</link>
<description>Higher‐Resolution Tropopause Folding Accounts for More Stratospheric Ozone Intrusions
Bartusek, Samuel; Wu, Yutian; Ting, Mingfang; Zheng, Cheng; Fiore, Arlene; Sprenger, Michael; Flemming, Johannes
Ozone in the troposphere is a pollutant and greenhouse gas, and it is crucial to better understand its transport from the ozone-rich stratosphere. Tropopause folding, wherein stratospheric air intrudes downward into the troposphere, enables stratosphere-to-troposphere ozone transport (STT). However, systematic analysis of the relationship between folding and tropospheric ozone, using data that can both capture folding's spatial scales and accurately represent tropospheric chemistry, is limited. Here, we compare folding in high-resolution reanalysis ERA5 (0.25° horizontal, &lt;21 hPa vertical) and low-resolution chemical reanalysis CAMSRA (0.75°, &lt;40 hPa), against CAMSRA ozone, over 1 year. Folding becomes dramatically more frequent at high resolution, with vertical resolution overwhelmingly responsible. Deeper, more filamentary folding is almost entirely unrepresented at low resolution. Higher-resolution folding is better correlated with tropospheric ozone (especially along midlatitude storm tracks, where deep folding is most common); STT is therefore likely attributable to tropopause folding to a greater extent than coarsely resolved folding can capture.
</description>
<pubDate>Sun, 23 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165398</guid>
<dc:date>2023-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Phosphine in the Venusian Atmosphere: A Strict Upper Limit From SOFIA GREAT Observations</title>
<link>https://hdl.handle.net/1721.1/165397</link>
<description>Phosphine in the Venusian Atmosphere: A Strict Upper Limit From SOFIA GREAT Observations
Cordiner, MA; Villanueva, GL; Wiesemeyer, H; Milam, SN; de Pater, I; Moullet, A; Aladro, R; Nixon, CA; Thelen, AE; Charnley, SB; Stutzki, J; Kofman, V; Faggi, S; Liuzzi, G; Cosentino, R; McGuire, BA
The presence of phosphine (PH3) in the atmosphere of Venus was reported by Greaves et al. (2021, https://doi.org/10.1038/s41550-020-1174-4), based on observations of the J = 1–0 transition at 267 GHz using ground-based, millimeter-wave spectroscopy. This unexpected discovery presents a challenge for our understanding of Venus's atmosphere, and has led to a reappraisal of the possible sources and sinks of atmospheric phosphorous-bearing gases. Here we present results from a search for PH3 on Venus using the German REceiver for Astronomy at Terahertz Frequencies instrument aboard the Stratospheric Observatory for Infrared Astronomy aircraft, over three flights conducted in November 2021. Multiple PH3 transitions were targeted at frequencies centered on 533 and 1,067 GHz, but no evidence for atmospheric PH3 was detected. Through radiative transfer modeling, we derived a disk-averaged upper limit on the PH3 abundance of 0.8 ppb in the altitude range 75–110 km, which is more stringent than previous ground-based studies.
</description>
<pubDate>Fri, 21 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165397</guid>
<dc:date>2022-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Crystal Shape Control on the Repacking and Jamming of Crystal‐Rich Mushes</title>
<link>https://hdl.handle.net/1721.1/165396</link>
<description>Crystal Shape Control on the Repacking and Jamming of Crystal‐Rich Mushes
Hoyos, Susana; Florez, Darien; Pec, Matej; Huber, Christian
The rheology of crustal mushes is a crucial parameter controlling melt segregation and magma flow. However, the relations between mush dynamics and crystal size and shape distribution remain poorly understood because of the complexity of melt‐crystal and crystal‐crystal interactions. We performed analog experiments to characterize the mechanisms that control pore space reduction associated with repacking. Three suspensions of monodisperse particles with different geometries and aspect ratios (1:1, 2:1, 4:1) in a viscous fluid were tested. Our results show that particle aspect ratios strongly control the melt extraction processes. We identify two competing mechanisms that enable melt extraction at grain scale. The first mechanism leads to continuous deformation and melt extraction and is associated with “diffuse” frictional dissipation between neighboring particles. The second is stochastic, localized, and nearly instantaneous and is associated with the development and destruction of force chains percolating through the granular assembly.
</description>
<pubDate>Thu, 22 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165396</guid>
<dc:date>2022-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Abundances of Uranium and Thorium Elements in Earth Estimated by Geoneutrino Spectroscopy</title>
<link>https://hdl.handle.net/1721.1/165395</link>
<description>Abundances of Uranium and Thorium Elements in Earth Estimated by Geoneutrino Spectroscopy
The decay of the primordial isotopes 238U, 235U, 232Th, and 40K has contributed to the terrestrial heat budget throughout the Earth's history. Hence, the individual abundances of those isotopes are key parameters in reconstructing contemporary Earth models. The geoneutrinos produced by the radioactive decays of uranium and thorium have been observed with the Kamioka Liquid-Scintillator Antineutrino Detector (KamLAND). Those measurements have been improved by more than 18 years of observation time and by reduced detector background levels, owing mainly to an 8-year, nearly reactor-free period, and now permit spectroscopy with geoneutrinos. Our results yield the first constraint on both the uranium and thorium heat contributions. The KamLAND result is consistent with geochemical estimations based on elemental abundances of chondritic meteorites and mantle peridotites. The High-Q model is disfavored at 99.76% C.L., and a fully radiogenic model is excluded at 5.2σ, assuming a homogeneous heat-producing-element distribution in the mantle.
</description>
<pubDate>Thu, 11 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165395</guid>
<dc:date>2022-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Unexpected Repartitioning of Stratospheric Inorganic Chlorine After the 2020 Australian Wildfires</title>
<link>https://hdl.handle.net/1721.1/165394</link>
<description>Unexpected Repartitioning of Stratospheric Inorganic Chlorine After the 2020 Australian Wildfires
Strahan, Susan E; Smale, Dan; Solomon, Susan; Taha, Ghassan; Damon, Megan R; Steenrod, Stephen D; Jones, Nicholas; Liley, Ben; Querel, Richard; Robinson, John
The inorganic chlorine (Cly) and odd nitrogen (NOy) chemical families influence stratospheric O3. In January 2020, Australian wildfires injected record-breaking amounts of smoke into the southern stratosphere. Within 1–2 months, ground-based and satellite observations showed that Cly and NOy were repartitioned. By May, lower stratospheric HCl columns had declined by ∼30% and ClONO2 columns had increased by 40%–50%. The Cly perturbations began and ended near the equinoxes, increased poleward, and peaked at the winter solstice. NO2 decreased from February to April, consistent with sulfate aerosol reactions, but returned to typical values by June, months before the Cly recovery. Transport tracers show that dynamics, not chemistry, explains most of the observed O3 decrease after April, with no significant transport earlier. Simulations assuming wildfire smoke behaves identically to sulfate aerosols could not reproduce the observed Cly changes, suggesting the two have different composition and chemistry. This undermines our ability to predict ozone in a changing climate.
</description>
<pubDate>Mon, 18 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165394</guid>
<dc:date>2022-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>Organosulfur Aerosols Likely Carried Sulfur MIF Signatures in the Early Earth’s Atmosphere</title>
<link>https://hdl.handle.net/1721.1/165393</link>
<description>Organosulfur Aerosols Likely Carried Sulfur MIF Signatures in the Early Earth’s Atmosphere
Oduro, Harry; Ono, Shuhei; Alrasheed, Faisal; Eldridge, Daniel L
Signatures of mass-independent fractionation (MIF) of sulfur in Archean sulfide and sulfate minerals are widely thought to record an anoxic early Earth’s atmosphere. While experiments of ultraviolet irradiation of SO2 produce significant sulfur mass-independent fractionation (S-MIF) in reaction products (elemental sulfur and residual sulfur dioxide), they have not been able to reproduce the isotope patterns, in particular Δ36S/Δ33S ratios, observed in the geologic rock record. Studies that focused on organic sulfur gases and hazes in the Archean did not report organosulfur aerosol photoproducts as major contributors to Archean S-MIF chemistry. Here we show, for the first time, that photochemical reactions of SO2 in the presence of gaseous hydrocarbons (CH4, C2H2, and C2H4) produce haze-like organosulfur aerosols bearing S-MIF with variable Δ36S/Δ33S ratios. The isotope trends for the organosulfur photoproducts produced in our experiments suggest that, in addition to elemental sulfur, organosulfur compounds—in particular methanesulfonic acid—are a key component of S-MIF signals from the atmosphere to the ocean and sediments, with possible links to an Archean atmosphere warmed by a methane greenhouse.
</description>
<pubDate>Mon, 16 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165393</guid>
<dc:date>2023-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>High‐Resolution Magnetic‐Geochemical Mapping of the Serpentinized and Carbonated Atlin Ophiolite, British Columbia: Toward Establishing Magnetometry as a Monitoring Tool for In Situ Mineral Carbonation</title>
<link>https://hdl.handle.net/1721.1/165392</link>
<description>High‐Resolution Magnetic‐Geochemical Mapping of the Serpentinized and Carbonated Atlin Ophiolite, British Columbia: Toward Establishing Magnetometry as a Monitoring Tool for In Situ Mineral Carbonation
Tominaga, Masako; Beinlich, Andreas; Lima, Eduardo A; Pruett, Paiden; Vento, Noah R; Weiss, Benjamin P
We address in situ serpentinization and mineral carbonation processes in oceanic lithosphere using integrated field magnetic measurements, rock magnetic analyses, superconducting quantum interference device (SQUID) microscopy, microtextural observations, and energy dispersive spectroscopy phase mapping. A representative suite of ultramafic rock samples was collected within the Atlin ophiolite along a 100‐m long transect across a continuous outcrop of mantle harzburgite with several alteration fronts: serpentinite, soapstone (magnesite + talc), and listvenite (magnesite + quartz). Strong correlations between changes in magnetic signal strengths and the amount of alteration are shown, with distinctive contrasts between serpentinite, transitional soapstone, and listvenite that are linked to the formation and breakdown of magnetite. While previous observations of the Linnajavri ultramafic complex indicated that the breakdown of magnetite occurred during listvenite formation from the precursor soapstone (Tominaga et al., 2017, https://doi.org/10.1038/s41467-017-01610-4), results from our study suggest that magnetite destabilization already occurred during the replacement of serpentinite by soapstone (i.e., at lower fluid CO2 concentrations). This difference is attributed to fracture‐controlled flow of sulfur‐bearing alteration fluid at Atlin, causing reductive magnetite dissolution in thin soapstone zones separating serpentinite from sulfide‐mineralized listvenite. We argue that magnetite growth or breakdown in soapstone provides insight into the mode of fluid flow and the fluid composition, which control the scale and extent of carbonation. This conclusion enables us to use magnetometry as a viable tool for monitoring the reaction progress from serpentinite to carbonate‐bearing assemblages in space and time, with the caution that the three‐dimensionality of magnetic sources impacts the scalability of measurements.
</description>
<pubDate>Mon, 10 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165392</guid>
<dc:date>2023-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating the Net Magnetic Moment of Geological Samples From Planar Field Maps Using Multipoles</title>
<link>https://hdl.handle.net/1721.1/165391</link>
<description>Estimating the Net Magnetic Moment of Geological Samples From Planar Field Maps Using Multipoles
Lima, Eduardo A; Weiss, Benjamin P; Borlina, Caue S; Baratchart, Laurent; Hardin, Douglas P
Recent advances in magnetic microscopy have enabled studies of geological samples whose weak and spatially nonuniform magnetizations were previously inaccessible to standard magnetometry techniques. A quantity of central importance is the net magnetic moment, which reflects the mean direction and the intensity of the magnetization states of numerous ferromagnetic crystals within a certain volume. The planar arrangement of typical magnetic microscopy measurements, which originates from measuring the field immediately above the polished surface of a sample to maximize sensitivity and spatial resolution, makes estimating net moments considerably more challenging than with spherically distributed data. In particular, spatially extended and nonuniform magnetization distributions often cannot be adequately approximated by a single magnetic dipole. To address this limitation, we developed a multipole fitting technique that can accurately estimate net moment using spherical harmonic multipole expansions computed from planar data. Given that the optimal location for the origin of such expansions is unknown beforehand and generally unconstrained, regularization of this inverse problem is critical for obtaining accurate moment estimates from noisy experimental magnetic data. We characterized the performance of the technique using synthetic sources under different conditions (noiseless data, data corrupted with simulated white noise, and data corrupted with measured instrument noise). We then validated and demonstrated the technique using superconducting quantum interference device microscopy measurements of impact melt spherules from Lonar crater, India, and dusty olivine chondrules from the CO chondrite meteorite Dominion Range 08006.
</description>
<pubDate>Wed, 26 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165391</guid>
<dc:date>2023-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>A &gt;200 ka U-Th Based Chronology From Lacustrine Evaporites, Searles Lake, CA</title>
<link>https://hdl.handle.net/1721.1/165390</link>
<description>A &gt;200 ka U-Th Based Chronology From Lacustrine Evaporites, Searles Lake, CA
Stroup, Justin S; Olson, Kristian J; Lowenstein, Tim K; Jost, Adam B; Mosher, Hayley M; Peaple, Mark D; Feakins, Sarah J; Chen, Christine Y; Lund, Steven P; McGee, David
Well‐dated lacustrine records are essential to establish the timing and drivers of regional hydroclimate change. Searles Basin, California, records the depositional history of a fluctuating saline‐alkaline lake in the terminal basin of the Owens River system draining the eastern Sierra Nevada. Here, we establish a U‐Th chronology for the ∼76‐m‐long SLAPP‐SRLS17 core collected in 2017 based on dating of evaporite minerals. Ninety‐eight dated samples comprising nine different minerals were evaluated based on stratigraphic, mineralogic, textural, chemical, and reproducibility criteria. After the application of these criteria, a total of 37 dated samples remained as constraints for the age model. A lack of dateable minerals between 145 and 110 ka left the age model unconstrained over the penultimate glacial termination (Termination II). We thus established a tie point between plant wax δD values in the core and a nearby speleothem δ¹⁸O record at the beginning of the Last Interglacial. We construct a Bayesian age model allowing stratigraphy to inform sedimentation rate inflections. We find that the &gt;210 ka SLAPP‐SRLS17 record contains five major units that correspond with prior work. The new dating is broadly consistent with previous efforts but provides more precise age estimates and enables a detailed evaluation of evaporite depositional history. We also offer a substantial revision of the age of the Bottom Mud‐Mixed Layer contact, shifting it from ∼130 ka to 178 ± 3 ka. The new U‐Th chronology documents the timing of mud and salt layers and lays the foundation for climate reconstructions.
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165390</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>The spinorial energy for asymptotically Euclidean Ricci flow</title>
<link>https://hdl.handle.net/1721.1/165389</link>
<description>The spinorial energy for asymptotically Euclidean Ricci flow
Baldauf, Julius; Ozuch, Tristan
This article introduces a functional generalizing Perelman’s weighted Hilbert-Einstein action and the Dirichlet energy for spinors. It is well defined on a wide class of noncompact manifolds; on asymptotically Euclidean manifolds, the functional is shown to admit a unique critical point, which is necessarily of min-max type, and the Ricci flow is its gradient flow. The proof is based on variational formulas for weighted spinorial functionals, valid on all spin manifolds with boundary.
</description>
<pubDate>Tue, 18 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165389</guid>
<dc:date>2023-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Emergent symmetries and phases in quantum spin chains coupled to a Kuramoto model</title>
<link>https://hdl.handle.net/1721.1/165388</link>
<description>Emergent symmetries and phases in quantum spin chains coupled to a Kuramoto model
Bastidas, V. M.
Floquet theory is a widely used framework to describe the dynamics of periodically driven quantum systems. The usual scenario to describe such systems is to consider the effect of an external control with a definite period in time that can act either locally or globally on the system of interest. However, apart from periodicity, such drives typically lack classical correlations or additional structure. In this work, we consider drives with intrinsic dynamics that undergo self-organization, leading to periodic steady states with emergent symmetries. To substantiate our results, we consider two examples of one-dimensional quantum spin chains coupled to a classical Kuramoto model. First, we investigate a Kuramoto model with all-to-all coupling driving a one-dimensional quantum Ising chain into a time-periodic steady state with an emergent translational symmetry. Next, we consider a Kuramoto model in a zigzag lattice driving an XX spin chain. The dynamics of traveling waves in the Kuramoto model trimerizes the lattice, effectively inducing topological behavior that can be exploited to perform topological pumping. Our results can be experimentally implemented in digital and analog near-term quantum devices.
</description>
<pubDate>Wed, 08 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165388</guid>
<dc:date>2025-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Relativistic electrodynamics with a universal length scale</title>
<link>https://hdl.handle.net/1721.1/165387</link>
<description>Relativistic electrodynamics with a universal length scale
Pedergnana, Tiemo; Kogelbauer, Florian
We derive the analogues of the Dirac and Pauli equations from a spatially fourth-order Klein-Gordon equation with a universal length scale. Starting from a singularly perturbed variant of Maxwell's equations, we deduce a 32-dimensional variant of the Dirac equation for spin-1/2 particles through an algebraic factorization procedure. We illustrate an experimental test of the theory based on the splitting of the electron beam in a Stern-Gerlach experiment. This hyperfine splitting leads to four distinct eigenvalues of the spin operator, which can be grouped into two pairs centered around the classic values of ±ℏ/2. The modified electrodynamic framework features an oriented, micropolar spacetime.
</description>
<pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165387</guid>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>The Imminent Data Desert: The Future of Stratospheric Monitoring in a Rapidly Changing World</title>
<link>https://hdl.handle.net/1721.1/165386</link>
<description>The Imminent Data Desert: The Future of Stratospheric Monitoring in a Rapidly Changing World
Salawitch, Ross J; Smith, Jessica B; Selkirk, Henry; Wargan, Krzysztof; Chipperfield, Martyn P; Hossaini, Ryan; Levelt, Pieternel F; Livesey, Nathaniel J; McBride, Laura A; Millán, Luis F; Moyer, Elisabeth; Santee, Michelle L; Schoeberl, Mark R; Solomon, Susan; Stone, Kane; Worden, Helen M
The Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE-FTS) on SCISAT-1 and Microwave Limb Sounder (MLS) on NASA’s Aura satellite have contributed significantly to understanding the impacts of human activities on the stratospheric ozone layer. The two-decade-long data record from these instruments has allowed quantification of ozone depletion caused by human-released ozone-depleting substances, the effects of extreme natural events like major volcanic eruptions including Hunga in 2022, and events amplified by human-caused climate change such as wildfires that inject material into the stratosphere, as happened over Australia in early 2020. The Aura platform is nearing the end of its operational lifetime, and SCISAT-1 is over 20 years old. Their decommissioning will cause a substantial gap in the measurement of critical atmospheric components, including water vapor, inorganic chlorine species, and tracers of stratospheric transport. This upcoming “data desert” poses significant challenges for monitoring the recovery of the ozone layer and assessing the effects on stratospheric composition of future extreme events, threats posed by increases in space debris from satellite burn-up, and the possible injection of stratospheric aerosol to mitigate global warming. The lack of confirmed future missions that can provide daily near-global profile measurements of stratospheric composition highlights the need for observational strategies to bridge this impending gap. This paper discusses the essential role of ACE-FTS and MLS in advancing our understanding of the stratosphere, the impact of data loss after the cessation of one or both instruments, and the urgency of developing strategies for mitigating the impact of these observational losses at a time marked by dramatic changes in the stratosphere due to human and natural factors.

Significance Statement
We highlight the critical role that data from the ACE-FTS and Microwave Limb Sounder (MLS) satellite instruments have played in advancing our understanding of stratospheric composition and the impacts of human activities on the ozone layer. As these instruments near the end of their operational lifetimes, the imminent loss of data, particularly of stratospheric water vapor, chlorine species, and tracers of transport, portends profound and irrevocable gaps in atmospheric observations. This loss of observational capability will occur at a time of rapid climate change and hinder our understanding of the stratosphere’s response to, and its coupled role in, continued climate forcing. This paper emphasizes the urgency of addressing this data desert, highlighting the need for sustained, coordinated, global measurement capabilities for these crucial constituents.
</description>
<pubDate>Sat, 01 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165386</guid>
<dc:date>2025-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying Climate Patterns Using Clustering Autoencoder Techniques</title>
<link>https://hdl.handle.net/1721.1/165385</link>
<description>Identifying Climate Patterns Using Clustering Autoencoder Techniques
Kurihana, Takuya; Mastilovic, Ilijana; Wang, Lijing; Meray, Aurelien; Praveen, Satyarth; Xu, Zexuan; Memarzadeh, Milad; Lavin, Alexander; Wainwright, Haruko
The growing spatiotemporal resolution of climate simulations produces a variety of climate patterns under different projection scenarios. This paper proposes a new data-driven climate classification workflow via an unsupervised deep learning technique that can dimensionally reduce the vast volume of spatiotemporal numerical climate projection data into a compact representation. We aim to identify distinct zones that capture multiple climate variables as well as their future changes under different climate change scenarios. Our approach leverages convolutional autoencoders combined with k-means clustering (standard autoencoder) and online clustering based on the Sinkhorn–Knopp algorithm (clustering autoencoder) across the conterminous United States (CONUS) to capture unique climate patterns in a data-driven fashion from the Geophysical Fluid Dynamics Laboratory Earth System Model with GOLD component (GFDL-ESM2G). The developed approach compresses 70 years of GFDL-ESM2G simulation at 0.125° spatial resolution across the CONUS under multiple warming scenarios to a lower-dimensional space by a factor of 660 000 and is then tested on 150 years of GFDL-ESM2G simulation data. The results show that five climate clusters capture physically reasonable and spatially stable climatological patterns matched to known climate classes defined by human experts. Results also show that using a clustering autoencoder can reduce the computational time for clustering by up to 9.2 times when compared to using a standard autoencoder. Our five unique climate patterns resulting from the deep learning–based clustering of the lower-dimensional space thereby enable us to provide insights on hydrometeorology and its spatial heterogeneity across the conterminous United States immediately without downloading large climate datasets.

Significance Statement
This paper presents a data-driven climate classification approach using unsupervised deep learning to dimensionally reduce climate model outputs and to identify distinct climate regions and their future changes. Our approach compresses climate information for 70 years of Geophysical Fluid Dynamics Laboratory Earth System Model data across the conterminous United States (CONUS) at 0.125° spatial resolution. The results reveal that five climate clusters capture reasonable and stable climatological patterns matched to known climate patterns. The embedded clustering process in deep learning provides 9.2 times faster execution than the k-means clustering technique. These results give insight into the spatial patterns of climate and the heterogeneity of hydrological patterns across the conterminous United States without downloading large climate datasets.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165385</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approach for Spatiotemporal Multivariate Optimization of Environmental Monitoring Sensor Locations</title>
<link>https://hdl.handle.net/1721.1/165384</link>
<description>Machine Learning Approach for Spatiotemporal Multivariate Optimization of Environmental Monitoring Sensor Locations
Siddiquee, Masudur R; Meray, Aurelien O; Xu, Zexuan; Gonzalez-Raymat, Hansell; Danielson, Thomas; Upadhyay, Himanshu; Lagos, Leonel E; Eddy-Dilek, Carol; Wainwright, Haruko M
Long-term environmental monitoring is critical for managing the soil and groundwater at contaminated sites. Recent improvements in state-of-the-art sensor technology, communication networks, and artificial intelligence have created opportunities to modernize this monitoring activity for automated, fast, robust, and predictive monitoring. Such modernization requires that sensor locations be optimized to capture the spatiotemporal dynamics of all monitoring variables while remaining cost-effective. Legacy monitoring datasets from the target area are important for performing this optimization. In this study, we have developed a machine-learning approach to optimize sensor locations for soil and groundwater monitoring based on ensemble supervised learning and majority voting. For spatial optimization, Gaussian process regression (GPR) is used for spatial interpolation, while majority voting is applied to accommodate the multivariate temporal dimension. Results show that the algorithms significantly outperform the random selection of sensor locations for predictive spatiotemporal interpolation. While the method has been applied to a four-dimensional dataset (two-dimensional space, time, and multiple contaminants), we anticipate that it can be generalized to higher-dimensional datasets for environmental monitoring sensor location optimization.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165384</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Subseasonal Prediction of Central European Summer Heatwaves with Linear and Random Forest Machine Learning Models</title>
<link>https://hdl.handle.net/1721.1/165383</link>
<description>Subseasonal Prediction of Central European Summer Heatwaves with Linear and Random Forest Machine Learning Models
Weirich-Benet, Elizabeth; Pyrina, Maria; Jiménez-Esteve, Bernat; Fraenkel, Ernest; Cohen, Judah; Domeisen, Daniela I. V.
Heatwaves are extreme near-surface temperature events that can have substantial impacts on ecosystems and society. Early warning systems help to reduce these impacts by helping communities prepare for hazardous climate-related events. However, state-of-the-art prediction systems often cannot make accurate forecasts of heatwaves more than two weeks in advance, which are required for advance warnings. We therefore investigate the potential of statistical and machine learning methods to understand and predict central European summer heatwaves on time scales of several weeks. As a first step, we identify the most important regional atmospheric and surface predictors based on previous studies and supported by a correlation analysis: 2-m air temperature, 500-hPa geopotential, precipitation, and soil moisture in central Europe, as well as Mediterranean and North Atlantic sea surface temperatures, and the North Atlantic jet stream. Based on these predictors, we apply machine learning methods to forecast two targets: summer temperature anomalies and the probability of heatwaves for 1–6 weeks lead time at weekly resolution. For each of these two target variables, we use both a linear and a random forest model. The performance of these statistical models decays with lead time, as expected, but outperforms persistence and climatology at all lead times. For lead times longer than two weeks, our machine learning models compete with the ensemble mean of the European Centre for Medium-Range Weather Forecast’s hindcast system. We thus show that machine learning can help improve subseasonal forecasts of summer temperature anomalies and heatwaves.

Significance Statement
Heatwaves (prolonged extremely warm temperatures) cause thousands of fatalities worldwide each year. These damaging events are becoming even more severe with climate change. This study aims to improve advance predictions of summer heatwaves in central Europe by using statistical and machine learning methods. Machine learning models are shown to compete with conventional physics-based models for forecasting heatwaves more than two weeks in advance. These early warnings can be used to activate effective and timely response plans targeting vulnerable communities and regions, thereby reducing the damage caused by heatwaves.
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165383</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Waveform-based estimation of Q and scattering properties for zero-offset vertical seismic profile data</title>
<link>https://hdl.handle.net/1721.1/165382</link>
<description>Waveform-based estimation of Q and scattering properties for zero-offset vertical seismic profile data
Nakata, Rie; Lumley, David; Hampson, Gary; Nihei, Kurt; Nakata, Nori
Estimating Q using downgoing waves in zero-offset vertical seismic profiles (VSPs) can be challenging when scattered waves from near-borehole heterogeneities interfere with direct arrivals. In any Q estimation method that assumes a downgoing plane wave, constructive and destructive wave-mode interference can cause errors in the estimate. For example, in the spectral-ratio method, such interference modulates the amplitude spectra, introducing significant variations and even nonphysical negative Q (amplification) estimates. We have investigated this phenomenon using synthetic and field data sets from offshore Australia and developed a two-step waveform-based method to characterize scattering anomalies and improve Q estimates. Waveform information is key to deal with closely spaced bandlimited seismic events. First, we solve an inverse problem to locate and characterize scatterers by minimizing the traveltime and waveform misfits. Then, using the estimated parameters, we model the scatterers’ contribution to the VSP data and remove it from the observed waveforms. The resulting spectra resemble those that would have been acquired in the absence of the scatterers and are much more suitable for the spectral-ratio method. By assuming a 1D medium and a simple scatterer shape (i.e., circular), we parameterize a scattering heterogeneity using five parameters (depth, distance, size, velocity, and density) and seek a solution using a grid search to handle the nonuniqueness of the VSP inversion. In addition, adaptive subtraction is required to fine-tune the modeled interference to better fit the observation. We successfully use this method to characterize and mitigate the strongest wave interference in the field data. The final Q estimates contain milder variations and much less nonphysical negative Q.
Our results demonstrate that the proposed method, readily extendible to multiple scatterer cases, can locate discrete scatterers, remove the effects of their interference, and thus significantly improve the Q estimates from VSP data.
</description>
<pubDate>Thu, 14 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165382</guid>
<dc:date>2020-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Reservoir geophysics’ contributions to the energy transition and climate change challenge — Introduction</title>
<link>https://hdl.handle.net/1721.1/165381</link>
<description>Reservoir geophysics’ contributions to the energy transition and climate change challenge — Introduction
He, Zhiliang; Mukerji, Tapan; Grana, Dario; Fu, Liyun; Cao, Danping; Cai, Xiaohui; Qu, Luping; Liu, Mingliang; Farquharson, Colin G
Achieving net-zero greenhouse gas emissions by 2050 represents one of the key scientific and societal challenges of our time. The energy industry plays a central role in this transition, which requires a rapid shift from conventional fossil fuel-based systems toward renewable energy resources and the large-scale deployment of subsurface solutions such as geological CO₂ sequestration, underground hydrogen storage, and natural hydrogen exploration. These emerging energy systems critically depend on accurate static subsurface characterization and robust dynamic monitoring of complex reservoir environments. In this context, reservoir geophysics provides essential tools for imaging, quantifying, and monitoring subsurface processes across spatial and temporal scales.
</description>
<pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165381</guid>
<dc:date>2026-03-09T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting numerical validation of Gassmann’s equations: Open-pore boundary condition</title>
<link>https://hdl.handle.net/1721.1/165380</link>
<description>Revisiting numerical validation of Gassmann’s equations: Open-pore boundary condition
Alkhimenkov, Yury
Gassmann’s equations are widely used to predict the effective moduli of fluid-saturated rocks from dry properties and basic petrophysical parameters, yet pore-scale validations that explicitly demonstrate numerical convergence remain limited. Furthermore, recent publications have questioned the logical consistency of the derivation of Gassmann’s equations, arguing instead for the consistency of the Brown and Korringa (1975) formulation with a “mean” compressibility. A pore-scale validation of Gassmann’s relations was performed under an open-pore boundary geometry, where the pore network intersects the external surface and the imposed macroscopic displacement acts simultaneously on solid and fluid at the boundary. Three-dimensional finite-element simulations were used that couple linear elasticity of the solid skeleton with the quasistatic, compressible Navier–Stokes equations that govern fluid flow. Direct relaxation tests on monomineral, isotropic (cubic) models with generic pore topology (narrow throats linked to a wider pore body) were conducted across a sequence of mesh refinements. In the low-frequency limit, the numerically evaluated undrained bulk modulus converged to the Gassmann prediction to within numerical precision. Comparisons with the Brown and Korringa (1975) formulation indicated that the “mean” compressibility converges to the grain compressibility, causing the formulation to collapse to the classical Gassmann result as resolution increases. Together with earlier closed-pore analysis, the results confirmed that for such models, Gassmann’s equations remain valid for open- and closed-pore boundary conditions, provided that uniform pore pressure is achieved in the quasistatic limit.
</description>
<pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165380</guid>
<dc:date>2026-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Full rock anisotropy characterization using laser ultrasonics</title>
<link>https://hdl.handle.net/1721.1/165379</link>
<description>Full rock anisotropy characterization using laser ultrasonics
Mandal, Partha Pratim; Simpson, Jonathan; Sarout, Joel; Kovalyshen, Yevhen; Adam, Ludmila; Wijk, Kasper van
Reliable seismic imaging and the estimation of the distribution of subsurface stress depend on the accurate assessment of elastic anisotropy in shaly rock formations. Anisotropy has typically been evaluated in the laboratory with contacting transducers. However, these measurements frequently report significant uncertainty due to variations in mechanical coupling, limitations on the number of ray paths analyzed, and the relative sizes of transducers. We assessed the effectiveness of the contactless laser ultrasonic pulse transmission technique, which uses a source laser, a probing laser, and a cylindrical rock sample placed on a rotating stage, to reduce uncertainties in the estimation of Thomsen’s anisotropy parameters (TAPs). The smaller imprint of the source and receiver lasers on the core sample eliminates the ambiguity of propagation distance, suggesting that group velocity is effectively estimated and that the observed wave attenuation is solely indicative of the rock. Three cylindrical samples of multilayer manufactured material, namely, phenolic grade CE (Canvas Electrical), which were rather homogeneous and oriented differently (0°, 45°, and 90° with respect to the bedding), were examined for P-wave velocity over approximately 630 separate ray paths. The most precise estimation of TAPs in a vertical transverse isotropic medium without knowledge of the symmetry axis was achieved by applying the laser ultrasonic method to a phenolic grade CE sample extracted parallel to the bedding. Multiple dip angles, dense sampling, and multipath inversion of these datasets reduced the uncertainty of Thomsen’s δ parameter by 20%. Application of the same technique to an anisotropic and heterogeneous shale sample suggested that (i) the mineralogy-controlled density heterogeneity observed in the 3D X-ray computed tomography images could be detected and identified from high-density laser ultrasonic data using reservoir monitoring techniques imported from field-scale geophysics and (ii) TAPs in the homogeneous and anisotropic sub-volume of the shale sample could be reliably estimated once heterogeneity was accounted for.
</description>
<pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165379</guid>
<dc:date>2026-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>MEDiCINe: Motion Correction for Neural Electrophysiology Recordings</title>
<link>https://hdl.handle.net/1721.1/165378</link>
<description>MEDiCINe: Motion Correction for Neural Electrophysiology Recordings
Watters, Nicholas; Buccino, Alessio; Jazayeri, Mehrdad
Electrophysiology recordings from the brain using laminar multielectrode arrays allow researchers to measure the activity of many neurons simultaneously. However, laminar microelectrode arrays move relative to their surrounding neural tissue for a variety of reasons, such as pulsation, changes in intracranial pressure, and decompression of neural tissue after insertion. Inferring and correcting for this motion stabilizes the recording and is critical to identify and track single neurons across time. Such motion correction is a preprocessing step of standard spike-sorting methods. However, estimating motion robustly and accurately in electrophysiology recordings is challenging due to the stochasticity of the neural data. To tackle this problem, we introduce MEDiCINe (Motion Estimation by Distributional Contrastive Inference for Neurophysiology), a novel motion estimation method. We show that MEDiCINe outperforms existing motion estimation methods on an extensive suite of simulated neurophysiology recordings and leads to more accurate spike sorting. We also show that MEDiCINe accurately estimates the motion in primate and rodent electrophysiology recordings with a variety of motion and stability statistics. We open-source MEDiCINe, usage instructions, examples integrating MEDiCINe with common tools for spike sorting, and data and code for reproducing our results. This open software will enable other researchers to use MEDiCINe to improve spike sorting results and get the most out of their electrophysiology datasets.
</description>
<pubDate>Sat, 01 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165378</guid>
<dc:date>2025-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Size Ranges of Magnetic Domain States in Tetrataenite</title>
<link>https://hdl.handle.net/1721.1/165376</link>
<description>Size Ranges of Magnetic Domain States in Tetrataenite
Mansbach, Elias N; Shah, Jay; Williams, Wyn; Maurel, Clara; Bryson, James FJ; Weiss, Benjamin P
Paleomagnetic studies of meteorites provide unique constraints on the evolution of magnetic fields in the early solar system. These studies rely on the identification of magnetic minerals that can retain stable magnetizations over ≳4.5 billion years (Ga). The ferromagnetic mineral tetrataenite (γ''-Fe0.5Ni0.5) is found in iron, stony-iron, and chondrite meteorite groups. Nanoscale intergrowths of tetrataenite have been shown to carry records of paleomagnetic fields, although the effect of magnetostatic interactions on their magnetic remanence acquisition remains to be fully understood. Tetrataenite can also occur as isolated, non-interacting, nanoscale grains in many meteorite groups, although the paleomagnetic potential of these grains is particularly poorly understood. Here, we aim to improve our understanding of tetrataenite magnetization to refine our knowledge of existing paleomagnetic analyses and broaden the spectrum of meteorite groups that can be used for future paleomagnetic studies. We present the results of analytical calculations and micromagnetic modeling of isolated tetrataenite grains with various geometries. We find that tetrataenite forms a stable single domain state at grain lengths between 6 and ∼160 nm dependent on its elongation. It also possesses a magnetization resistant to viscous remagnetization over the lifetime of the solar system at 293 K. At larger grain sizes, tetrataenite's lowest energy state is a lamellar two-domain state, stable on Ga timescales. Unlike many other magnetic minerals, tetrataenite does not form a single-vortex domain state due to its large uniaxial anisotropy. Our results show that single domain and two-domain tetrataenite grains carry an extremely stable magnetization and therefore are promising for paleomagnetic studies.
</description>
<pubDate>Mon, 17 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165376</guid>
<dc:date>2022-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Coincident Biogenic Nitrite and pH Maxima Arise in the Upper Anoxic Layer in the Eastern Tropical North Pacific</title>
<link>https://hdl.handle.net/1721.1/165375</link>
<description>Coincident Biogenic Nitrite and pH Maxima Arise in the Upper Anoxic Layer in the Eastern Tropical North Pacific
Cinay, Timur; Dumit, Diana; Woosley, Ryan J; Boles, Elisabeth L; Kwiecinski, Jarek V; Mullen, Susan; Tamasi, Tyler J; Wolf, Martin J; Kelly, Colette L; Travis, Nicole M; Casciotti, Karen L; Babbin, Andrew R
The Eastern Tropical North Pacific (ETNP), like the other marine oxygen deficient zones (ODZs), is characterized by an anoxic water column, nitrite accumulation at the anoxic core, and fixed nitrogen loss via nitrite reduction to N2O and N2 gases. Here, we constrain the relative contribution of biogeochemical processes to observable features such as the secondary nitrite maximum (SNM) and local pH maximum by simultaneous measurement of inorganic nitrogen and carbon species. High-resolution sampling within the top 1 km of the water column reveals consistent chemical features previously unobserved in the region, including a tertiary nitrite maximum. Dissolved inorganic carbon measurements show that pH increases with depth at the top of the ODZ, peaking at the potential density of the SNM at σθ = 26.15 ± 0.06 (1 s.d.). We developed a novel method to determine the relative contributions of anaerobic ammonium oxidation (anammox), denitrification, nitrite oxidation, dissimilatory nitrate reduction to nitrite, and calcium carbonate dissolution to the nitrite cycling in the anoxic ODZ core. The calculated relative contributions of each reaction are slightly sensitive to the assumed C:N:P ratio and the carbon oxidation state of the organic matter sinking through the ODZ. Furthermore, we identify the source of the pH increase at the top of the ODZ as the net consumption of protons via nitrite reduction to N2 by denitrification. The increase in pH due to denitrification impacts the buffering effect of calcite and aragonite dissolving in the ETNP.
</description>
<pubDate>Wed, 07 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165375</guid>
<dc:date>2022-12-07T00:00:00Z</dc:date>
</item>
<item>
<title>Subdaily Slow Fault Slip Dynamics Captured by Low‐Frequency Earthquakes</title>
<link>https://hdl.handle.net/1721.1/165374</link>
<description>Subdaily Slow Fault Slip Dynamics Captured by Low‐Frequency Earthquakes
Mouchon, Caroline; Frank, William B; Radiguet, Mathilde; Poli, Piero; Cotte, Nathalie
Geodetic positioning is the geophysical record of reference for slow slip events, but typical daily solutions limit studies of the evolution of slow slip to its long‐term dynamics. Accompanying seismic low‐frequency earthquakes located precisely in time and space provide an opportunity to image slow slip dynamics at subdaily time scales. Here we show that a high‐resolution time history of low‐frequency earthquake fault slip alone can reproduce the geodetic record of slow slip that we observe to be dominated by subdaily fault slip dynamics. However, a simple linear model cannot accommodate the complex dynamics present throughout the slow slip cycle, and an analysis of different phases of the slow slip cycle shows that the ratio of geodetic to seismic fault slip varies as a function of time. This suggests that the low‐frequency earthquake source region saturates as slow slip grows in moment and area. We propose that rheological heterogeneities at the plate boundary associated with low‐frequency earthquakes do not play a significant role in the slow slip rupture process, thus implying that their activity is incidental to the driving aseismic slip.
</description>
<pubDate>Wed, 28 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165374</guid>
<dc:date>2023-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Response of Amazon Rivers to Quaternary Climate Cycles</title>
<link>https://hdl.handle.net/1721.1/165373</link>
<description>Fast Response of Amazon Rivers to Quaternary Climate Cycles
Goldberg, Samuel L; Schmidt, Morgan J; Perron, J Taylor
Large alluvial rivers transport water and sediment across continents and shape lowland landscapes. Repeated glacial cycles have dominated Earth's recent climate, but it is unclear whether these rivers are sensitive to such rapid changes. The Amazon River system, the largest and highest‐discharge in the world, features extensive young terraces that demonstrate geologically rapid change temporally correlated with changes in runoff from Quaternary climate cycles. To test the plausibility of a causal relationship, we use a simple model to estimate from empirical measurements how quickly a river profile responds to changes in discharge or sediment supply. Applying this model to data from 30 gauging stations along alluvial rivers throughout the Brazilian Amazon, we find that many rivers of the Amazon basin can respond faster than glacially induced changes in runoff or sediment flux. The Amazon basin is unusually responsive compared to other large river systems due to its high discharge and sediment flux, narrow floodplains, and low slopes. As a result, we predict that the Amazon basin has been highly dynamic during Quaternary glacial cycles, with cyclical aggradation and incision of lowland rivers driving repeated habitat and environmental change throughout the region. This dynamic landscape may have contributed to the exceptional biodiversity of the region and patterns of ancient human settlement.
</description>
<pubDate>Thu, 18 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165373</guid>
<dc:date>2021-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Stratosphere‐Troposphere Exchanges of Air Mass and Ozone Concentration in the Last Glacial Maximum</title>
<link>https://hdl.handle.net/1721.1/165372</link>
<description>Stratosphere‐Troposphere Exchanges of Air Mass and Ozone Concentration in the Last Glacial Maximum
Wang, Mingcheng; Fu, Qiang; Solomon, Susan; Alexander, Becky; White, Rachel H
Stratosphere‐troposphere exchange (STE) of ozone represents a significant source term in the tropospheric ozone budget and can impact surface ozone concentrations, tropospheric oxidation capacity, and methane lifetime. Using the Whole Atmosphere Community Climate Model 6, changes in the air mass and ozone STEs in the Last Glacial Maximum (LGM) as compared with preindustrial (PI) climate are investigated. We use dynamic isentropic surfaces that are determined by fitting to the tropical tropopauses as the upper boundary of the lowermost stratosphere in a mass budget approach, a method particularly suitable for estimating air mass and ozone STEs across different climates. Relative to the PI, the magnitude of ozone STE in the LGM is decreased by 14%–19%, 18%–24%, 18%–23%, 16%–21%, and 15%–21% over the Northern hemisphere extratropics, Southern hemisphere extratropics, the tropics, the extratropics, and the globe, respectively. The extratropical and global decreases are mainly caused by decreased ozone in the extratropical lower stratosphere associated with a weakening of Brewer‐Dobson circulation, while changes in air mass fluxes play a minor role because the effects of weakening Brewer‐Dobson circulation and increased isentropic density partly cancel each other. Analysis of the modeled tropospheric ozone budget indicates that the ozone STE in the LGM is 28% of the tropospheric ozone production rate, as compared to about 9% in the modern climate (year 2000) and 19% in the PI.
</description>
<pubDate>Mon, 09 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165372</guid>
<dc:date>2022-05-09T00:00:00Z</dc:date>
</item>
<item>
<title>Philipp Strack, 2024 Clark Medalist</title>
<link>https://hdl.handle.net/1721.1/165371</link>
<description>Philipp Strack, 2024 Clark Medalist
Fudenberg, Drew
The 2024 John Bates Clark Medal of the American Economic Association was awarded to Philipp Strack, Professor of Economics at Yale University, for his pathbreaking contributions to the study of individual decision making, which have introduced new techniques, improved our understanding of important economic phenomena, and helped spark a new wave of research on the economics of information while building bridges between modern economic theory and a wide range of adjacent disciplines. This article summarizes some of Philipp’s papers, and explains how they build on and improve previous work.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165371</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Anisotropic Exciton Transport in a Lamellar CsPbBr&lt;sub&gt;3&lt;/sub&gt; Nanocrystal Superlattice</title>
<link>https://hdl.handle.net/1721.1/165370</link>
<description>Anisotropic Exciton Transport in a Lamellar CsPbBr&lt;sub&gt;3&lt;/sub&gt; Nanocrystal Superlattice
Sheehan, Thomas J.; Sekh, Taras V.; Bodnarchuk, Maryna I.; Kovalenko, Maksym V.; Tisdale, William A.
Colloidal self-assembly is one strategy for engineering anisotropic properties into otherwise isotropic materials. In this work, we demonstrate anisotropic exciton transport in an A2B-type superlattice containing columns of 5.3 nm CsPbBr3 nanocubes assembled into a hexagonal lattice around 6.5 nm LaF3 nanodisks. Using transient photoluminescence microscopy, we determined that diffusivity along the fast axis of the superlattice is more than twice that along the slow axis at T = 5 K, but that this anisotropy is greatly suppressed at room temperature. Calculations of the diffusivity anisotropy ratio based on Förster theory overestimate the measured values, highlighting the limitations of this theory in completely describing exciton transport. Overall, our results demonstrate how self-assembly of colloidal nanocrystals can be used to engineer directional energy transport, and raise further questions about the microscopic nature of dipole coupling in CsPbBr3 NC superlattices.
</description>
<pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165370</guid>
<dc:date>2026-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Greedy Algorithm Almost Dominates in Smoothed Contextual Bandits</title>
<link>https://hdl.handle.net/1721.1/165369</link>
<description>Greedy Algorithm Almost Dominates in Smoothed Contextual Bandits
Raghavan, Manish; Slivkins, Aleksandrs; Vaughan, Jennifer Wortman; Wu, Zhiwei Steven
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users in order to gain information that will lead to better decisions in the future. While necessary in the worst case, explicit exploration has a number of disadvantages compared to the greedy algorithm that always "exploits" by choosing an action that currently looks optimal. We determine under what conditions inherent diversity in the data makes explicit exploration unnecessary. We build on a recent line of work on the smoothed analysis of the greedy algorithm in the linear contextual bandits model. We improve on prior results to show that the greedy algorithm almost matches the best possible Bayesian regret rate of any other algorithm on the same problem instance whenever the diversity conditions hold. The key technical finding is that data collected by the greedy algorithm suffices to simulate a run of any other algorithm. Further, we prove that under a particular smoothness assumption, the Bayesian regret of the greedy algorithm is at most Õ(T^(1/3)) in the worst case, where T is the time horizon.
</description>
<pubDate>Sun, 30 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165369</guid>
<dc:date>2023-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Spectral Methods from Tensor Networks</title>
<link>https://hdl.handle.net/1721.1/165368</link>
<description>Spectral Methods from Tensor Networks
Moitra, Ankur; Wein, Alexander S
A tensor network is a diagram that specifies a way to “multiply” a collection of tensors together to produce another tensor (or matrix). Many existing algorithms for tensor problems (such as tensor decomposition and tensor PCA), although they are not presented this way, can be viewed as spectral methods on matrices built from simple tensor networks. In this work we leverage the full power of this abstraction to design new algorithms for certain continuous tensor decomposition problems. An important and challenging family of tensor problems comes from orbit recovery, a class of inference problems involving group actions (inspired by applications such as cryo-electron microscopy). Orbit recovery problems over finite groups can often be solved via standard tensor methods. However, for infinite groups, no general algorithms are known. We give a new spectral algorithm based on tensor networks for one such problem: continuous multi-reference alignment over the infinite group SO(2). Our algorithm extends to the more general heterogeneous case.
</description>
<pubDate>Sun, 30 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165368</guid>
<dc:date>2023-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Coupling Techniques for Nonlinear Ensemble Filtering</title>
<link>https://hdl.handle.net/1721.1/165367</link>
<description>Coupling Techniques for Nonlinear Ensemble Filtering
Spantini, Alessio; Baptista, Ricardo; Marzouk, Youssef
We consider filtering in high-dimensional non-Gaussian state-space models with intractable transition kernels, nonlinear and possibly chaotic dynamics, and sparse observations in space and time. We propose a novel filtering methodology that harnesses transportation of measures, convex optimization, and ideas from probabilistic graphical models to yield robust ensemble approximations of the filtering distribution in high dimensions. Our approach can be understood as the natural generalization of the ensemble Kalman filter (EnKF) to nonlinear updates, using stochastic or deterministic couplings. The use of nonlinear updates can reduce the intrinsic bias of the EnKF at a marginal increase in computational cost. We avoid any form of importance sampling and introduce non-Gaussian localization approaches for dimension scalability. Our framework achieves state-of-the-art tracking performance on challenging configurations of the Lorenz-96 model in the chaotic regime.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165367</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lifting for Simplicity: Concise Descriptions of Convex Sets</title>
<link>https://hdl.handle.net/1721.1/165366</link>
<description>Lifting for Simplicity: Concise Descriptions of Convex Sets
Fawzi, Hamza; Gouveia, Joao; Parrilo, Pablo A; Saunderson, James; Thomas, Rekha R
This paper presents a selected tour through the theory and applications of lifts of convex sets. A lift of a convex set is a higher-dimensional convex set that projects onto the original set. Many convex sets have lifts that are dramatically simpler to describe than the original set. Finding such simple lifts has significant algorithmic implications, particularly for optimization problems. We consider both the classical case of polyhedral lifts, described by linear inequalities, as well as that of spectrahedral lifts, defined by linear matrix inequalities, with a focus on recent developments related to spectrahedral lifts. Given a convex set, ideally we would like to either find a (low-complexity) polyhedral or spectrahedral lift or find an obstruction proving that no such lift is possible. To this end, we explain the connection between the existence of lifts of a convex set and certain structured factorizations of its associated slack operator. Based on this characterization, we describe a uniform approach, via sums of squares, to the construction of spectrahedral lifts of convex sets and illustrate the method on several families of examples. Finally, we discuss two flavors of obstruction to the existence of lifts: one related to facial structure, and the other related to algebraic properties of the set in question. Rather than being exhaustive, our aim is to illustrate the richness of the area. We touch on a range of different topics related to the existence of lifts and present many examples of lifts from different areas of mathematics and its applications.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165366</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cortical Face-Selective Responses Emerge Early in Human Infancy</title>
<link>https://hdl.handle.net/1721.1/165365</link>
<description>Cortical Face-Selective Responses Emerge Early in Human Infancy
Kosakowski, Heather L; Cohen, Michael A; Herrera, Lyneé; Nichoson, Isabel; Kanwisher, Nancy; Saxe, Rebecca
In human adults, multiple cortical regions respond robustly to faces, including the occipital face area (OFA) and fusiform face area (FFA), implicated in face perception, and the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC), implicated in higher-level social functions. When in development, does face selectivity arise in each of these regions? Here, we combined two awake infant functional magnetic resonance imaging (fMRI) datasets to create a sample size twice the size of previous reports (n = 65 infants; 2.6–9.6 months). Infants watched movies of faces, bodies, objects, and scenes, while fMRI data were collected. Despite variable amounts of data from each infant, individual subject whole-brain activation maps revealed responses to faces compared to nonface visual categories in the approximate location of OFA, FFA, STS, and MPFC. To determine the strength and nature of face selectivity in these regions, we used cross-validated functional region of interest analyses. Across this larger sample size, face responses in OFA, FFA, STS, and MPFC were significantly greater than responses to bodies, objects, and scenes. Even the youngest infants (2–5 months) showed significantly face-selective responses in FFA, STS, and MPFC, but not OFA. These results demonstrate that face selectivity is present in multiple cortical regions within months of birth, providing powerful constraints on theories of cortical development.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165365</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Synchronous Measurements of Extracellular Action Potentials and Neurochemical Activity with Carbon Fiber Electrodes in Nonhuman Primates</title>
<link>https://hdl.handle.net/1721.1/165364</link>
<description>Synchronous Measurements of Extracellular Action Potentials and Neurochemical Activity with Carbon Fiber Electrodes in Nonhuman Primates
Amjad, Usamma; Choi, Jiwon; Gibson, Daniel J; Murray, Raymond; Graybiel, Ann M; Schwerdt, Helen N
Measuring the dynamic relationship between neuromodulators, such as dopamine, and neuronal action potentials is imperative to understand how these fundamental modes of neural signaling interact to mediate behavior. We developed methods to measure concurrently dopamine and extracellular action potentials (i.e., spikes) in monkeys. Standard fast-scan cyclic voltammetric (FSCV) electrochemical (EChem) and electrophysiological (EPhys) recording systems are combined and used to collect spike and dopamine signals, respectively, from an array of carbon fiber (CF) sensors implanted in the monkey striatum. FSCV requires the application of small voltages at the implanted sensors to measure redox currents generated from target molecules, such as dopamine. These applied voltages create artifacts at neighboring EPhys measurement sensors which may lead to misclassification of these signals as physiological spikes. Therefore, simple automated temporal interpolation algorithms were designed to remove these artifacts and enable accurate spike extraction. We validated these methods using simulated artifacts and demonstrated an average spike recovery rate of 84.5%. We identified and discriminated cell type-specific units in the monkey striatum that were shown to correlate to specific behavioral task parameters related to reward size and eye movement direction. Synchronously recorded spike and dopamine signals displayed contrasting relations to the task variables, suggesting a complex relationship between these two modes of neural signaling. Future application of our methods will help advance our understanding of the interactions between neuromodulator signaling and neuronal activity, to elucidate more detailed mechanisms of neural circuitry and plasticity mediating behaviors in health and in disease.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165364</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Functional genetics reveals modulators of antimicrotubule drug sensitivity</title>
<link>https://hdl.handle.net/1721.1/165363</link>
<description>Functional genetics reveals modulators of antimicrotubule drug sensitivity
Su, Kuan-Chung; Radul, Elena; Maier, Nolan K; Tsang, Mary-Jane; Goul, Claire; Moodie, Brittania; Marescal, Océane; Keys, Heather R; Cheeseman, Iain M
Microtubules play essential roles in diverse cellular processes and are important pharmacological targets for treating human disease. Here, we sought to identify cellular factors that modulate the sensitivity of cells to antimicrotubule drugs. We conducted a genome-wide CRISPR/Cas9-based functional genetics screen in human cells treated with the microtubule-destabilizing drug nocodazole or the microtubule-stabilizing drug paclitaxel. We further conducted a focused secondary screen to test drug sensitivity for ∼1,400 gene targets across two distinct human cell lines and to additionally test sensitivity to the KIF11 inhibitor, STLC. These screens defined gene targets whose loss enhances or suppresses sensitivity to antimicrotubule drugs. In addition to gene targets whose loss sensitized cells to multiple compounds, we observed cases of differential sensitivity to specific compounds and differing requirements between cell lines. Our downstream molecular analysis further revealed additional roles for established microtubule-associated proteins and identified new players in microtubule function.
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165363</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical transformation of the multibudding yeast, Aureobasidium pullulans</title>
<link>https://hdl.handle.net/1721.1/165362</link>
<description>Chemical transformation of the multibudding yeast, Aureobasidium pullulans
Wirshing, Alison CE; Petrucco, Claudia A; Lew, Daniel J
Aureobasidium pullulans is a ubiquitous polymorphic black yeast with industrial and agricultural applications. It has recently gained attention amongst cell biologists for its unconventional mode of proliferation in which multinucleate yeast cells make multiple buds within a single cell cycle. Here, we combine a chemical transformation method with genome-targeted homologous recombination to yield ∼60 transformants/μg of DNA in just 3 days. This protocol is simple, inexpensive, and requires no specialized equipment. We also describe vectors with codon-optimized green and red fluorescent proteins for A. pullulans and use these tools to explore novel cell biology. Quantitative imaging of a strain expressing cytosolic and nuclear markers showed that although the nuclear number varies considerably among cells of similar volume, total nuclear volume scales with cell volume over an impressive 70-fold size range. The protocols and tools described here expand the toolkit for A. pullulans biologists and will help researchers address the many other puzzles posed by this polyextremotolerant and morphologically plastic organism.
</description>
<pubDate>Thu, 27 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165362</guid>
<dc:date>2024-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Tunable DNMT1 degradation reveals DNMT1/DNMT3B synergy in DNA methylation and genome organization</title>
<link>https://hdl.handle.net/1721.1/165361</link>
<description>Tunable DNMT1 degradation reveals DNMT1/DNMT3B synergy in DNA methylation and genome organization
Scelfo, Andrea; Barra, Viviana; Abdennur, Nezar; Spracklin, George; Busato, Florence; Salinas-Luypaert, Catalina; Bonaiti, Elena; Velasco, Guillaume; Bonhomme, Frédéric; Chipont, Anna; Tijhuis, Andréa E; Spierings, Diana CJ; Guérin, Coralie; Arimondo, Paola; Francastel, Claire; Foijer, Floris; Tost, Jӧrg; Mirny, Leonid; Fachinetti, Daniele
DNA methylation (DNAme) is a key epigenetic mark that regulates critical biological processes maintaining overall genome stability. Given its pleiotropic function, studies of DNAme dynamics are crucial, but currently available tools to interfere with DNAme have limitations and major cytotoxic side effects. Here, we present cell models that allow inducible and reversible DNAme modulation through DNMT1 depletion. By dynamically assessing whole genome and locus-specific effects of induced passive demethylation through cell divisions, we reveal a cooperative activity between DNMT1 and DNMT3B, but not of DNMT3A, to maintain and control DNAme. We show that gradual loss of DNAme is accompanied by progressive and reversible changes in heterochromatin, compartmentalization, and peripheral localization. DNA methylation loss coincides with a gradual reduction of cell fitness due to G1 arrest, with minor levels of mitotic failure. Altogether, this system allows DNMTs and DNA methylation studies with fine temporal resolution, which may help to reveal the etiologic link between DNAme dysfunction and human disease.
</description>
<pubDate>Tue, 20 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165361</guid>
<dc:date>2024-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Can Development Programs Counter Insurgencies? Evidence from a Field Experiment in Afghanistan</title>
<link>https://hdl.handle.net/1721.1/165360</link>
<description>Can Development Programs Counter Insurgencies? Evidence from a Field Experiment in Afghanistan
Beath, Andrew; Christia, Fotini; Enikolopov, Ruben
We exploit a randomized controlled trial conducted between 2007 and 2011 to identify the effect of Afghanistan's largest local governance and development program on the strength of the insurgency. We find that the program reduced violence, improved economic outcomes, and increased government support in interior regions of the country, but increased violence in villages close to the Pakistani border, where foreign insurgents were more numerous. The results suggest that development programs can be effective in suppressing locally driven insurgencies, but may be counterproductive where insurgents are not reliant on the local population for support. (JEL C93, D74, F35, O15, O17, O18)
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165360</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wage Stagnation and the Decline of Standardized Pay Rates, 1974–1991</title>
<link>https://hdl.handle.net/1721.1/165359</link>
<description>Wage Stagnation and the Decline of Standardized Pay Rates, 1974–1991
Massenkoff, Maxim; Wilmers, Nathan
Using new establishment-by-occupation microdata, we show that the use of discretionary wage setting significantly expanded in the 1970s and 1980s. Increasingly, wages for blue-collar workers were not standardized by job title or seniority but instead subject to managerial discretion. When establishments abandoned standardized pay rates, wages fell, particularly for the lowest-paid workers in a job and for those in establishments that previously paid above market rates. This shift away from standardized pay rates, in context of a broader decline in worker bargaining power, accelerated the decline in real wages experienced by blue-collar workers in the 1980s. (JEL J31, J33, J52, M52, O33)
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165359</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale affinity maturation simulations to elicit broadly neutralizing antibodies against HIV</title>
<link>https://hdl.handle.net/1721.1/165358</link>
<description>Multiscale affinity maturation simulations to elicit broadly neutralizing antibodies against HIV
Conti, Simone; Ovchinnikov, Victor; Faris, Jonathan G; Chakraborty, Arup K; Karplus, Martin; Sprenger, Kayla G
The design of vaccines against highly mutable pathogens, such as HIV and influenza, requires a detailed understanding of how the adaptive immune system responds to encountering multiple variant antigens (Ags). Here, we describe a multiscale model of B cell receptor (BCR) affinity maturation that employs actual BCR nucleotide sequences and treats BCR/Ag interactions in atomistic detail. We apply the model to simulate the maturation of a broadly neutralizing Ab (bnAb) against HIV. Starting from a germline precursor sequence of the VRC01 anti-HIV Ab, we simulate BCR evolution in response to different vaccination protocols and different Ags, which were previously designed by us. The simulation results provide qualitative guidelines for future vaccine design and reveal unique insights into bnAb evolution against the CD4 binding site of HIV. Our model makes possible direct comparisons of simulated BCR populations with results of deep sequencing data, which will be explored in future applications.
</description>
<pubDate>Fri, 22 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165358</guid>
<dc:date>2022-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>How persistent infection overcomes peripheral tolerance mechanisms to cause T cell–mediated autoimmune disease</title>
<link>https://hdl.handle.net/1721.1/165357</link>
<description>How persistent infection overcomes peripheral tolerance mechanisms to cause T cell–mediated autoimmune disease
Yin, Rose; Melton, Samuel; Huseby, Eric S; Kardar, Mehran; Chakraborty, Arup K
T cells help orchestrate immune responses to pathogens, and their aberrant regulation can trigger autoimmunity. Recent studies highlight that a threshold number of T cells (a quorum) must be activated in a tissue to mount a functional immune response. These collective effects allow the T cell repertoire to respond to pathogens while suppressing autoimmunity due to circulating autoreactive T cells. Our computational studies show that increasing numbers of pathogenic peptides targeted by T cells during persistent or severe viral infections increase the probability of activating T cells that are weakly reactive to self-antigens (molecular mimicry). These T cells are easily re-activated by the self-antigens and contribute to exceeding the quorum threshold required to mount autoimmune responses. Rare peptides that activate many T cells are sampled more readily during severe/persistent infections than in acute infections, which amplifies these effects. Experiments in mice to test predictions from these mechanistic insights are suggested.
</description>
<pubDate>Wed, 06 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165357</guid>
<dc:date>2024-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Enantioselective Copper-Catalyzed Synthesis of Hydroxylamines via Hydrofunctionalization of Alkenes using Nitroalkanes</title>
<link>https://hdl.handle.net/1721.1/165356</link>
<description>Enantioselective Copper-Catalyzed Synthesis of Hydroxylamines via Hydrofunctionalization of Alkenes using Nitroalkanes
Law, James A; Mai, Binh Khanh; Hung, Hsuan-Min; Hendon, Tobianna; Dong, Yuyang; Zhang, Yu; Liu, Peng; Buchwald, Stephen L
Herein, we report that nitroalkanes are competent electrophiles for the enantioselective copper hydride (CuH)-catalyzed alkene hydrofunctionalization of vinyl(hetero)arenes to generate hydroxylamines in good yields and with high levels of enantioselectivity. Control experiments and density functional theory (DFT) calculations suggest that the nitro group constitutes the active electrophile. The direct addition of the enantioenriched alkyl copper intermediate to the nitro group outcompetes competitive reduction or deprotonation of the nitroalkane. DFT calculations indicate that the addition of the stereoenriched alkyl copper intermediate to nitroalkane electrophiles occurs through a six-membered cyclic transition state featuring dearomatization of the vinyl arene. Overall, this process constitutes a one-step route to access enantioenriched &lt;i&gt;N&lt;/i&gt;-alkylhydroxylamine from vinylarenes and nitroalkanes.
</description>
<pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165356</guid>
<dc:date>2026-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>Monte Carlo toolkit for designing and validating step-range-filter spectrometer designs</title>
<link>https://hdl.handle.net/1721.1/165355</link>
<description>Monte Carlo toolkit for designing and validating step-range-filter spectrometer designs
Johnson, TM; Lahmann, B; Russell, L; Vanderloo, NL; Cufari, MJ; Reichelt, BL; Chang, CW; Birkel, A; Kabadi, NV; Sutcliffe, GD; Adrian, PJ; Pearcy, JA; Kunimune, JH; Dannhoff, SG; Evans, TE; Johnson, M Gatu; Séguin, FH; Petrasso, RD; Li, CK; Frenje, JA
Here, we present a Monte Carlo toolkit for validating step-range-filter (SRF) spectrometer designs. Geant4 is used to transport charged particles through the SRF filters to generate synthetic SRF data that include realistic CR-39 effects. Synthetic SRF spectra generated by this method inherently account for instrument response and allow for the quantification of SRF performance before shots. The usefulness of this toolkit is demonstrated through its application to a number of problems. A new broadband SRF for the ∼10 MeV wide ³He³He proton spectrum is validated, and a method for analyzing ³He³He-p SRF data that accounts for instrument response is put forth. In addition, an SRF design for the compact recoil-proton spectrometer (CRS) on the Z-machine is validated. Finally, a new calibration technique for the DD-p SRF is proposed and validated.
</description>
<pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165355</guid>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating plasma and radiation surface science using transient grating spectroscopy</title>
<link>https://hdl.handle.net/1721.1/165354</link>
<description>Accelerating plasma and radiation surface science using transient grating spectroscopy
Wylie, APC; Woller, KB; Rae, M; Lanzrath, AT; Dacus, BR; Ferry, SE; Short, MP
A facility for the investigation of in situ radiation-materials and plasma-materials interactions is demonstrated with tungsten, using transient grating spectroscopy as a probe of thermal diffusivity and surface acoustic wave speed. Helium plasma exposure at 645 °C to 1.18 × 10¹⁸ cm⁻² helium, until the growth of tungsten fuzz, showed an increase in surface acoustic wave speed at the near-surface from 2542 ± 1 m s⁻¹ up to 2565 ± 1 m s⁻¹, followed by a greater drop to 2499 ± 7 m s⁻¹. No observable change in thermal diffusivity was present for plasma exposure alone. A separate 10.26 MeV self-ion-irradiation of tungsten to a dose of 7.92 dpa showed a reduction in both thermal diffusivity, from 61.4 ± 1.4 mm² s⁻¹ to 36.0 ± 0.7 mm² s⁻¹, following trends seen in existing studies, and surface acoustic wave speed, from 2647.8 ± 0.6 m s⁻¹ to 2640.0 ± 0.4 m s⁻¹. Facilities like these are poised to rapidly close critical knowledge gaps regarding the coupled effects of plasma and radiation damage for materials in fusion systems.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165354</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced sampling of robust molecular datasets with uncertainty-based collective variables</title>
<link>https://hdl.handle.net/1721.1/165353</link>
<description>Enhanced sampling of robust molecular datasets with uncertainty-based collective variables
Tan, Aik Rui; Dietschreit, Johannes CB; Gómez-Bombarelli, Rafael
Generating a dataset that is representative of the accessible configuration space of a molecular system is crucial for the robustness of machine-learned interatomic potentials. However, the complexity of molecular systems, characterized by intricate potential energy surfaces, with numerous local minima and energy barriers, presents a significant challenge. Traditional methods of data generation, such as random sampling or exhaustive exploration, are either intractable or may not capture rare, but highly informative configurations. In this study, we propose a method that leverages uncertainty as the collective variable (CV) to guide the acquisition of chemically relevant data points, focusing on regions of configuration space where ML model predictions are most uncertain. This approach employs a Gaussian Mixture Model-based uncertainty metric from a single model as the CV for biased molecular dynamics simulations. The effectiveness of our approach in overcoming energy barriers and exploring unseen energy minima, thereby enhancing the dataset in an active learning framework, is demonstrated on alanine dipeptide and bulk silica.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165353</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal enhancement of defect motion for optimizing periodic poling of x-cut thin-film lithium niobate</title>
<link>https://hdl.handle.net/1721.1/165352</link>
<description>Thermal enhancement of defect motion for optimizing periodic poling of x-cut thin-film lithium niobate
Doshi, Sagar P; West, Gavin N; Gray, Dodd; Ram, Rajeev J
Patterning of stable, spatially tailored ferroelectric domains in thin-film lithium niobate enables efficient nonlinear optical interactions through quasi-phase matching. The engineering of domain structure is limited by the uncontrolled distribution of defects, which disrupt domain wall motion. Here, we fabricate quasi-phase matching gratings in thin-film lithium niobate with sub-20 nm period variation. We demonstrate that annealing processed samples at 350 or 500 °C for 48 h, prior to E-field poling, can dramatically reduce the duty cycle variation. We show that maintaining an elevated temperature of 200 °C during poling enhances defect mobility, which leads to more rectangular inverted domains. Moreover, poling at elevated temperatures also increases inversion depth without sacrificing the periodic domain pattern's accuracy or precision. Elevating the temperature prior to and during poling resulted in near-ideal square wave patterning of ferroelectric domains (50% mean duty cycle, sub-10% domain width variation, and 100% depth inversion). This enables effective quasi-phase matching for second harmonic generation in 5.6 mm-long waveguides fabricated from MgO-doped x-cut thin-film lithium niobate.
</description>
<pubDate>Tue, 24 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165352</guid>
<dc:date>2024-12-24T00:00:00Z</dc:date>
</item>
<item>
<title>Ultra-fast single-crystal CVD diamonds in the particle time-of-flight (PTOF) detector for low yield burn-history measurements on the NIF (invited)</title>
<link>https://hdl.handle.net/1721.1/165351</link>
<description>Ultra-fast single-crystal CVD diamonds in the particle time-of-flight (PTOF) detector for low yield burn-history measurements on the NIF (invited)
The Particle Time of Flight (PTOF) diagnostic is a chemical vapor deposition diamond-based detector and is the only diagnostic for measuring nuclear bang times of low yield (&lt;10¹³) shots on the National Ignition Facility. Recently, a comprehensive study of detector impulse responses revealed certain detectors with very fast and consistent impulse responses with a rise time of &lt;50 ps, enabling low yield burn history measurements. At the current standoff of 50 cm, this measurement is possible with fast 14 MeV neutrons from deuterium–tritium (DT) fusion plasmas. PTOF-inferred DT burn width numbers compare well with widths inferred from the gamma reaction history diagnostic on midyield (10¹³–10¹⁵) shots, where both systems are capable of making this measurement. These new capabilities could be extended to 2.5 MeV deuterium–deuterium neutrons from D plasmas and to even lower yield by reducing the detector standoff distance to 10 cm; a design for this is also presented.
</description>
<pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165351</guid>
<dc:date>2025-01-06T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning driven measurement of high-aspect-ratio nanostructures using Mueller matrix spectroscopic ellipsometry</title>
<link>https://hdl.handle.net/1721.1/165350</link>
<description>Machine learning driven measurement of high-aspect-ratio nanostructures using Mueller matrix spectroscopic ellipsometry
Mudide, Shiva; Keller, Nick; Andrew Antonelli, G; Cruz, Geraldina; Hart, Julia; Bruccoleri, Alexander R; Heilmann, Ralf K; Schattenburg, Mark L
Accurate fabrication of high-aspect-ratio (HAR) structures, in applications from semiconductor devices to x-ray observatories, is essential because performance directly depends on structure. High-efficiency critical-angle transmission (CAT) gratings enable high-resolution x-ray spectroscopy in astrophysics, but their performance is only ideal when certain performance-critical parameters, like the bar tilts introduced during deep reactive-ion etching, are tuned to precise values. Traditional measurement methods like small-angle x-ray scattering (SAXS) are accurate, but because they are slow and often destructive, they limit the development of robust control algorithms to nudge performance-critical parameters toward favorable values. We present a fast, accurate, nondestructive measurement method using Mueller matrix spectroscopic ellipsometry and machine learning. Given a HAR structure, we train on rigorous coupled-wave analysis simulation data to predict Mueller matrix spectra from input performance-critical parameter values. We then invert this forward problem by freezing our network weights, measuring experimental Mueller matrix spectra, and performing vanilla gradient descent on the performance-critical parameters toward values that reproduce the measured Mueller matrix spectra. Introducing machine learning to invert the forward problem reduces computation time, and experimental results demonstrate close agreement between the tilt determined by our method and SAXS measurements. Our accurate, fast measurement method paves the way for the development of robust control algorithms that adjust fabrication parameters in response to measurement, ensuring optimal performance in not only CAT gratings but also HAR structures embedded in applications from semiconductor to microelectromechanical systems fabrication.
</description>
<pubDate>Tue, 07 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165350</guid>
<dc:date>2025-01-07T00:00:00Z</dc:date>
</item>
<item>
<title>Self-aligned fabrication of vertical, fin-based structures</title>
<link>https://hdl.handle.net/1721.1/165349</link>
<description>Self-aligned fabrication of vertical, fin-based structures
Perozek, Joshua; Palacios, Tomás
Modern power devices rely on complex, three-dimensional, vertical designs to increase their power density, ease their thermal management, and improve their reliability. However, fabrication techniques have historically relied on 2D processes for patterning lateral features. This work presents a new technology that uses multiple steps of angled depositions to fabricate self-aligned vertical, fin-based devices that avoid fundamental lithography resolution and alignment limitations. The fabrication flows of two devices, the self-aligned vertical finFET and the high-κ dielectric fin diode, are presented to demonstrate how angled depositions can readily achieve transistors with submicrometer, vertical gates in a source-first process and also create high-aspect ratio GaN fins with a record 70:1 aspect ratio.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165349</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating deuterated-xylene for use as a fusion neutron spectrometer</title>
<link>https://hdl.handle.net/1721.1/165348</link>
<description>Evaluating deuterated-xylene for use as a fusion neutron spectrometer
Ball, JL; Panontin, E; Mackie, S; Tinguely, RA; Raj, P
The spectrum of neutrons emitted by thermonuclear plasmas encodes information about the fuel ion distribution function. Measuring these fast neutron spectra with sufficient resolution allows for the measurement of plasma properties such as the ion temperature and the strength and energy of fast ion populations. Liquid organic scintillators are a commonly used fast neutron detection technology because of their high detection efficiency and ability to discriminate between neutrons and gammas. However, performing detailed spectroscopy with these detectors is difficult because of the isotropic nature of neutron scattering on protons, the dominant mechanism of interaction. Deuterium-based scintillators have shown promise as a superior spectrometer technology because of the anisotropic nature of neutron scattering on deuterium, which significantly improves the condition number of the detector response matrix [Lawrence et al., Nucl. Instrum. Methods Phys. Res., Sect. A 729, 924 (2013)]. Deuterated-xylene, now available commercially, has advantages in light output and safety over benzene-based deuterated scintillators [Becchetti et al., Nucl. Instrum. Methods Phys. Res., Sect. A 820, 112 (2016)]. We present experimental spectrum unfoldings made by 2 in. right cylindrical protiated-xylene and deuterated-xylene detectors with response matrices generated with Geant4 and additional data from the literature. We compare their performance by measuring the neutron spectrum produced by an AmBe source and deuterium–tritium (DT) neutron generators. We find that the deuterated scintillator outperforms the protiated one for AmBe and DT spectra, suggesting deuterated-xylene should be considered for future fusion neutron spectrometry applications.
</description>
<pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165348</guid>
<dc:date>2024-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Immersed boundary method for dynamic simulation of polarizable colloids of arbitrary shape in explicit ion electrolytes</title>
<link>https://hdl.handle.net/1721.1/165347</link>
<description>Immersed boundary method for dynamic simulation of polarizable colloids of arbitrary shape in explicit ion electrolytes
Krucker-Velasquez, Emily; Swan, James W; Sherman, Zachary
We develop a computational method for modeling electrostatic interactions of arbitrarily shaped, polarizable objects on colloidal length scales, including colloids/nanoparticles, polymers, and surfactants, dispersed in explicit ion electrolytes and nonionic solvents. Our method computes the nonuniform polarization charge distribution induced in a colloidal particle by both externally applied electric fields and local electric fields arising from other charged objects in the dispersion. This leads to expressions for electrostatic energies, forces, and torques that enable efficient molecular dynamics and Brownian dynamics simulations of colloidal dispersions in electrolytes, which can be harnessed to accurately predict structural and transport properties. We describe an implementation in which colloidal particles are modeled as rigid composites of small spherical beads that tessellate the surface of the particle. The electrostatics calculations are accelerated using a spectrally accurate particle-mesh-Ewald technique implemented on a graphics processing unit and regularized such that the electrostatic calculations are well-defined even for overlapping bodies. We illustrate the effectiveness of this approach with a comprehensive set of calculations: the induced dipole moments and forces for individual, paired, and lattice configurations of spherical colloids in an electric field; the induced dipole moment and torque for anisotropic particles subjected to an electric field; the equilibrium ion distribution in the double layer surrounding charged colloids; the dynamics of charged colloids; and the behavior of ions in the double layer of a polarizable colloid under the influence of an electric field.
</description>
<pubDate>Fri, 25 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165347</guid>
<dc:date>2024-10-25T00:00:00Z</dc:date>
</item>
<item>
<title>Errors in the field reconstruction using CR-39 proton radiographs with high fluence variation</title>
<link>https://hdl.handle.net/1721.1/165346</link>
<description>Errors in the field reconstruction using CR-39 proton radiographs with high fluence variation
Foo, BC; Buschmann, BI; Cufari, M; Dannhoff, SG; DeVault, A; Evans, TE; Johnson, TM; Kunimune, JH; Lawrence, Y; Pearcy, JA; Reichelt, BL; Russell, L; Vanderloo, N; Vargas, J; Wink, CW; Johnson, M Gatu; Séguin, FH; Petrasso, RD; Frenje, JA
CR-39 proton radiography is an experimental charged-particle backlighter platform fielded and used at OMEGA and the NIF to image electric and magnetic fields in a subject plasma. Processing a piece of CR-39 involves etching it in hot NaOH, and the etch time can greatly impact the background-to-signal ratio (BSR) in low-fluence (≲4 × 10⁴ cm⁻²) regions and detection efficiency in high-fluence regions (≳7 × 10⁵ cm⁻²). For CR-39 data with high fluence variation, these effects mean that any single etch time will result in ≳15% error in the measured signal in either the high- or low-fluence regions. This study aims to quantify the impact of the etch time on the BSR and efficiency losses and how these affect the field reconstruction. Experiments at the MIT Linear Electrostatic Ion Accelerator provided empirical values of the BSR and efficiency losses as a function of the fluence and etch time for fluences ranging from 3 × 10³ to 7 × 10⁵ cm⁻². Synthetic radiographs were generated with known fields and modulated based on empirical values of BSR and efficiency losses. The fields were reconstructed using a Monge–Ampère code with the modulated radiographs as input. The results indicate that combining short and long etches allows for more accurate analysis of radiographs with high fluence variation, with the mean squared error of the reconstructed fields decreasing by factors of 1.2–7 compared to the reconstructions using only one etch time.
</description>
<pubDate>Thu, 31 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165346</guid>
<dc:date>2024-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic structure of self-buffered BaZr(S, Se)3 epitaxial thin film interfaces</title>
<link>https://hdl.handle.net/1721.1/165345</link>
<description>Atomic structure of self-buffered BaZr(S, Se)3 epitaxial thin film interfaces
Xu, Michael; Ye, Kevin; Sadeghi, Ida; Jaramillo, R; LeBeau, James M
Understanding and controlling the growth of chalcogenide perovskite thin films through interface design is important for tailoring film properties. Here, the film and interface structure of BaZr(S, Se)3 thin films grown on LaAlO3 by molecular beam epitaxy and postgrowth anion exchange is resolved using aberration-corrected scanning transmission electron microscopy. Epitaxial films are achieved from self-assembly of an interface “buffer” layer, which accommodates the large film/substrate lattice mismatch of nearly 40% for the alloy film studied here. The self-assembled buffer layer, occurring for both the as-grown sulfide and post-selenization alloy films, is shown to have rock-salt-like atomic stacking akin to a Ruddlesden–Popper phase. These results provide insights into oxide-chalcogenide heteroepitaxial film growth, illustrating a process that yields relaxed, crystalline, epitaxial chalcogenide perovskite films that support ongoing studies of optoelectronic and device properties.
</description>
<pubDate>Mon, 16 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165345</guid>
<dc:date>2024-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic measurements of runaway electron synchrotron emission in the SPARC tokamak</title>
<link>https://hdl.handle.net/1721.1/165344</link>
<description>Synthetic measurements of runaway electron synchrotron emission in the SPARC tokamak
Tinguely, RA; Rosenthal, AM; Silva Sa, M; Jean, M; Abramovic, I
With plasma currents up to 8.7 MA, the SPARC tokamak runs the risk of forming multi-MA beams of relativistic “runaway” electrons (REs), which could damage plasma facing components if unmitigated. The infrared (IR) and visible imaging and visible spectroscopy systems in SPARC are designed with measurements of synchrotron emission from REs in mind. Synchrotron radiation is emitted by REs along their direction of motion, opposite the plasma current. Matched clockwise and counterclockwise wide views are proposed to detect synchrotron and background radiation, allowing observation of RE synchrotron emission in both plasma current configurations. Due to SPARC’s high toroidal magnetic field strength, 12.2 T on axis, the synchrotron light spectrum is expected to peak in the visible-IR wavelength range. The synthetic diagnostic tool, Synchrotron Orbit-Following Toolkit, is used to model synchrotron images and spectra for three scenarios, with appropriate magnetic equilibria for each: REs generated during plasma current ramp-up, steady-state flat-top (unlikely, but serving as a reference), and disruptions. Required time resolutions, achievable spatial coverage, and appropriate spectral ranges for various RE energies are assessed.
</description>
<pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165344</guid>
<dc:date>2024-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the effects of neutron fluence on proton signal retention in CR-39</title>
<link>https://hdl.handle.net/1721.1/165343</link>
<description>Quantifying the effects of neutron fluence on proton signal retention in CR-39
Russell, L; Johnson, TM; Lawrence, Y; Reichelt, B; Vanderloo, N; Cufari, M; Buschmann, BI; Dannhoff, S; DeVault, A; Doeg, E; Evans, T; Foo, BC; Frankel, R; Kunimune, JH; Pearcy, JA; Vargas, J; Gatu Johnson, M; Frenje, J
This paper reports on investigations of the impact of high neutron fluences on the detection efficiency of protons with CR-39, a charged particle track detector. CR-39 is widely used as a diagnostic for inertial fusion applications and is an integral component of numerous particle diagnostics at the OMEGA laser facility and the National Ignition Facility. As experiments continue to produce higher and higher yields, existing diagnostics are exposed to higher particle fluences than they were originally designed for. This paper presents data from experiments measuring proton signal on pieces of CR-39 with different levels of neutron fluence at two different etch times. The experiments show a decrease in signal recovery with increased neutron fluence, which is exacerbated at longer etch times. At 1.3 × 10⁵ neutron-induced tracks per cm², data suggest a 17% ± 7% signal loss at a 3 h etch time and a 67% ± 21% loss at a 6 h etch time. Careful signal isolation techniques can recover most of the proton tracks even with moderate neutron fluence.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165343</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Transcriptome-wide mapping reveals a diverse dihydrouridine landscape including mRNA</title>
<link>https://hdl.handle.net/1721.1/165314</link>
<description>Transcriptome-wide mapping reveals a diverse dihydrouridine landscape including mRNA
Draycott, Austin S; Schaening-Burgos, Cassandra; Rojas-Duran, Maria F; Wilson, Loren; Schärfen, Leonard; Neugebauer, Karla M; Nachtergaele, Sigrid; Gilbert, Wendy V
Dihydrouridine is a modified nucleotide universally present in tRNAs, but the complete dihydrouridine landscape is unknown in any organism. We introduce dihydrouridine sequencing (D-seq) for transcriptome-wide mapping of D with single-nucleotide resolution and use it to uncover novel classes of dihydrouridine-containing RNA in yeast which include mRNA and small nucleolar RNA (snoRNA). The novel D sites are concentrated in conserved stem-loop regions consistent with a role for D in folding many functional RNA structures. We demonstrate dihydrouridine synthase (DUS)-dependent changes in splicing of a D-containing pre-mRNA in cells and show that D-modified mRNAs can be efficiently translated by eukaryotic ribosomes in vitro. This work establishes D as a new functional component of the mRNA epitranscriptome and paves the way for identifying the RNA targets of multiple DUS enzymes that are dysregulated in human disease.
</description>
<pubDate>Tue, 24 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165314</guid>
<dc:date>2022-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>SARS-CoV-2 antibodies protect against reinfection for at least 6 months in a multicentre seroepidemiological workplace cohort</title>
<link>https://hdl.handle.net/1721.1/165313</link>
<description>SARS-CoV-2 antibodies protect against reinfection for at least 6 months in a multicentre seroepidemiological workplace cohort
Finch, Emilie; Lowe, Rachel; Fischinger, Stephanie; de St Aubin, Michael; Siddiqui, Sameed M; Dayal, Diana; Loesche, Michael A; Rhee, Justin; Beger, Samuel; Hu, Yiyuan; Gluck, Matthew J; Mormann, Benjamin; Hasdianda, Mohammad A; Musk, Elon R; Alter, Galit; Menon, Anil S; Nilles, Eric J; Kucharski, Adam J
Identifying the potential for SARS-CoV-2 reinfection is crucial for understanding possible long-term epidemic dynamics. We analysed longitudinal PCR and serological testing data from a prospective cohort of 4,411 United States employees in 4 states between April 2020 and February 2021. We conducted a multivariable logistic regression investigating the association between baseline serological status and subsequent PCR test result in order to calculate an odds ratio for reinfection. We estimated an odds ratio for reinfection ranging from 0.14 (95% CI: 0.019 to 0.63) to 0.28 (95% CI: 0.05 to 1.1), implying that the presence of SARS-CoV-2 antibodies at baseline is associated with around 72% to 86% reduced odds of a subsequent PCR positive test based on our point estimates. This suggests that primary infection with SARS-CoV-2 provides protection against reinfection in the majority of individuals, at least over a 6-month time period. We also highlight 2 major sources of bias and uncertainty to be considered when estimating the relative risk of reinfection, confounders and the choice of baseline time point, and show how to account for both in reinfection analysis.
</description>
<pubDate>Thu, 10 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165313</guid>
<dc:date>2022-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Persona2vec: a flexible multi-role representations learning framework for graphs</title>
<link>https://hdl.handle.net/1721.1/165312</link>
<description>Persona2vec: a flexible multi-role representations learning framework for graphs
Yoon, Jisung; Yang, Kai-Cheng; Jung, Woo-Sung; Ahn, Yong-Yeol
Graph embedding techniques, which learn low-dimensional representations of a graph, are achieving state-of-the-art performance in many graph mining tasks. Most existing embedding algorithms assign a single vector to each node, implicitly assuming that a single representation is enough to capture all characteristics of the node. However, across many domains, it is common to observe pervasively overlapping community structure, where most nodes belong to multiple communities, playing different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that efficiently learns multiple representations of nodes based on their structural contexts. Using link prediction-based evaluation, we show that our framework is significantly faster than the existing state-of-the-art model while achieving better performance.
</description>
<pubDate>Tue, 30 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165312</guid>
<dc:date>2021-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Chemically induced reprogramming to reverse cellular aging</title>
<link>https://hdl.handle.net/1721.1/165311</link>
<description>Chemically induced reprogramming to reverse cellular aging
Yang, Jae-Hyun; Petty, Christopher A; Dixon-McDougall, Thomas; Lopez, Maria Vina; Tyshkovskiy, Alexander; Maybury-Lewis, Sun; Tian, Xiao; Ibrahim, Nabilah; Chen, Zhili; Griffin, Patrick T; Arnold, Matthew; Li, Jien; Martinez, Oswaldo A; Behn, Alexander; Rogers-Hammond, Ryan; Angeli, Suzanne; Gladyshev, Vadim N; Sinclair, David A
A hallmark of eukaryotic aging is a loss of epigenetic information, a process that can be reversed. We have previously shown that the ectopic induction of the Yamanaka factors OCT4, SOX2, and KLF4 (OSK) in mammals can restore youthful DNA methylation patterns, transcript profiles, and tissue function, without erasing cellular identity, a process that requires active DNA demethylation. To screen for molecules that reverse cellular aging and rejuvenate human cells without altering the genome, we developed high-throughput cell-based assays that distinguish young from old and senescent cells, including transcription-based aging clocks and a real-time nucleocytoplasmic compartmentalization (NCC) assay. We identify six chemical cocktails, which, in less than a week and without compromising cellular identity, restore a youthful genome-wide transcript profile and reverse transcriptomic age. Thus, rejuvenation by age reversal can be achieved not only by genetic but also by chemical means.
</description>
<pubDate>Wed, 12 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165311</guid>
<dc:date>2023-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>High-Level Expression and Biochemical Properties of A Thermo-Alkaline Pectate Lyase From Bacillus sp. RN1 in Pichia pastoris With Potential in Ramie Degumming</title>
<link>https://hdl.handle.net/1721.1/165310</link>
<description>High-Level Expression and Biochemical Properties of A Thermo-Alkaline Pectate Lyase From Bacillus sp. RN1 in Pichia pastoris With Potential in Ramie Degumming
Zheng, Xueyun; Zhang, Yimin; Liu, Xiaoxiao; Li, Cheng; Lin, Ying; Liang, Shuli
Pectate lyases play an essential role in the textile, animal feed, and oil extraction industries. Pichia pastoris can be an ideal platform for pectate lyase production, and BspPel (a thermo-alkaline pectate lyase from Bacillus sp. RN1) was overexpressed by combined strategies, reaching 1859 U/mL in a 50 L fermenter. It displayed the highest activity at 80°C and maintained more than 60% of the activity between 30 and 70°C for 1 h. It showed an optimal pH of 10.0 and exhibited remarkable stability over a wide pH range (3.0-11.0), retaining more than 80.0% of enzyme activity for 4 h. The Km and kcat of BspPel on PGA (polygalacturonic acid) were 2.19 g L–1 and 116.1 s–1, respectively. The activity was significantly enhanced by Ca2+, Mn2+, and Cu2+, and a slight increase was observed with the addition of Ba2+ and Mg2+. Scanning electron microscopy was used to show the degumming efficiency of BspPel on ramie fibers. The weight loss was 9.2% when treated with crude enzyme supernatant and 20.8% when treated with the enzyme-chemical method, the latter exceeding the 14.2% weight loss in the positive control treated with 0.5% (w/v) NaOH alone. In conclusion, BspPel could be a good candidate for the ramie degumming industry.
</description>
<pubDate>Fri, 24 Jul 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165310</guid>
<dc:date>2020-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Growth Factor Engineering Strategies for Regenerative Medicine Applications</title>
<link>https://hdl.handle.net/1721.1/165309</link>
<description>Growth Factor Engineering Strategies for Regenerative Medicine Applications
Ren, Xiaochen; Zhao, Moyuan; Lash, Blake; Martino, Mikaël M; Julier, Ziad
Growth factors are critical molecules for tissue repair and regeneration. Therefore, recombinant growth factors have raised a lot of hope for regenerative medicine applications. While using growth factors to promote tissue healing has widely shown promising results in pre-clinical settings, their success in the clinic is not a foregone conclusion. Indeed, translation of growth factors is often limited by their short half-life, rapid diffusion from the delivery site, and low cost-effectiveness. Attempts to circumvent those limitations through supraphysiological doses have led to serious side effects in many cases, and therefore innovative technologies are required to improve growth factor-based regenerative strategies. In this review, we present protein engineering approaches seeking to improve growth factor delivery and efficacy while reducing doses and side effects. We focus on engineering strategies seeking to improve the affinity of growth factors for biomaterials or the endogenous extracellular matrix. We then discuss some examples of increasing growth factor stability and bioactivity, and propose new lines of research that the field of growth factor engineering for regenerative medicine may adopt in the future.
</description>
<pubDate>Tue, 21 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165309</guid>
<dc:date>2020-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Quantification of muscle fiber malformations using edge detection to investigate chronic muscle pressure ulcers</title>
<link>https://hdl.handle.net/1721.1/165308</link>
<description>Quantification of muscle fiber malformations using edge detection to investigate chronic muscle pressure ulcers
Ong, Charlene ZL; Nasir, N Jannah M; Welsch, Roy E; Tucker-Kellogg, Lisa; Rajapakse, Jagath C
Background: Microscopy of regenerated tissue shows different morphologies between the healing of acute wounds and chronic wounds. This difference can be seen manually by biologists, but computational methods are needed to automate the characterization of morphology and regenerative quality in regenerated muscle tissue. Results: From the detected edge segments, we computed several imaging biomarkers of interest, such as median tortuosity, number of edge segments normalized by area, median edge segment distance and interquartile range of orientation angles of edge segments of the microscope images of successful and unsuccessful muscle regeneration. We observed that muscle fibers in saline-treated pressure ulcers had a larger interquartile range of orientation angles of the edge segments (p = 0.05) and shorter edge segment distances (p = 0.003) compared to those of acute cardiotoxin injuries. Conclusion: Our edge detection method was able to identify statistically significant differences in some of the imaging biomarkers between saline-treated pressure ulcers and cardiotoxin injuries, suggesting that chronic pressure ulcers have increased muscle fiber malformations compared to cardiotoxin injuries.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165308</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Fast and Fourier: extreme mass ratio inspiral waveforms in the frequency domain</title>
<link>https://hdl.handle.net/1721.1/165307</link>
<description>Fast and Fourier: extreme mass ratio inspiral waveforms in the frequency domain
Speri, Lorenzo; Katz, Michael L; Chua, Alvin JK; Hughes, Scott A; Warburton, Niels; Thompson, Jonathan E; Chapman-Bird, Christian EA; Gair, Jonathan R
Extreme Mass Ratio Inspirals (EMRIs) are one of the key sources for future space-based gravitational wave interferometers. Measurements of EMRI gravitational waves are expected to determine the characteristics of their sources with sub-percent precision. However, their waveform generation is challenging due to the long duration of the signal and the high harmonic content. Here, we present the first ready-to-use Schwarzschild eccentric EMRI waveform implementation in the frequency domain for use with either graphics processing units (GPUs) or central processing units (CPUs). We present the overall waveform implementation and test the accuracy and performance of the frequency domain waveforms against the time domain implementation. On GPUs, the frequency domain waveform takes a median of 0.044 s to generate and is twice as fast to compute as its time domain counterpart when considering massive black hole masses ≥ 2 × 10⁶ M⊙ and initial eccentricities e0 &gt; 0.2. On CPUs, the median waveform evaluation time is 5 s, and it is five times faster in the frequency domain than in the time domain. Using a sparser frequency array can further speed up the waveform generation, reaching up to 0.3 s. This enables us to perform, for the first time, EMRI parameter inference with fully relativistic waveforms on CPUs. Future EMRI models, which encompass wider source characteristics (particularly black hole spin and generic orbit geometries), will require significantly more harmonics. Frequency domain models will be essential analysis tools for these astrophysically realistic and important signals.
</description>
<pubDate>Fri, 12 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165307</guid>
<dc:date>2024-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Eigenvalue lower bounds and splitting for modified Ricci flow</title>
<link>https://hdl.handle.net/1721.1/165306</link>
<description>Eigenvalue lower bounds and splitting for modified Ricci flow
Colding, Tobias Holck; Minicozzi II, William P
We prove sharp lower bounds for eigenvalues of the drift Laplacian for a modified Ricci flow. The modified Ricci flow is a system of coupled equations for a metric and weighted volume that plays an important role in Ricci flow. We will also show that there is a splitting theorem in the case of equality.
</description>
<pubDate>Fri, 08 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165306</guid>
<dc:date>2024-03-08T00:00:00Z</dc:date>
</item>
<item>
<title>Propagation of symmetries for Ricci shrinkers</title>
<link>https://hdl.handle.net/1721.1/165305</link>
<description>Propagation of symmetries for Ricci shrinkers
Colding, Tobias Holck; Minicozzi II, William P
We will show that if a gradient shrinking Ricci soliton has an approximate symmetry on one scale, this symmetry propagates to larger scales. This is an example of the shrinker principle which roughly states that information radiates outwards for shrinking solitons.
</description>
<pubDate>Tue, 13 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165305</guid>
<dc:date>2023-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>The effects of morphology, mobility size, and secondary organic aerosol (SOA) material coating on the ice nucleation activity of black carbon in the cirrus regime</title>
<link>https://hdl.handle.net/1721.1/165304</link>
<description>The effects of morphology, mobility size, and secondary organic aerosol (SOA) material coating on the ice nucleation activity of black carbon in the cirrus regime
Zhang, Cuiqi; Zhang, Yue; Wolf, Martin J; Nichman, Leonid; Shen, Chuanyang; Onasch, Timothy B; Chen, Longfei; Cziczo, Daniel J
There is evidence that black carbon (BC) particles may affect cirrus formation and, hence, global climate by acting as potential ice nucleating particles (INPs) in the troposphere. Nevertheless, the ice nucleation (IN) ability of bare BC and BC coated with secondary organic aerosol (SOA) material remains uncertain. We have systematically examined the IN ability of 100–400 nm size-selected BC particles with different morphologies and different SOA coatings representative of anthropogenic (toluene and n-dodecane) and biogenic (β-caryophyllene) sources in the cirrus regime (−46 to −38 °C). Several BC proxies were selected to represent different particle morphologies and oxidation levels. Atmospheric aging was further replicated with the exposure of SOA-coated BC to OH. The results demonstrate that the 400 nm hydrophobic BC types nucleate ice only at or near the homogeneous freezing threshold. Ice formation at cirrus temperatures below homogeneous freezing thresholds, as opposed to purely homogeneous freezing, was observed to occur for some BC types between 100 and 200 nm within the investigated temperature range. More fractal BC particles did not consistently act as superior INPs over more spherical ones. SOA coating generated by oxidizing β-caryophyllene with O3 did not seem to affect BC IN ability, probably due to an SOA-phase state transition. However, SOA coatings generated from OH oxidation of various organic species did exhibit higher IN-onset supersaturation ratio with respect to ice (SSi), compared with bare BC particles, with the toluene-SOA coating showing an increase in SSi of 0.1–0.15 while still below the homogeneous freezing threshold. Slightly oxidized toluene SOA coating seemed to have a stronger deactivation effect on BC IN ability than highly oxidized toluene SOA, which might be caused by oligomer formation and the phase state transition of toluene SOA under different oxidation levels.
n-Dodecane and β-caryophyllene-derived SOA-coated BC froze only in the homogeneous regime. We attribute the inhibition of IN ability to the filling of the pores on the BC surface by the SOA material coating. OH exposure levels in the n-dodecane and β-caryophyllene SOA coating experiments, equivalent to atmospheric exposure times of 10 to 90 d, did not produce significant differences in the IN potential. Our study of selected BC types and sizes suggests that increases in diameter, compactness, and/or surface oxidation of BC particles lead to more efficient IN via the pore condensation freezing (PCF) pathway, and that coatings of common SOA materials can inhibit the formation of ice.
</description>
<pubDate>Thu, 19 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165304</guid>
<dc:date>2020-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>A machine learning examination of hydroxyl radical differences among model simulations for CCMI-1</title>
<link>https://hdl.handle.net/1721.1/165303</link>
<description>A machine learning examination of hydroxyl radical differences among model simulations for CCMI-1
Nicely, Julie M; Duncan, Bryan N; Hanisco, Thomas F; Wolfe, Glenn M; Salawitch, Ross J; Deushi, Makoto; Haslerud, Amund S; Jöckel, Patrick; Josse, Béatrice; Kinnison, Douglas E; Klekociuk, Andrew; Manyin, Michael E; Marécal, Virginie; Morgenstern, Olaf; Murray, Lee T; Myhre, Gunnar; Oman, Luke D; Pitari, Giovanni; Pozzer, Andrea; Quaglia, Ilaria; Revell, Laura E; Rozanov, Eugene; Stenke, Andrea; Stone, Kane; Strahan, Susan; Tilmes, Simone; Tost, Holger; Westervelt, Daniel M; Zeng, Guang
The hydroxyl radical (OH) plays critical roles within the troposphere, such as determining the lifetime of methane (CH4), yet is challenging to model due to its fast cycling and dependence on a multitude of sources and sinks. As a result, the reasons for variations in OH and the resulting methane lifetime (τCH4), both between models and in time, are difficult to diagnose. We apply a neural network (NN) approach to address this issue within a group of models that participated in the Chemistry-Climate Model Initiative (CCMI). Analysis of the historical specified dynamics simulations performed for CCMI indicates that the primary drivers of τCH4 differences among 10 models are the flux of UV light to the troposphere (indicated by the photolysis frequency J(O1D)), the mixing ratio of tropospheric ozone (O3), the abundance of nitrogen oxides (NOx ≡ NO + NO2), and details of the various chemical mechanisms that drive OH. Water vapour, carbon monoxide (CO), the ratio of NO:NOx, and formaldehyde (HCHO) explain moderate differences in τCH4, while isoprene, methane, the photolysis frequency of NO2 by visible light (JNO2), overhead ozone column, and temperature account for little to no model variation in τCH4. We also apply the NNs to analysis of temporal trends in OH from 1980 to 2015. All models that participated in the specified dynamics historical simulation for CCMI demonstrate a decline in τCH4 during the analysed timeframe. The significant contributors to this trend, in order of importance, are tropospheric O3, J(O1D), NOx, and H2O, with CO also causing substantial interannual variability in OH burden. Finally, the identified trends in τCH4 are compared to calculated trends in the tropospheric mean OH concentration from previous work, based on analysis of observations. The comparison reveals a robust result for the effect of rising water vapour on OH and τCH4, imparting an increasing and decreasing trend of about 0.5% decade−1, respectively. The responses due to NOx, ozone column, and temperature are also in reasonably good agreement between the two studies.
</description>
<pubDate>Wed, 05 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165303</guid>
<dc:date>2020-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>High-Resolution OCT Reveals Age-Associated Variation in the Region Posterior to the External Limiting Membrane</title>
<link>https://hdl.handle.net/1721.1/165302</link>
<description>High-Resolution OCT Reveals Age-Associated Variation in the Region Posterior to the External Limiting Membrane
Jamil, Muhammad Usman; Won, Jungeun; Ploner, Stefan B; Marmalidou, Anna; Takahashi, Hiroyuki; Kaiser, Stephanie; Hwang, Yunchan; Abu-Qamar, Omar; Yaghy, Antonio; Witkin, Andre J; Zhao, Peter Y; Desai, Shilpa; Duker, Jay S; Maier, Andreas; Fujimoto, James G; Waheed, Nadia K
Purpose: To evaluate visibility of a sub-band posterior to the external limiting membrane (ELM) and assess its age-associated variation.&#13;
Methods: In a retrospective cross-sectional study, normal eyes were imaged using a high-resolution spectral-domain optical coherence tomography (SD-OCT) prototype (2.7-µm axial resolution). Volume fusion of six sequential scans (each 500 × 500 A-scans over 6 mm × 6 mm) was performed in the motion correction and volume reconstruction in OCT (MoReOCT) framework to enhance feature visibility in OCT. The subjects were divided into three groups: young (21-40 years old), middle (41-60 years old), and older (&gt;60 years old). Three expert graders assessed the visibility of the sub-band on B-scans, and its A-scan intensity relative to ELM intensity (peak intensity ratio) was measured.&#13;
Results: Forty-four eyes of 44 subjects were imaged. The sub-band, tentatively attributed to the photoreceptor myoid, can be visualized under high-resolution OCT. The B-scan gradings showed that sub-band visibility increased with age (visible in 16.7%, 47.2%, and 66.7% of the young, middle, and older age groups, respectively). The gradings were statistically different among age groups at 1 mm and 2 mm nasal and 1 mm and 2 mm temporal (P &lt; 0.04) from the foveal center. Similarly, the mean peak intensity ratios of the sub-band to the ELM were 71.6%, 77.5%, and 85.2% in the young, middle, and older age groups, respectively, and were positively correlated with age at 1 mm temporal (P = 0.012) and 2 mm temporal (P &lt; 0.001).&#13;
Conclusions: High-resolution OCT, combined with advanced volume fusion, enables visualization of the photoreceptor myoid and investigation of its age-associated variations.&#13;
Translational Relevance: Investigating the sub-band can advance our understanding of photoreceptors and their association with aging and disease pathogenesis.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165302</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Topographic Measurement of the Subretinal Pigment Epithelium Space in Normal Aging and Age-Related Macular Degeneration Using High-Resolution OCT</title>
<link>https://hdl.handle.net/1721.1/165301</link>
<description>Topographic Measurement of the Subretinal Pigment Epithelium Space in Normal Aging and Age-Related Macular Degeneration Using High-Resolution OCT
Won, Jungeun; Takahashi, Hiroyuki; Ploner, Stefan B; Karbole, Wenke; Abu-Qamar, Omar; Yaghy, Antonio; Marmalidou, Anna; Kaiser, Stephanie; Hwang, Yunchan; Lin, Junhong; Witkin, Andre; Desai, Shilpa; Baumal, Caroline R; Maier, Andreas; Curcio, Christine A; Waheed, Nadia K; Fujimoto, James G
Purpose: A micrometer-scale hyporeflective band within the retinal pigment epithelium basal lamina - Bruch's membrane complex (RPE-BL-BrM) was topographically measured in aging and age-related macular degeneration (AMD).&#13;
Methods: In a prospective cross-sectional study, 90 normal eyes from 76 subjects (range = 23-90 years) and 53 dry AMD eyes from 47 subjects (range = 62-91 years) were enrolled. Isotropic volume raster scans over 6 mm × 6 mm (500 × 500 A-scans) were acquired using a high-resolution (2.7 µm axial resolution) spectral-domain optical coherence tomography (SD-OCT) prototype instrument. Six consecutive optical coherence tomography (OCT) volumes were computationally motion-corrected and fused to improve feature visibility. A boundary regression neural network was developed to measure hyporeflective band thickness. Topographic dependence was evaluated over a 6-mm-diameter Early Treatment Diabetic Retinopathy Study (ETDRS) grid.&#13;
Results: The hyporeflective band thickness map (median of 4.3 µm and 7.8 µm in normal and AMD eyes, respectively) is thicker below and radially symmetric around the fovea. In normal eyes, age-associated differences occur within 0.7 to 2.3 mm from the foveal center (P &lt; 0.05). In AMD eyes, the hyporeflective band is hypothesized to be basal laminar deposits (BLamDs) and is thicker within the 3-mm ETDRS circle (P &lt; 0.0002) compared with normal eyes. The inner ring is the most sensitive location to detect age versus AMD-associated changes within the RPE-BL-BrM. AMD eyes with subretinal drusenoid deposits (SDDs) have a significantly thicker hyporeflective band (P &lt; 0.001) than those without SDDs.&#13;
Conclusions: The hyporeflective band is a quantifiable biomarker which differentiates AMD from aging. Longitudinal studies are warranted. The hyporeflective band may be a useful biomarker for risk stratification and disease progression.
</description>
<pubDate>Fri, 09 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165301</guid>
<dc:date>2024-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of Choriocapillaris Flow Prior to Nascent Geographic Atrophy Development Using Optical Coherence Tomography Angiography</title>
<link>https://hdl.handle.net/1721.1/165300</link>
<description>Assessment of Choriocapillaris Flow Prior to Nascent Geographic Atrophy Development Using Optical Coherence Tomography Angiography
Greig, Eugenia Custo; Moult, Eric M; Despotovic, Ivana N; Hodgson, Lauren AB; Pramil, Varsha; Fujimoto, James G; Waheed, Nadia K; Guymer, Robyn H; Wu, Zhichao
Purpose: To assess the relationship between choriocapillaris (CC) loss and the development of nascent geographic atrophy (nGA) using optical coherence tomography angiography (OCTA) imaging.&#13;
Methods: In total, 105 eyes from 62 participants with bilateral large drusen, without late age-related macular degeneration (AMD) or nGA at baseline, were included in this prospective, longitudinal, observational study. Participants underwent swept-source OCTA imaging at 6-month intervals. CC flow deficit percentage (FD%) and drusen volume measurements were determined for the visit prior to nGA development or the second-to-last visit if nGA did not develop. Global and local analyses, the latter based on analyses within superpixels (120 × 120-µm regions), were performed to examine the association between CC FD% and future nGA development.&#13;
Results: A total of 15 (14%) eyes from 12 (19%) participants developed nGA. There was no significant difference in global CC FD% at the visit prior to nGA development between eyes that developed nGA and those that did not (P = 0.399). In contrast, CC FD% was significantly higher in superpixels that subsequently developed nGA compared to those that did not (P &lt; 0.001), and a model utilizing CC FD% was significantly better at predicting foci of future nGA development at the superpixel level than a model using drusen volume alone (P ≤ 0.040).&#13;
Conclusions: This study showed that significant impairments in CC blood flow could be detected locally prior to the development of nGA. These findings add to our understanding of the pathophysiologic changes that occur with atrophy development in AMD.
</description>
<pubDate>Thu, 18 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165300</guid>
<dc:date>2024-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Network Utility Maximization in Strategic Queueing Systems: A Game-Theoretic Approach</title>
<link>https://hdl.handle.net/1721.1/165299</link>
<description>Stochastic Network Utility Maximization in Strategic Queueing Systems: A Game-Theoretic Approach
Nguyen, Quang Minh; Berry, Randall; Modiano, Eytan
Stochastic Network Utility Maximization (NUM) has been a dominant framework for many queueing network resource allocation and control problems. Its original model seeks to optimize social welfare, which usually takes the form of the sum of local utilities of participating entities. However, such a centralized utility maximization approach is unsuitable for many modern multi-agent systems, in which each agent may selfishly optimize its local utility without regard to the overall utility. In this paper, we formulate the stochastic NUM problem in strategic queueing systems as a repeated game with queue stability constraints. In particular, the agents repeatedly make decisions to satisfy both their local constraints and global constraints, shared among them, while maintaining queue stability. The goal is to design a policy that constitutes a generalized Nash equilibrium (GNE) for the game. &#13;
&#13;
We first derive the fluid model characterization of the strategic queueing NUM problem via a static one-shot game formulation. This characterization motivates a primal-dual algorithm that constitutes an approximate GNE by ensuring last-iterate convergence to a solution of the regularized static one-shot game. However, similar to primal-dual methods developed for the classical NUM problem, this approach does not leverage real-time queue lengths in decision making, leading to suboptimal queueing delay in practice, and has no explicit performance guarantees. To this end, we propose the Strategic Drift-plus-Penalty (SDP) algorithm and show that it constitutes an $\varepsilon$-GNE and has a uniformly bounded expected queue length of order $O\big(\frac{1}{\varepsilon^3}\big)$ for any $\varepsilon &gt; 0$. Under an additional mild assumption that holds for a wide class of problems, we show that our algorithms achieve long-term average social welfare arbitrarily close to that of a welfare-maximizing GNE policy. Simulations validate our theory and demonstrate the favorable performance of our algorithms.
</description>
<pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165299</guid>
<dc:date>2026-03-26T00:00:00Z</dc:date>
</item>
<item>
<title>Pop-Up Encounters with Spot: Shaping Public Perceptions of Robots through Hands-On Experience</title>
<link>https://hdl.handle.net/1721.1/165298</link>
<description>Pop-Up Encounters with Spot: Shaping Public Perceptions of Robots through Hands-On Experience
Park, Hae Won; Van de Zande, Georgia D.; Zhang, Xiajie; Wendell, Dawn; Hodgins, Jessica
Public attitudes toward robots are often shaped by indirect exposure (e.g., media, staged demos), leaving open how direct, hands-on experience influences acceptance. In this study, we investigate how interacting with Boston Dynamics’ Spot, an agile, state-of-the-art quadruped robot, in a public pop-up booth affects perceptions of comfort and suitability across everyday and high-stakes environments. In a walk-up, 10-week pop-up booth, participants (N=753) completed pre–post surveys before and after driving Spot within curated Drive Scenes (Factory, Home, Hospital, Outdoor/Disaster). Measures captured comfort encountering robots and perceived suitability across Rated Contexts (RCs), affective reactions, and open-ended reflections. Hands-on control significantly increased comfort across all RCs, with the largest gains in Outdoor/Disaster, and increased perceived suitability—most in Home/Office/Hospital where baselines were lower. Improvements generalized beyond the experienced Drive Scene to other contexts. Age, gender, and prior familiarity moderated baseline levels and some changes, but hands-on exposure raised scores for all groups and attenuated several gaps. Thematic analysis showed memorable moments tied to locomotion, terrain adaptation, and expressive tilt; imagined roles consistently emphasized domestic assistance (e.g., cleaning, mobility), with entertainment/play and companionship emerging post-interaction. Together, these results demonstrate that brief, agency-granting encounters with a high-capability quadruped can broaden where people see robots as appropriate and diversify envisioned roles, offering a scalable model for public-facing HRI that fosters comfort, enthusiasm, and acceptance.
HRI ’26, Edinburgh, Scotland, UK
</description>
<pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165298</guid>
<dc:date>2026-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>Who’s the Boss? Children Negotiate Robot Control across Role and Context</title>
<link>https://hdl.handle.net/1721.1/165297</link>
<description>Who’s the Boss? Children Negotiate Robot Control across Role and Context
Pu, Isabella; Rogers, Kantwon; Dinh, Linh Dieu; Alghowinem, Sharifa; Breazeal, Cynthia
Children regularly negotiate questions of authority and control in home and school life, but little is known about how they believe robots should fit into these dynamics. We conducted a 75-minute design session with 17 children (ages 6-9) to examine when robots should take, share, or defer control, and how expectations shift when robots are framed as teachers, classmates, or mentees. Children resisted robot control, particularly in adult-regulated domains and areas tied to personal skill or self-expression. They were more open to robot control in domains where they felt less competent, or where robots, perceived as less legitimate authorities than humans, could substitute for adult control. Role framing further shaped expectations: teacher robots were granted autonomy, classmate robots were expected to act as peers, and mentee robots were expected to defer. These findings show that children apply context- and role-sensitive rules when negotiating control with robots. We conclude with design considerations for robots in children's everyday lives that respect children's agency, calibrate autonomy by domain, and align behavior with children's context-sensitive expectations.
HRI ’26, Edinburgh, Scotland, UK
</description>
<pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165297</guid>
<dc:date>2026-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>WiReSens Toolkit: An Open-source Platform towards Accessible Wireless Tactile Sensing</title>
<link>https://hdl.handle.net/1721.1/165296</link>
<description>WiReSens Toolkit: An Open-source Platform towards Accessible Wireless Tactile Sensing
Murphy, Devin; Zhu, Junyi; Gadre, Akshay; Torralba, Antonio; Liang, Paul Pu; Matusik, Wojciech; Luo, Yiyue
Past research has widely explored the design and fabrication of resistive matrix-based tactile sensors for creating touch-sensitive devices. However, real-world deployment of resistive tactile sensing systems remains difficult for individuals with limited prior experience in embedded sensing due to challenges of portability, adaptivity, and efficiency. We introduce the WiReSens Toolkit, an accessible, open-source platform to bridge this gap. Central to our approach is adaptive hardware for interfacing with resistive sensors and a web-based GUI that streamlines access to advanced features for building scalable tactile sensing systems, including multi-device programming and wireless visualization across three communication protocols, autocalibration for adaptive sensitivity, and intermittent data transmission for low-power use. We validated the toolkit’s usability through a user study with 11 novice participants, who, on average, configured a tactile sensor with over 95% accuracy in under five minutes, calibrated sensors 10× faster than baseline methods, and showed improved sense-making of tactile data.
TEI ’26, Chicago, IL, USA
</description>
<pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165296</guid>
<dc:date>2026-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>PixBric: Precision Morphological Control of Pre-Stretched Fabrics Through Tessellated Primitive Geometries</title>
<link>https://hdl.handle.net/1721.1/165295</link>
<description>PixBric: Precision Morphological Control of Pre-Stretched Fabrics Through Tessellated Primitive Geometries
Youn, Hye Jun; Choe, Jun Kyu; Ahn, SooYeon; Tania, Marcello; Sara, Serena Xin Wei; Ishii, Hiroshi
3D printing onto pre-stretched fabrics has emerged as a promising technique for fabricating self-shaping textiles. However, resulting morphing behaviors are often dictated by heuristics or arbitrarily selected parameters. We present PixBric, a pixel-based design framework that enables precise morphological control through tessellated primitive geometries printed onto biaxially stretched fabrics. Upon release, these units buckle into programmed 3D forms including undulations, curling, and bistable snapping. PixBric integrates parametric modeling, mechanical simulation, and empirical evaluation to map geometric parameters to deformation outcomes. We demonstrate applications spanning morphable typography, wearable rings, and reconfigurable surfaces. PixBric bridges digital simulation (tide) with the mechanical constraints of elastic substrates (tied), transforming complex material behaviors into accessible tools for learning, experimentation, and creative fabrication.
TEI ’26, Chicago, IL, USA
</description>
<pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165295</guid>
<dc:date>2026-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>ChromoLCD: LCD-based Compact Reprogrammer for On-the-fly High-Resolution Images on Photochromic Surfaces</title>
<link>https://hdl.handle.net/1721.1/165293</link>
<description>ChromoLCD: LCD-based Compact Reprogrammer for On-the-fly High-Resolution Images on Photochromic Surfaces
Zhu, Yunyi; Li, Qingyuan; Yan, Katherine; Guan, Emily; Luchianov, Alexandru; Hen, Eden; Mueller, Stefanie
Color-changing materials, such as photochromic pigments, allow objects to have reprogrammable multicolor surface images. Existing systems that reprogram these images are based on projectors and LEDs, each with advantages and limitations in device portability and image resolution. In this paper, we present ChromoLCD, a surface reprogrammer that uses a liquid crystal display (LCD) to achieve a compact handheld device without sacrificing image resolution. ChromoLCD consists of an LCD panel with a custom backlight containing R, G, B, and UV LEDs, forming high-resolution light patterns with the required wavelengths. The compact form factor of ChromoLCD enables on-the-fly reprogramming of everyday surfaces. Our technical evaluation shows that ChromoLCD achieves a resolution of 25 ppi, which is 8 times better than prior work. We demonstrate ChromoLCD with three applications, including the stamping of reprogrammable AR markers on a kitchen counter, on-the-fly designs on personal accessories, and reference pictures on a whiteboard.
TEI ’26, Chicago, IL, USA
</description>
<pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165293</guid>
<dc:date>2026-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>Natural Convection Heat Transfer from an Inclined Cylinder</title>
<link>https://hdl.handle.net/1721.1/165292</link>
<description>Natural Convection Heat Transfer from an Inclined Cylinder
Jaffer, Aubrey
Based on Jaffer’s (2023) heat engine analysis of natural convection, this investigation mathematically derives a novel, comprehensive formula predicting the natural convective heat transfer from an inclined cylinder given its length, diameter, angle, and Rayleigh number and the fluid’s Prandtl number and thermal conductivity. The present formula was tested with 93 inclined cylinder measurements having length-to-diameter ratios between 1.48 and 104 in nine data-sets from three peer-reviewed studies, yielding per-data-set root-mean-squared relative error values between 1.9% and 4.7%.
</description>
<pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165292</guid>
<dc:date>2026-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Stability and Reactivity of Cyclopentane Nucleoside Analogs in 98% w/w Sulfuric Acid</title>
<link>https://hdl.handle.net/1721.1/165291</link>
<description>Stability and Reactivity of Cyclopentane Nucleoside Analogs in 98% w/w Sulfuric Acid
Seager, Sara; Seager, Maxwell D.; Visser, Ton; Marinus, Nittert; Poizat, Mael; van Wiltenburg, Jim; Poelert, Martin; Petkowski, Janusz J.
We synthesized seven carbocyclic nucleoside analogs featuring a cyclopentane ring in place of the (deoxy)ribose sugar, which serves as a linker in DNA/RNA nucleosides. We assessed the stability of cyclopentane nucleosides in 98% w/w sulfuric acid at room temperature via 1H and 13C NMR spectroscopy. We observe that adenine (A1, A4), guanine (G1) and thymine (T1) cyclopentane nucleoside analogs remain stable for at least two weeks at room temperature, with only minor (~4%) degradation in A1. In contrast, the cytosine analog (C1) rapidly degrades to release a soluble cytosine. Methyl-substituted adenine analogs mimicking polymer backbone attachments at positions prone to tertiary carbocation formation (A2, A3) prove unstable and release soluble adenine. Only the 3,3-dimethylcyclopentyl adenine analog (A4) exhibits sufficient stability. Our findings reveal that cyclopentane serves as a viable stable linker in concentrated sulfuric acid for select nucleic acid bases, provided that the backbone connections avoid tertiary carbons susceptible to carbocation-mediated cleavage. We thus identify one potential key structural feature for engineering examples of genetic-like polymers that could potentially persist in Venus’s concentrated sulfuric acid cloud environment.
</description>
<pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165291</guid>
<dc:date>2026-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Operation of a Modular 3D-Pixelated Liquid Argon Time-Projection Chamber in a Neutrino Beam</title>
<link>https://hdl.handle.net/1721.1/165290</link>
<description>Operation of a Modular 3D-Pixelated Liquid Argon Time-Projection Chamber in a Neutrino Beam
Abbaslu, S.; Abud, A. Abed; Acciarri, R.; Accorsi, L. P.; Acero, M. A.; Adames, M. R.; Adamov, G.; Adamowski, M.; Adriano, C.; Akbar, F.; Alemanno, F.; Alex, N. S.; Allison, K.; Alrashed, M.; Alton, A.; Alvarez, R.; Alves, T.; Aman, A.; Amar, H.; Amedo, P.
The 2x2 Demonstrator, a prototype for the Deep Underground Neutrino Experiment (DUNE) liquid argon (LAr) Near Detector, was exposed to the Neutrinos from the Main Injector (NuMI) neutrino beam at Fermi National Accelerator Laboratory (Fermilab). This detector is a prototype of a new modular design for a liquid argon time-projection chamber (LArTPC), comprising a two-by-two array of four modules, each further segmented into two optically isolated LArTPCs. The 2x2 Demonstrator features a number of pioneering technologies, including a low-profile resistive field shell to establish drift fields, native 3D ionization pixelated imaging, and a high-coverage dielectric light readout system. The 2.4-tonne active mass detector is flanked upstream and downstream by supplemental solid-scintillator tracking planes, repurposed from the MINERvA experiment, which track ionizing particles exiting the argon volume. The antineutrino beam data collected by the detector over a 4.5 day period in 2024 include over 30,000 neutrino interactions in the LAr active volume—the first neutrino interactions reported by a DUNE detector prototype. During its physics-quality run, the 2x2 Demonstrator operated at a nominal drift field of 500 V/cm and maintained good LAr purity, with a stable electron lifetime of approximately 1.25 ms. This paper describes the detector and supporting systems, summarizes the installation and commissioning, and presents the initial validation of collected NuMI beam and off-beam self-triggers. In addition, it highlights observed interactions in the detector volume, including candidate muon antineutrino events.
</description>
<pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165290</guid>
<dc:date>2026-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated Full Waveform Inversion by Deep Compressed Learning</title>
<link>https://hdl.handle.net/1721.1/165288</link>
<description>Accelerated Full Waveform Inversion by Deep Compressed Learning
Gelboim, Maayan; Adler, Amir; Araya-Polo, Mauricio
We propose and test a method to reduce the dimensionality of Full Waveform Inversion (FWI) inputs as a computational cost mitigation approach. Given modern seismic acquisition systems, the data (as an input for FWI) required for an industrial-strength case is at the terabyte level of storage; therefore, solving complex subsurface cases or exploring multiple scenarios with FWI becomes prohibitive. The proposed method utilizes a deep neural network with a binarized sensing layer that learns, by compressed learning, seismic acquisition layouts from a large corpus of subsurface models. Thus, given a large seismic data set to invert, the trained network selects a smaller subset of the data; then, using representation learning, an autoencoder computes latent representations of the shot gathers, followed by K-means clustering of the latent representations to further select the most relevant shot gathers for FWI. This approach can effectively be seen as a hierarchical selection. The proposed approach consistently outperforms random data sampling, even when utilizing only 10% of the data for 2D FWI, and these results pave the way to accelerating FWI in large-scale 3D inversion.
</description>
<pubDate>Fri, 13 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165288</guid>
<dc:date>2026-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>A Time-Symmetric and Retrocausal Resolution of the EPR Paradox</title>
<link>https://hdl.handle.net/1721.1/165287</link>
<description>A Time-Symmetric and Retrocausal Resolution of the EPR Paradox
Heaney, Michael B.
The Copenhagen Interpretation of quantum mechanics explains the Einstein, Podolsky, and Rosen (EPR) experiments with “spooky action at a distance” and nonlocal wavefunction collapse. A time-symmetric and retrocausal interpretation of quantum mechanics explains the same experiments without spooky action at a distance or nonlocal wavefunction collapse. An experiment that can distinguish between the Copenhagen and Time-Symmetric Interpretations is described.
</description>
<pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165287</guid>
<dc:date>2026-03-12T00:00:00Z</dc:date>
</item>
<item>
<title>Ionic Liquid Biospheres</title>
<link>https://hdl.handle.net/1721.1/165286</link>
<description>Ionic Liquid Biospheres
Seager, Sara; Bains, William; Iakubivskyi, Iaroslav; Agrawal, Rachana; Jenkins, John; Shinde, Pranav; Petkowski, Janusz J.
Liquid is a fundamental requirement for life as we understand it, but whether that liquid has to be water is not known. We propose the hypothesis that ionic liquids (ILs) and deep eutectic solvents (DES) constitute a class of non-aqueous planetary liquids capable of persisting on a wide range of bodies where stable liquid water cannot exist. This hypothesis is motivated by key physical properties of ILs and DES. Many exhibit vapor pressures orders of magnitude lower than that of water and remain liquid across exceptionally wide temperature ranges, from cryogenic to well above terrestrial temperatures. These properties permit stable liquids to exist where liquid water would rapidly evaporate or freeze and outside of bulk phases as persistent microscale reservoirs, such as thin films and pore-filling droplets. In other words, ILs and DES can persist in environments without requiring oceans, thick atmospheres, or narrowly regulated climate conditions. We further hypothesize that ILs and DES could act as solvents for non-Earth-like life, based on their polar nature and the demonstrated stability and functionality of proteins and other biomolecules in ionic liquids. More speculatively, our hypothesis extends to the idea that ILs and DES could enable prebiotic chemistry by providing long-lived, protective liquid environments for complex organic molecules on bodies such as comets and asteroids, where liquid water is absent. Additionally, based on the occurrence of DES-like mixtures as protective intracellular liquids in desiccation-tolerant plants, we propose that ILs and DES might be solvents that life elsewhere purposefully evolves. We review protein and other biomolecule studies in ILs and DES and outline planetary environments in which ILs and DES might occur by discussing available anions and cations.
We present strategies to advance the IL/DES solvent hypothesis using laboratory studies, computational chemistry, planetary missions, analysis of existing spectroscopic datasets, and modeling of liquid microniches and chemical survival on small bodies.
</description>
<pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165286</guid>
<dc:date>2026-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Nomenclatural Recommendations for Genera Assigned to the Arcobacteraceae from the Subcommittee on the Taxonomy of Campylobacter and Related Bacteria</title>
<link>https://hdl.handle.net/1721.1/165285</link>
<description>Nomenclatural Recommendations for Genera Assigned to the Arcobacteraceae from the Subcommittee on the Taxonomy of Campylobacter and Related Bacteria
On, Stephen L. W.; Figueras, Maria J.; Fox, James G.; Houf, Kurt; Mégraud, Francis; Miller, William G.; Stolz, John; Takai, Ken; Vandamme, Peter
The taxonomy of the genus Arcobacter has been subject to substantive turmoil in recent years following a proposal to subdivide the genus into six genera. This proposal has been challenged by a number of multidisciplinary studies employing phenotypic, genomic, and phylogenetic analyses. Following several discussions among members of the International Committee on Systematics of Prokaryotes (ICSP) subcommittee on the taxonomy of Campylobacter and related bacteria, this group now unanimously recommends the use of the genus term Arcobacter to refer to these species.
</description>
<pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165285</guid>
<dc:date>2026-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Local Geographic Atrophy Growth Rates Not Influenced by Close Proximity to Non-Exudative Type 1 Macular Neovascularization</title>
<link>https://hdl.handle.net/1721.1/165284</link>
<description>Local Geographic Atrophy Growth Rates Not Influenced by Close Proximity to Non-Exudative Type 1 Macular Neovascularization
Trivizki, Omer; Moult, Eric M; Wang, Liang; Iyer, Prashanth; Shi, Yingying; Gregori, Giovanni; Feuer, William; Fujimoto, James G; Rosenfeld, Philip J
Purpose: The local growth rates of geographic atrophy (GA) adjacent to non-exudative type 1 macular neovascularization (MNV) were investigated to determine if MNV influenced GA growth.
Methods: Eyes with GA and non-exudative type 1 MNV were followed for at least 1 year. Both GA and the MNV were imaged and measured using swept-source optical coherence tomography angiography (SS-OCTA) scans. Pearson correlations were computed between local growth rates of GA, which were estimated using a biophysical GA growth model, and local distances-to-MNV. Corresponding P values for the null hypothesis of no Pearson correlation were computed using a Monte Carlo approach that adjusts for spatial autocorrelations.
Results: Nine eyes were included in this study. There were positive correlations (Pearson's r &gt; 0) between distance-to-MNV and local GA growth in eight (89%) of the eyes; however, in all but one eye (11%), correlations were relatively weak and statistically nonsignificant after Bonferroni correction (corrected P &gt; 0.05).
Conclusions: SS-OCTA imaging combined with GA growth modeling and spatial statistical analysis enabled quantitative assessment of correlations between local GA growth rates and local distances-to-MNV. Our results are not consistent with non-exudative type 1 MNV having a strong inhibitory effect on local GA growth rates.
</description>
<pubDate>Fri, 14 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165284</guid>
<dc:date>2022-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>Optical trapping of SrOH molecules for dark matter and T-violation searches</title>
<link>https://hdl.handle.net/1721.1/165283</link>
<description>Optical trapping of SrOH molecules for dark matter and T-violation searches
Sawaoka, Hiromitsu; Nasir, Abdullah; Lunstad, Annika; Li, Mingda; Mango, Jack; Lasner, Zack D; Doyle, John M
We report an optical dipole trap of strontium monohydroxide (SrOH) with 1400(300) trapped molecules. Through optical pumping, we access vibrational states that are proposed for improved probes of the electron’s electric dipole moment (eEDM) and ultralight dark matter (UDM). For each of these states, the lifetime of trapped molecules is measured and found to be consistent with spontaneous radiative decay and blackbody excitation limits, making this platform viable for these eEDM and UDM searches.
</description>
<pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165283</guid>
<dc:date>2026-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Drosophila Fog/Cta and T48 pathways have overlapping and distinct contributions to mesoderm invagination</title>
<link>https://hdl.handle.net/1721.1/165282</link>
<description>Drosophila Fog/Cta and T48 pathways have overlapping and distinct contributions to mesoderm invagination
Horo, Uzuki; Clarke, D Nathaniel; Martin, Adam C
The regulation of the cytoskeleton by multiple signaling pathways, sometimes in parallel, is a common principle of morphogenesis. A classic example of regulation by parallel pathways is Drosophila gastrulation, where the inputs from the Folded gastrulation (Fog)/Concertina (Cta) and the T48 pathways induce apical constriction and mesoderm invagination. Whether there are distinct roles for these separate pathways in regulating the complex spatial and temporal patterns of cytoskeletal activity that accompany early embryo development is still poorly understood. We investigated the roles of the Fog/Cta and T48 pathways and found that, by themselves, the Cta and T48 pathways both promote timely mesoderm invagination and apical myosin II accumulation, with Cta being required for timely cell shape change ahead of mitotic cell division. We also identified distinct functions of T48 and Cta in regulating cellularization and the uniformity of the apical myosin II network, respectively. Our results demonstrate that both redundant and distinct functions for the Fog/Cta and T48 pathways exist.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165282</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tools for live-cell imaging of cytoskeletal and nuclear behavior in the unconventional yeast, Aureobasidium pullulans</title>
<link>https://hdl.handle.net/1721.1/165281</link>
<description>Tools for live-cell imaging of cytoskeletal and nuclear behavior in the unconventional yeast, Aureobasidium pullulans
Petrucco, Claudia A; Crocker, Alex W; D’Alessandro, Alec; Medina, Edgar M; Gorman, Olivia; McNeill, Jessica; Gladfelter, Amy S; Lew, Daniel J
Aureobasidium pullulans is a ubiquitous fungus with a wide variety of morphologies and growth modes including “typical” single-budding yeast, and interestingly, larger multinucleate yeast that can make multiple buds in a single cell cycle. The study of A. pullulans promises to uncover novel cell biology, but currently tools are lacking to achieve this goal. Here, we describe initial components of a cell biology toolkit for A. pullulans, which is used to express and image fluorescent probes for nuclei as well as components of the cytoskeleton. These tools allowed live-cell imaging of the multinucleate and multibudding cycles, revealing highly synchronous mitoses in multinucleate yeast that occur in a semiopen manner with an intact but permeable nuclear envelope. These findings open the door to using this ubiquitous polyextremotolerant fungus as a model for evolutionary cell biology.
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165281</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tracking the Spatio‐Temporal Evolution of Foreshocks Preceding the Mw 6.1 2009 L’Aquila Earthquake</title>
<link>https://hdl.handle.net/1721.1/165280</link>
<description>Tracking the Spatio‐Temporal Evolution of Foreshocks Preceding the Mw 6.1 2009 L’Aquila Earthquake
Cabrera, Leoncio; Poli, Piero; Frank, William B
How faulting processes lead to a large earthquake is a fundamental question in seismology. To better constrain this pre‐seismic stage, we create a dense seismic catalog via template matching to analyze the precursory phase of the Mw 6.1 L’Aquila earthquake that occurred in central Italy in 2009. We estimate several physical parameters in time, such as the coefficient of variation, the seismic moment release, the effective stress drop, and analyze spatio‐temporal patterns to study the evolution of the sequence and the earthquake interactions. We observe that the precursory phase experiences multiple accelerations of the seismicity rate that we divide into two main sequences with different signatures and features: the first part exhibits weak earthquake interactions, quasi‐continuous moment release, slow spatial migration patterns, and a lower effective stress drop, pointing to aseismic processes. The second sequence exhibits strong temporal clustering, fast seismicity expansion, and a larger effective stress drop typical of a stress transfer process. We interpret the differences in seismicity behaviors between the two sequences as distinct physical mechanisms that are controlled by different physical properties of the fault system. We conclude that the L’Aquila earthquake is preceded by a complex preparation, made up of different physical processes over different time scales on faults with different physical properties.
</description>
<pubDate>Mon, 07 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165280</guid>
<dc:date>2022-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>A High‐Resolution Atlas of the Eastern Tropical Pacific Oxygen Deficient Zones</title>
<link>https://hdl.handle.net/1721.1/165279</link>
<description>A High‐Resolution Atlas of the Eastern Tropical Pacific Oxygen Deficient Zones
Kwiecinski, Jarek V; Babbin, Andrew R
Oxygen deficient zones (ODZs) are important biogeochemical provinces of the global oceans wherein standing dissolved oxygen concentrations decrease to nanomolar levels. Despite their confinement, these regions are disproportionally important to the ocean's role in modulating Earth's climate through the interactions between the marine nitrogen cycle and that of carbon. Moreover, the spatial domain of low oxygen regions of the ocean is predicted to change as a consequence of ocean warming, increased stratification, and changes in circulation and productivity. However, the expanse of the modern ODZs is poorly resolved due to a dearth of direct sampling compounded with errors that arise in the processing and gridding of the sparse measurements that do exist. Here, we take a novel approach to map the horizontal and vertical extent of the two major ODZs of the eastern tropical Pacific via analysis of meter-scale resolution electrode sensors from both ship casts and Argo profiles, rather than from discretized bottle measurements. The resulting three-dimensional data product is based on a compendium of nearly 15 million measurements taken across three decades and provides the precise locations of low oxygen water, elucidating the ODZs' three-dimensional structures. It can be utilized by researchers to validate models, plan cruise occupations, and as a comparison for future change. Calculations made with this high-resolution atlas also provide the volumes, layers of maximal areal extent, and other descriptive statistics for both Pacific ODZs. Finally, the atlas reveals fine-scale features of oxygenated water mass intrusions and regional differences across these anoxic zones.
</description>
<pubDate>Mon, 27 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165279</guid>
<dc:date>2021-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>Orbital‐ and Millennial‐Scale Variability in Northwest African Dust Emissions Over the Past 67,000 years</title>
<link>https://hdl.handle.net/1721.1/165278</link>
<description>Orbital‐ and Millennial‐Scale Variability in Northwest African Dust Emissions Over the Past 67,000 years
Kinsley, Christopher W; Bradtmiller, Louisa I; McGee, David; Galgay, Michael; Stuut, Jan‐Berend; Tjallingii, Rik; Winckler, Gisela; deMenocal, Peter B
Reconstructions of aeolian dust flux to West African margin sediments can be used to explore changing atmospheric circulation and hydroclimate over North Africa on millennial to orbital timescales. Here, we extend West African margin dust flux records back to 37 ka in a transect of sites from 19° to 27°N, and back to 67 ka at Ocean Drilling Program (ODP) Hole 658C, in order to explore the interplay of orbital and high-latitude forcings on North African climate and make quantitative estimates of dust flux during the core of the Last Glacial Maximum (LGM). The ODP 658C record shows a Green Sahara interval from 60 to 50 ka during a time of high Northern Hemisphere summer insolation, with dust fluxes similar to levels during the early Holocene African Humid Period, and an abrupt peak in flux during Heinrich event 5a (H5a). Dust fluxes increase from 50 to 35 ka while the high-latitude Northern Hemisphere cools, with peaks in dust flux associated with North Atlantic cool events. From 35 ka through the LGM dust deposition decreases in all cores, and little response is observed to low-latitude insolation changes. Dust fluxes at sites from 21° to 27°N were near late Holocene levels during the LGM time slice, suggesting a more muted LGM response than observed from mid-latitude dust sources. Records along the northwest African margin suggest important differences in wind responses during different stadials, with maximum dust flux anomalies centered south of 20°N during H1 and north of 20°N during the Younger Dryas.
</description>
<pubDate>Tue, 07 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165278</guid>
<dc:date>2021-12-07T00:00:00Z</dc:date>
</item>
<item>
<title>Methyl Chloroform Continues to Constrain the Hydroxyl (OH) Variability in the Troposphere</title>
<link>https://hdl.handle.net/1721.1/165277</link>
<description>Methyl Chloroform Continues to Constrain the Hydroxyl (OH) Variability in the Troposphere
Patra, PK; Krol, MC; Prinn, RG; Takigawa, M; Mühle, J; Montzka, SA; Lal, S; Yamashita, Y; Naus, S; Chandra, N; Weiss, RF; Krummel, PB; Fraser, PJ; O'Doherty, S; Elkins, JW
Trends and variability in tropospheric hydroxyl (OH) radicals influence budgets of many greenhouse gases, air pollutant species, and ozone depleting substances. Estimations of tropospheric OH trends and variability based on budget analysis of methyl chloroform (CH3CCl3) and process-based chemistry transport models often produce conflicting results. Here we use a previously tested transport model to simulate atmospheric CH3CCl3 for the period 1985–2018. Based on mismatches between model output and observations, we derive consistent anomalies in the inverse lifetime of CH3CCl3 (KG) using measurements from two independent observational networks (National Oceanic and Atmospheric Administration and Advanced Global Atmospheric Gases Experiment). Our method allows a separation between “physical” (transport, temperature) and “chemical” (i.e., abundance) influences on OH + CH3CCl3 reaction rate in the atmosphere. Small increases in KG due to “physical” influences are mostly driven by increases in the temperature-dependent reaction between OH and CH3CCl3 and resulted in a smoothly varying increase of 0.80% decade−1. Chemical effects on KG, linked to global changes in OH sources and sinks, show larger year-to-year variations (∼2%–3%), and have a negative correlation with the El Niño Southern Oscillation. A significant positive trend in KG can be derived after 2001, but it persists only through 2015 and only if we assume that CH3CCl3 emissions decayed more slowly over time than our best estimate suggests. If global CH3CCl3 emissions dropped below 3 Gg year−1 after 2015, recent CH3CCl3 measurements indicate that the 2015–2018 loss rate of CH3CCl3 due to reaction with OH is comparable to its value 2 decades ago.
</description>
<pubDate>Sat, 27 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165277</guid>
<dc:date>2021-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>InterCarb: A Community Effort to Improve Interlaboratory Standardization of the Carbonate Clumped Isotope Thermometer Using Carbonate Standards</title>
<link>https://hdl.handle.net/1721.1/165276</link>
<description>InterCarb: A Community Effort to Improve Interlaboratory Standardization of the Carbonate Clumped Isotope Thermometer Using Carbonate Standards
Increased use and improved methodology of carbonate clumped isotope thermometry has greatly enhanced our ability to interrogate a suite of Earth-system processes. However, interlaboratory discrepancies in quantifying carbonate clumped isotope (Δ47) measurements persist, and their specific sources remain unclear. To address interlaboratory differences, we first provide consensus values from the clumped isotope community for four carbonate standards relative to heated and equilibrated gases with 1,819 individual analyses from 10 laboratories. Then we analyzed the four carbonate standards along with three additional standards, spanning a broad range of δ47 and Δ47 values, for a total of 5,329 analyses on 25 individual mass spectrometers from 22 different laboratories. Treating three of the materials as known standards and the other four as unknowns, we find that the use of carbonate reference materials is a robust method for standardization that yields interlaboratory discrepancies entirely consistent with intralaboratory analytical uncertainties. Carbonate reference materials, along with measurement and data processing practices described herein, provide the carbonate clumped isotope community with a robust approach to achieve interlaboratory agreement as we continue to use and improve this powerful geochemical tool. We propose that carbonate clumped isotope data normalized to the carbonate reference materials described in this publication should be reported as Δ47 (I-CDES) values for Intercarb-Carbon Dioxide Equilibrium Scale.
</description>
<pubDate>Tue, 13 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165276</guid>
<dc:date>2021-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>Age Set versus Kin: Culture and Financial Ties in east Africa</title>
<link>https://hdl.handle.net/1721.1/165274</link>
<description>Age Set versus Kin: Culture and Financial Ties in east Africa
Moscona, Jacob; Seck, Awa Ambra
We study how social organization shapes patterns of economic interaction and the effects of national policy, focusing on the distinction between age-based and kin-based groups in sub-Saharan Africa. Motivated by ethnographic accounts suggesting that this distinction affects redistribution, we analyze a cash transfer program in Kenya and find that in age-based societies there are consumption spillovers within the age cohort, but not the extended family, while in kin-based societies we find the opposite. Next, we document that social structure shapes the impact of policy by showing that Uganda’s pension program had positive effects on child nutrition only in kin-based societies. (JEL H23, I12, I38, J13, O15, Z13).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165274</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Social Media and Mental Health</title>
<link>https://hdl.handle.net/1721.1/165273</link>
<description>Social Media and Mental Health
Braghieri, Luca; Levy, Ro’ee; Makarin, Alexey
We provide quasi-experimental estimates of the impact of social media on mental health by leveraging a unique natural experiment: the staggered introduction of Facebook across US colleges. Our analysis couples data on student mental health around the years of Facebook’s expansion with a generalized difference-in-differences empirical strategy. We find that the rollout of Facebook at a college had a negative impact on student mental health. It also increased the likelihood with which students reported experiencing impairments to academic performance due to poor mental health. Additional evidence on mechanisms suggests the results are due to Facebook fostering unfavorable social comparisons. (JEL D91, I12, I23, L82)
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165273</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Is Journalistic Truth Dead? Measuring How Informed Voters Are about Political News</title>
<link>https://hdl.handle.net/1721.1/165272</link>
<description>Is Journalistic Truth Dead? Measuring How Informed Voters Are about Political News
Angelucci, Charles; Prat, Andrea
To investigate general patterns in news information in the United States, we combine a protocol for identifying major political news stories, 11 monthly surveys with 15,000 participants, and a model of news discernment. When confronted with a true and a fake news story, 47 percent of subjects confidently choose the true story, 3 percent confidently choose the fake story, and the remaining half are uncertain. Socioeconomic differences are associated with large variations in the probability of selecting the true news story. Partisan congruence between an individual and a news story matters, but its impact is up to an order of magnitude smaller. (JEL D72, D83, L82).
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165272</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Merchant Towns Shaped Parliaments: From the Norman Conquest of England to the Great Reform Act</title>
<link>https://hdl.handle.net/1721.1/165271</link>
<description>How Merchant Towns Shaped Parliaments: From the Norman Conquest of England to the Great Reform Act
Angelucci, Charles; Meraglia, Simone; Voigtländer, Nico
We study the emergence of urban self-governance in the late medieval period. We focus on England after the Norman Conquest of 1066, building a novel comprehensive dataset of 554 medieval towns. During the Commercial Revolution (twelfth to thirteenth centuries), many merchant towns obtained Farm Grants: the right of self-governed tax collection and law enforcement. Self-governance, in turn, was a stepping stone for parliamentary representation: Farm Grant towns were much more likely to be summoned directly to the medieval English Parliament than otherwise similar towns. We also show that self-governed towns strengthened the role of Parliament and shaped national institutions over the subsequent centuries. (JEL D02, D72, D73, K11, K34, N43, N93).
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165271</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shock response of periodic interpenetrating phase composites</title>
<link>https://hdl.handle.net/1721.1/165270</link>
<description>Shock response of periodic interpenetrating phase composites
Taylor, Spencer V; Gonzales, Manny; Cordero, Zachary C
In this work, we examine the macroscale and fine-scale shock responses of interpenetrating phase composites comprising a body-centered cubic steel lattice embedded in an aluminum matrix. Through plate impact simulations, we find that the complex mesoscale geometry reduces shock velocity relative to monolithic constituents, slowing and spreading the shock front via reflection and redirection. The periodicity of the mesoscale composite geometry is also reflected by quasi-steady-wave behavior. On the fine scale, we can predict several aspects of the oscillatory pressure and longitudinal velocity responses by tracking internal wave reflections. We also observe that the post-shock maximum temperature increases with structural openness and that temperature hotspots form at interfaces parallel to the shock direction. The findings in this work provide novel structure–property linkages in the dynamic response of architectured interpenetrating phase composites.
</description>
<pubDate>Tue, 22 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165270</guid>
<dc:date>2022-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>ℏ² expansion of the transmission probability through a barrier</title>
<link>https://hdl.handle.net/1721.1/165269</link>
<description>ℏ² expansion of the transmission probability through a barrier
Pollak, Eli; Cao, Jianshu
Ninety years ago, Wigner derived the leading order expansion term in ℏ² for the tunneling rate through a symmetric barrier. His derivation included two contributions: one came from the parabolic barrier, but a second term involved the fourth-order derivative of the potential at the barrier top. He left us with a challenge, which is answered in this paper, to derive the same but for an asymmetric barrier. A crucial element of the derivation is obtaining the ℏ² expansion term for the projection operator, which appears in the flux-side expression for the rate. It is also reassuring that an analytical calculation of semiclassical transition state theory (TST) reproduces the anharmonic corrections to the leading order of ℏ². The efficacy of the resulting expression is demonstrated for an Eckart barrier, leading to the conclusion that, especially when considering heavy atom tunneling, one should use the expansion derived in this paper rather than the parabolic barrier approximation. The rate expression derived here reveals how the classical TST limit is approached as a function of ℏ and, thus, provides critical insights to understand the validity of popular approximate theories, such as the classical Wigner, centroid molecular dynamics, and ring polymer molecular dynamics methods.
</description>
<pubDate>Fri, 19 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165269</guid>
<dc:date>2022-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear exceptional-point lasing with ab initio Maxwell–Bloch theory</title>
<link>https://hdl.handle.net/1721.1/165268</link>
<description>Nonlinear exceptional-point lasing with ab initio Maxwell–Bloch theory
Benzaouia, Mohammed; Stone, AD; Johnson, Steven G
We present a general analysis for finding and characterizing nonlinear exceptional point (EP) lasers above threshold using steady-state ab initio Maxwell–Bloch equations. For a system of coupled slabs, we show that a nonlinear EP is obtained for a given ratio between the external pumps in each resonator and that it is associated with a kink in the output power and lasing frequency, confirming coupled-mode theory predictions. Through numerical linear stability analysis, we confirm that the EP laser can be stable for a large enough inversion relaxation rate. We further show that the EP laser can be characterized by scattering a weak signal off the lasing cavity so that the scattering frequency spectrum exhibits a quartic divergence. Our approach can be applied to arbitrary scatterers with multi-level gain media.
</description>
<pubDate>Thu, 15 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165268</guid>
<dc:date>2022-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>The impact of semiconductor surface states on vacuum field emission</title>
<link>https://hdl.handle.net/1721.1/165267</link>
<description>The impact of semiconductor surface states on vacuum field emission
Kim, Taeyoung; Joishi, Chandan; Shih, Pao-Chuan; Palacios, Tomás; Rajan, Siddharth
This work presents a theoretical analysis of the impact of surface states on vacuum field emission currents in semiconductors. In wide and ultra-wide bandgap semiconductors such as GaN and AlGaN, low electron affinity has been proposed as a benefit for field emission into vacuum. However, in these materials, the Fermi level at the surface is pinned well below the conduction band, and the surface depletion barriers due to this Fermi level pinning can be comparable to or higher than the electron affinity. Therefore, analysis of field emission requires consideration of not only the vacuum potential barrier set by the electron affinity but also the depletion region near the semiconductor surface. In this paper, we develop analytical models to predict field emission currents with careful consideration of the impact of surface states on the energy band alignment. The results are used to provide guidelines for the design of field emitters that could benefit from the low electron affinity of semiconductors such as Al(Ga)N.
</description>
<pubDate>Wed, 26 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165267</guid>
<dc:date>2022-10-26T00:00:00Z</dc:date>
</item>
<item>
<title>A robust computational approach to Lees–Dorodnitsyn laminar hypersonic boundary layers with temperature-dependent properties</title>
<link>https://hdl.handle.net/1721.1/165266</link>
<description>A robust computational approach to Lees–Dorodnitsyn laminar hypersonic boundary layers with temperature-dependent properties
Onyeador, CN; Hodge, A; Harris, W
The Lees–Dorodnitsyn (L–D) boundary layer equations for two-dimensional, non-reactive, laminar, hypersonic boundary layer flows, together with an assumption of an isentropic external flow, are examined. They are applied to various geometries for which the Thin Shear Layer assumptions are valid. This study expands on previous work to develop a novel and robust methodology for computing high-temperature hypersonic flows using a uniform and compact computational stencil implemented through a computational tool, the Bulk-property Boundary Layer (BuBL) solver. In particular, we explore the impact of treating high-temperature effects present in hypersonic flows, namely, treating air as a thermally perfect gas with temperature-variable properties. The ability to solve these flows computationally using second-order finite difference methods is evaluated, as are various models for viscosity, Prandtl number, and specific heat. The methodology for solving the external flow properties in the transformed L–D computational domain is also discussed. It is shown that the L–D equations evaluated using the “box” computational stencil are an effective means for evaluating laminar hypersonic boundary layer flows. Solutions for displacement and momentum thicknesses, skin friction, and Stanton number variations are obtained as a function of Prandtl number, specific heat model, and Mach number. Verification and validation measures are performed for the code. Excellent agreement is found in comparisons between BuBL and other computational fluid dynamics and experimental results, thus demonstrating the utility of the proposed methodology.
</description>
<pubDate>Mon, 03 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165266</guid>
<dc:date>2022-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Plasmas for in situ resource utilization on Mars: Fuels, life support, and agriculture</title>
<link>https://hdl.handle.net/1721.1/165265</link>
<description>Plasmas for in situ resource utilization on Mars: Fuels, life support, and agriculture
Guerra, V; Silva, T; Pinhão, N; Guaitella, O; Guerra-Garcia, C; Peeters, FJJ; Tsampas, MN; van de Sanden, MCM
This work discusses the potential of combining non-thermal plasmas and conducting membranes for in situ resource utilization (ISRU) on Mars. By converting different molecules directly from the Martian atmosphere, plasmas can create the necessary feedstock and base chemicals for processing fuels, breathing oxygen, building materials, and fertilizers. Different plasma sources operate according to different principles and are associated with distinct dominant physicochemical mechanisms. This diversity allows exploring different energy transfer pathways leading to CO2 dissociation, including direct electron-impact processes, plasma chemistry mediated by vibrationally and electronically excited states, and thermally driven dissociation. The coupling of plasmas with membranes is still a technology under development, but a synergistic effect between plasma decomposition and oxygen permeation across conducting membranes is anticipated. The emerging technology is versatile, scalable, and has the potential to deliver high rates of production of molecules per kilogram of instrumentation sent to space. Therefore, it will likely play a very relevant role in future ISRU strategies.
</description>
<pubDate>Tue, 16 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165265</guid>
<dc:date>2022-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the gap between H- and J-aggregates: Classification and supramolecular tunability for excitonic band structures in two-dimensional molecular aggregates</title>
<link>https://hdl.handle.net/1721.1/165264</link>
<description>Bridging the gap between H- and J-aggregates: Classification and supramolecular tunability for excitonic band structures in two-dimensional molecular aggregates
Deshmukh, Arundhati P; Geue, Niklas; Bradbury, Nadine C; Atallah, Timothy L; Chuang, Chern; Pengshung, Monica; Cao, Jianshu; Sletten, Ellen M; Neuhauser, Daniel; Caram, Justin R
Molecular aggregates with long-range excitonic couplings have drastically different photophysical properties compared to their monomer counterparts. From Kasha's model for one-dimensional systems, positive or negative excitonic couplings lead to blue- or red-shifted optical spectra with respect to the monomers, labeled H- and J-aggregates, respectively. The overall excitonic couplings in higher dimensional systems are much more complicated and cannot be simply classified from their spectral shifts alone. Here, we provide a unified classification for extended 2D aggregates using temperature-dependent peak shifts, thermal broadening, and quantum yields. We discuss the examples of six 2D aggregates with J-like absorption spectra but quite drastic changes in quantum yields and superradiance. We find that the origin of the differences is, in fact, a different excitonic band structure where the bright state is lower in energy than the monomer but still away from the band edge. We call this an “I-aggregate.” Our results provide a description of the complex excitonic behaviors that cannot be explained solely by Kasha's model. Furthermore, such properties can be tuned with the packing geometries within the aggregates, providing supramolecular pathways for controlling them. This will allow for precise optimizations of aggregate properties in their applications across the areas of optoelectronics, photonics, excitonic energy transfer, and shortwave infrared technologies.
</description>
<pubDate>Thu, 23 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165264</guid>
<dc:date>2022-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Design and construction of a compact, high-repetition-rate ultrafast electron diffraction instrument</title>
<link>https://hdl.handle.net/1721.1/165263</link>
<description>Design and construction of a compact, high-repetition-rate ultrafast electron diffraction instrument
Freelon, Byron; Rohwer, Timm; Zong, Alfred; Kogar, Anshul; Zhou, Hengyun; Wong, Liang Jie; Ergeçen, Emre; Gedik, Nuh
We present the design and performance of a compact ultrafast electron diffraction instrument. The diffractometer provides a means of examining time-resolved ultrafast dynamical properties of solids. The system’s utilization is discussed in terms of instrument parameters and diffraction data from selected condensed matter samples. The diffractometer’s performance is highlighted in terms of detection sensitivity, instrumental temporal resolution, and the electron beam transverse coherence length. Following specific details of the construction, we present a practical discussion of parameters such as repetition rate and provide advice on general construction approaches for laboratory-based, keV ultrafast electron diffractometers. In addition, design guidance for constructing a compact electron gun source that is well-suited for studying diffraction from hard condensed matter is given. A unique data acquisition scheme, utilizing high laser repetition rates, is presented.
</description>
<pubDate>Tue, 30 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165263</guid>
<dc:date>2023-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>EXPANSE: A time-of-flight EXPanded Angle Neutron Spin Echo spectrometer at the Second Target Station of the Spallation Neutron Source</title>
<link>https://hdl.handle.net/1721.1/165261</link>
<description>EXPANSE: A time-of-flight EXPanded Angle Neutron Spin Echo spectrometer at the Second Target Station of the Spallation Neutron Source
Do, Changwoo; Ashkar, Rana; Boone, Cristina; Chen, Wei-Ren; Ehlers, Georg; Falus, Peter; Faraone, Antonio; Gardner, Jason S; Graves, Van; Huegle, Thomas; Katsumata, Reika; Kent, Darian; Lin, Jiao YY; McHargue, Bill; Olsen, Bradley; Wang, Yangyang; Wilson, Danielle; Z, Y
EXPANSE, an EXPanded Angle Neutron Spin Echo instrument, has been proposed and selected as one of the first suite of instruments to be built at the Second Target Station of the Spallation Neutron Source at the Oak Ridge National Laboratory. This instrument is designed to address scientific problems that involve high-energy resolution (neV–μeV) of dynamic processes in a wide range of materials. The wide-angle detector banks of EXPANSE provide coverage of nearly two orders of magnitude in scattering wavenumbers, and the wide wavelength band affords approximately four orders of magnitude in Fourier times. This instrument will offer unique capabilities that are not available in the currently existing neutron scattering instruments in the United States. Specifically, EXPANSE will enable direct measurements of slow dynamics in the time domain over wide Q-ranges simultaneously and will also enable time-resolved spectroscopic studies. The instrument is expected to contribute to a diverse range of science areas, including soft matter, polymers, biological materials, liquids and glasses, energy materials, unconventional magnets, and quantum materials.
</description>
<pubDate>Mon, 11 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165261</guid>
<dc:date>2022-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal diffusivity in ion-irradiated single-crystal iron, chromium, vanadium, and tungsten measured using transient grating spectroscopy</title>
<link>https://hdl.handle.net/1721.1/165260</link>
<description>Thermal diffusivity in ion-irradiated single-crystal iron, chromium, vanadium, and tungsten measured using transient grating spectroscopy
Wylie, APC; Woller, KB; Al Dajani, SAA; Dacus, BR; Pickering, EJ; Preuss, M; Short, MP
The acceleration of radiation science enabled by ion-irradiation experiments has, until recently, not been matched by rapid post-irradiation examination techniques. This paper reports the results of transient grating spectroscopy—a rapid, non-destructive, in situ photothermal surface technique—on ion-irradiated single crystals of iron, chromium, vanadium, and tungsten at room temperature. Thermal diffusivity was used to track damage development throughout irradiation, with 5 MeV self-ion irradiated iron, chromium, and vanadium showing little to no change up to damage levels of the order of 1 dpa. 5 MeV Si3+-ion irradiated tungsten exhibits a reduction of thermal diffusivity from 0.78(7) to 0.29(2) cm2 s−1 with logarithmically increasing dose over a similar damage range. A comparison with past and present transient grating spectroscopy thermal diffusivity values in the literature shows good agreement; radiation-induced change can be clearly distinguished from differences between mono- and poly-crystalline tungsten.
</description>
<pubDate>Fri, 22 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165260</guid>
<dc:date>2022-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Thickness and temperature dependence of the atomic-scale structure of SrRuO3 thin films</title>
<link>https://hdl.handle.net/1721.1/165259</link>
<description>Thickness and temperature dependence of the atomic-scale structure of SrRuO3 thin films
Zhang, Xuanyi; Penn, Aubrey N; Wysocki, Lena; Zhang, Zhan; van Loosdrecht, Paul HM; Kornblum, Lior; LeBeau, James M; Lindfors-Vrejoiu, Ionela; Kumah, Divine P
The temperature-dependent layer-resolved structure of 3 to 44 unit cell thick SrRuO3 (SRO) films grown on Nb-doped SrTiO3 substrates is investigated using a combination of high-resolution synchrotron x-ray diffraction and high-resolution electron microscopy to understand the role that structural distortions play in suppressing ferromagnetism in ultra-thin SRO films. The oxygen octahedral tilts and rotations and Sr displacements characteristic of the bulk orthorhombic phase are found to be strongly dependent on temperature, the film thickness, and the distance away from the film–substrate interface. For thicknesses, t, above the critical thickness for ferromagnetism (t &gt; 3 uc), the orthorhombic distortions decrease with increasing temperature above TC. Below TC, the structure of the films remains constant due to the magneto-structural coupling observed in bulk SRO. The orthorhombic distortions are found to be suppressed in the 2–3 interfacial layers due to structural coupling with the SrTiO3 substrate and correlate with the critical thickness for ferromagnetism in uncapped SRO films.
</description>
<pubDate>Wed, 11 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165259</guid>
<dc:date>2022-05-11T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperbolic phonon polaritons with positive and negative phase velocities in suspended α-MoO3</title>
<link>https://hdl.handle.net/1721.1/165258</link>
<description>Hyperbolic phonon polaritons with positive and negative phase velocities in suspended α-MoO3
Shen, Jialiang; Zheng, Zhiren; Dinh, Thao; Wang, Chuanyu; Chen, Mingyuan; Chen, Pengyu; Ma, Qiong; Jarillo-Herrero, Pablo; Kang, Lixing; Dai, Siyuan
Sample suspension is a valuable method to improve the mechanical, thermal, electronic, and optical properties of low-dimensional materials. In terms of confined light–matter waves (polaritons), sample suspension can elongate the wavelength of polaritons with a positive phase velocity. Previous work demonstrates a wavelength elongation of ∼10% for hyperbolic phonon polaritons (HPPs) in uniaxial crystals of hexagonal boron nitride (hBN). In this work, we report the alteration of HPPs in biaxial α-phase molybdenum trioxide (α-MoO3) by sample suspension. Our combined infrared nano-imaging experiments and electromagnetic theory reveal a wavelength elongation &gt;60% and a propagation length increase &gt;140%, due to the simultaneous wavelength elongation and dissipation elimination in the suspended specimen. We have also examined HPPs in α-MoO3 with a negative phase velocity. The sample suspension shortens the HPP wavelength and simultaneously reduces the dissipation due to the unique permittivity tensor. The HPPs with improved figures of merit in the suspended specimen may be developed for nano-polaritonic circuits, biochemical sensing, emission engineering, and energy transfer.
</description>
<pubDate>Mon, 14 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165258</guid>
<dc:date>2022-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Hole opening from growing interfacial voids: A possible mechanism of solid state dewetting</title>
<link>https://hdl.handle.net/1721.1/165257</link>
<description>Hole opening from growing interfacial voids: A possible mechanism of solid state dewetting
Curiotto, Stefano; Chame, Anna; Müller, Pierre; Thompson, Carl V; Pierre-Louis, Olivier
Vacancies at interfaces between a film and a substrate can affect material properties and could play a role in solid state dewetting. Using kinetic Monte Carlo simulations, we show that interfacial mono-vacancies diffuse and coalesce to form vacancy clusters and voids. The film/substrate excess energy ES, which is related to the apparent contact angle, controls the mechanisms of coalescence. Depending on ES, voids emerging at the film surface form a hole that can be filled by the film or can lead to dewetting of the film from the substrate.
</description>
<pubDate>Thu, 03 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165257</guid>
<dc:date>2022-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>2D materials-enabled optical modulators: From visible to terahertz spectral range</title>
<link>https://hdl.handle.net/1721.1/165256</link>
<description>2D materials-enabled optical modulators: From visible to terahertz spectral range
Gan, Xuetao; Englund, Dirk; Van Thourhout, Dries; Zhao, Jianlin
Two-dimensional (2D) materials with layered structures have a variety of exceptional electronic and optical attributes for potentially developing basic functions of lightwave technology, from light emission to modulation and sensing. Here, we present state-of-the-art 2D materials-enabled optical intensity modulators according to their operation spectral ranges, which are mainly determined by the optical bandgaps of the 2D materials. Leveraging rich electronic structures from different 2D materials and the unique light–matter interactions they govern, the working mechanisms and device architectures of the enabled modulators at specific wavelength ranges are discussed. For instance, the tunable excitonic effect in monolayer transition metal dichalcogenides allows the modulation of visible light. Electro-absorptive and electro-refractive graphene modulators could be operated in the telecom band, relying on the linear dispersion of their massless Dirac fermions. The bendable electronic band edge of the narrow bandgap in few-layer black phosphorus promises the modulation of mid-infrared light via the quantum-confined Franz–Keldysh or Burstein–Moss shift effect. Electrically and magnetically tunable optical conductivity in graphene also supports the realization of terahertz modulators. While these modulators were demonstrated as proof-of-concept devices, some of them have great potential for future realistic applications, as discussed in terms of their wavelength coverage, modulation depth, insertion loss, dynamic response speed, etc. Specifically, benefiting from the well-developed technologies of photonic chips and optical fibers in telecom and datacom, 2D materials-based modulators integrated on these photonic structures are expected to find applications in fiber and chip optical communications. The free-space mid-infrared and terahertz modulators based on 2D materials can be expected to find applications in chemical bond spectroscopy, free-space communications, and environment/health sensing.
</description>
<pubDate>Tue, 19 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165256</guid>
<dc:date>2022-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>Current-induced switching of a ferromagnetic Weyl semimetal Co2MnGa</title>
<link>https://hdl.handle.net/1721.1/165255</link>
<description>Current-induced switching of a ferromagnetic Weyl semimetal Co2MnGa
Han, Jiahao; McGoldrick, Brooke C; Chou, Chung-Tao; Safi, Taqiyyah S; Hou, Justin T; Liu, Luqiao
The introduction of magnetic moments to topological materials provides rich opportunities for studying the interplay among magnetism, electron correlation, and topological orders, which can give rise to exotic magnetoelectric effects and allow one to manipulate the topological band structure via spintronic approaches. Here, we report current-induced switching in a thin film of ferromagnetic Weyl semimetal Co2MnGa with perpendicular magnetic anisotropy, via the spin–orbit torque from a neighboring heavy metal Pt. The reversal of the large anomalous Hall signal indicates an effective electrical control of the Berry curvatures associated with the Weyl nodes in the topological band structure. The efficiency of the spin–orbit torque switching is calibrated to be comparable to that in conventional ferromagnets. Given the compatibility of Co2MnGa films with various spintronic devices and techniques, our work represents an essential step toward memory and computing devices built by topological ferromagnetic materials.
</description>
<pubDate>Wed, 24 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165255</guid>
<dc:date>2021-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Super-resolved second harmonic generation imaging by coherent image scanning microscopy</title>
<link>https://hdl.handle.net/1721.1/165254</link>
<description>Super-resolved second harmonic generation imaging by coherent image scanning microscopy
Raanan, Dekel; Song, Man Suk; Tisdale, William A; Oron, Dan
We extend image scanning microscopy to second harmonic generation (SHG) by extracting the complex field amplitude of the second-harmonic beam. While the theory behind coherent image scanning microscopy (ISM) is known, an experimental demonstration had not yet been established. The main reason is that the naive intensity-reassignment procedure cannot be used for coherent scattering, as the point spread function is now defined for the field amplitude rather than for the intensity. We use an inline interferometer to demonstrate super-resolved phase-sensitive SHG microscopy by applying the ISM reassignment machinery to the resolved field. This scheme can be easily extended to third harmonic generation and stimulated Raman microscopy schemes.
</description>
<pubDate>Fri, 18 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165254</guid>
<dc:date>2022-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Sub-50 cm/s surface recombination velocity in InGaAsP/InP ridges</title>
<link>https://hdl.handle.net/1721.1/165253</link>
<description>Sub-50 cm/s surface recombination velocity in InGaAsP/InP ridges
Andrade, Nicolas M; Hooten, Sean; Kim, Yunjo; Kim, Jeehwan; Yablonovitch, Eli; Wu, Ming C
The III–V InP/InGaAsP/InGaAs material family is important for photonic devices due to its optical emission and absorption in the 1.55 and 1.3 μm telecommunication bands for optical interconnects. However, InGaAsP/InGaAs generally suffer from a relatively high surface recombination velocity compared to Si [Das et al., in 2020 47th IEEE Photovoltaic Specialists Conference (PVSC) (IEEE, Calgary, AB, 2020), pp. 1167–1170] and InP [Joyce et al., Nano Lett. 12, 5325–5330 (2012)], which reduces the efficiency and can increase the noise in nanophotonic devices. Here, we demonstrate an efficient method to passivate the surface using a combination of sulfur-saturated ammonium sulfide and atomic layer deposition. After annealing, the surface passivation led to a surface recombination velocity as low as 45 cm/s, corresponding to a &gt;180-fold increase in the photoluminescence of a nanoscale light-emitting device with 200 nm width.
</description>
<pubDate>Mon, 08 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165253</guid>
<dc:date>2021-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Emerging GaN technologies for power, RF, digital, and quantum computing applications: Recent advances and prospects</title>
<link>https://hdl.handle.net/1721.1/165252</link>
<description>Emerging GaN technologies for power, RF, digital, and quantum computing applications: Recent advances and prospects
Hoo Teo, Koon; Zhang, Yuhao; Chowdhury, Nadim; Rakheja, Shaloo; Ma, Rui; Xie, Qingyun; Yagyu, Eiji; Yamanaka, Koji; Li, Kexin; Palacios, Tomás
GaN technology is not only gaining traction in power and RF electronics but is also rapidly expanding into other application areas including digital and quantum computing electronics. This paper provides a glimpse of future GaN device technologies and advanced modeling approaches that can push the boundaries of these applications in terms of performance and reliability. While GaN power devices have recently been commercialized in the 15–900 V classes, new GaN devices are greatly desirable to explore both higher-voltage and ultra-low-voltage power applications. Moving into the RF domain, ultra-high frequency GaN devices are being used to implement digitized power amplifier circuits, and further advances using the hardware–software co-design approach can be expected. On the horizon is the GaN CMOS technology, a key missing piece to realize the full-GaN platform with integrated digital, power, and RF electronics technologies. Although currently a challenge, high-performance p-type GaN technology will be crucial to realize high-performance GaN CMOS circuits. Due to its excellent transport characteristics and ability to generate free carriers via polarization doping, GaN is expected to be an important technology for ultra-low temperature and quantum computing electronics. Finally, given the increasing cost of hardware prototyping of new devices and circuits, the use of high-fidelity device models and data-driven modeling approaches for technology-circuit co-design are projected to be the trends of the future. In this regard, physically inspired, mathematically robust, less computationally taxing, and predictive modeling approaches are indispensable. With all these and future efforts, we envision GaN to become the next Si for electronics.
</description>
<pubDate>Fri, 29 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165252</guid>
<dc:date>2021-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Underscreening and hidden ion structures in large scale simulations of concentrated electrolytes</title>
<link>https://hdl.handle.net/1721.1/165251</link>
<description>Underscreening and hidden ion structures in large scale simulations of concentrated electrolytes
Krucker-Velasquez, Emily; Swan, James W
The electrostatic screening length predicted by Debye–Hückel theory decreases with increasing ionic strength, but recent experiments have found that the screening length can instead increase in concentrated electrolytes. This phenomenon, referred to as underscreening, is believed to result from ion–ion correlations and short-range forces such as excluded volume interactions among ions. We use Brownian Dynamics to simulate a version of the Restrictive Primitive Model for electrolytes over a wide range of ion concentrations, ionic strengths, and ion excluded volume radii for binary electrolytes. We measure the decay of the charge–charge correlation among ions in the bulk and compare it against scaling trends found experimentally and determined in certain weak coupling theories of ion–ion correlation. Moreover, we find that additional large scale ion structures emerge at high concentrations. In this regime, the frequency of oscillations computed from the charge–charge correlation function is not dominated by electrostatic interactions but rather by excluded volume interactions and with oscillation periods on the order of the ion diameter. We also find the nearest neighbor correlation of ions sharing the same charge transitions from negative at small concentrations to positive at high concentrations, representing the formation of small, like-charge ion clusters. We conclude that the increase in local charge density due to the formation of these clusters and the topological constraints of macroscopic charged surfaces can help explain the degree of underscreening observed experimentally.
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165251</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Chemistry Common Driver and Databases (QCDB) and Quantum Chemistry Engine (QCEngine): Automation and interoperability among computational chemistry programs</title>
<link>https://hdl.handle.net/1721.1/165250</link>
<description>Quantum Chemistry Common Driver and Databases (QCDB) and Quantum Chemistry Engine (QCEngine): Automation and interoperability among computational chemistry programs
Community efforts in the computational molecular sciences (CMS) are evolving toward modular, open, and interoperable interfaces that work with existing community codes to provide more functionality and composability than could be achieved with a single program. The Quantum Chemistry Common Driver and Databases (QCDB) project provides such capability through an application programming interface (API) that facilitates interoperability across multiple quantum chemistry software packages. In tandem with the Molecular Sciences Software Institute and their Quantum Chemistry Archive ecosystem, the unique functionalities of several CMS programs are integrated, including CFOUR, GAMESS, NWChem, OpenMM, Psi4, Qcore, TeraChem, and Turbomole, to provide common computational functions, i.e., energy, gradient, and Hessian computations as well as molecular properties such as atomic charges and vibrational frequency analysis. Both standard users and power users benefit from adopting these APIs as they lower the language barrier of input styles and enable a standard layout of variables and data. These designs allow end-to-end interoperable programming of complex computations and provide best practices options by default.
</description>
<pubDate>Mon, 22 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165250</guid>
<dc:date>2021-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>A deep learning augmented genetic algorithm approach to polycrystalline 2D material fracture discovery and design</title>
<link>https://hdl.handle.net/1721.1/165249</link>
<description>A deep learning augmented genetic algorithm approach to polycrystalline 2D material fracture discovery and design
Lew, Andrew J; Buehler, Markus J
The gestalt of computational methods including physics-based molecular dynamics simulations, data-driven machine learning (ML) models, and biologically-inspired genetic algorithms affords a powerful toolbox for tackling materials mechanism discovery and design problems. Here, we leverage these methods to investigate the complex multidimensional problem of polycrystalline 2D material fracture. We focus first on graphene and in doing so, demonstrate a practical workflow for exploring the structural dependencies of fracture energy. Despite training our ML model on exclusively single crystal fracture in increments of 10° orientations, we can identify a crack branching mechanism responsible for high bicrystal toughness centered at initial crystal orientation angles of 19° and 41°. These high peaks span only a few degrees in range and are completely overlooked by a search with stride 10°. Furthermore, we can discover qualitative physical phenomena such as collective fracture branch termination and extract quantitative trends relating angular dispersion and mis-orientation angles of crystal grains to fracture energy. None of these complex polycrystalline behaviors were presented in the training data, and the predictive power of the model ultimately allows us to expeditiously generate polycrystalline graphene structures with bespoke fracture paths, a task with great implications in industrial design applications and mechanism discovery. Furthermore, the approach is not limited to graphene specifically, as we demonstrate by retraining the model for another more complex 2D material—MoS2—and achieve polycrystalline fracture predictions of comparable accuracy.
</description>
<pubDate>Fri, 10 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165249</guid>
<dc:date>2021-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating the task content of work: workforce design for AI-driven human-robot collaboration in intralogistics</title>
<link>https://hdl.handle.net/1721.1/165248</link>
<description>Estimating the task content of work: workforce design for AI-driven human-robot collaboration in intralogistics
Bouquet, Pierre; Bagnoli, Nicolò Piergiovanni; Sheffi, Yossi
This paper addresses the challenge of strategic workforce planning for AI-driven human-robot collaboration (AI-HRC) in intralogistics. We ask two questions: how can task-level full-time equivalent (FTE) estimates be constructed from existing labour statistics, and how can these estimates, combined with AI exposure metrics, inform strategic AI-HRC design and workforce planning? Drawing on U.S. Bureau of Labor Statistics employment data, O∗NET occupational profiles, and task-level AI exposure scores, we develop a stochastic task-time framework that decomposes occupations into tasks and models task frequencies as probability vectors on the simplex. A covariance-completion procedure reconstructs task covariance matrices consistent with survey standard errors, enabling the translation of occupational data into task-level and detailed work activity (DWA)-level FTE estimates with uncertainty bounds. Applying the framework to the U.S. intralogistics workforce, we find that approximately 370,000 FTEs (about 17% of workers) are concentrated in the top 15% most AI-exposed DWAs. These results provide task-specific insight into AI-driven automation and support scenario-based workforce planning by linking alternative AI-HRC adoption paths to task-level FTE impacts, uncertainty bands, and upskilling priorities, thereby offering an analytical foundation for resilient, human-centered AI-HRC systems.
</description>
<pubDate>Sun, 22 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165248</guid>
<dc:date>2026-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>The stress in a dispersion of mutually polarizable spheres</title>
<link>https://hdl.handle.net/1721.1/165247</link>
<description>The stress in a dispersion of mutually polarizable spheres
Reed, KM; Swan, JW
Dispersions of dielectric and paramagnetic nanoparticles polarize in response to an external electric or magnetic field and can form chains or other ordered structures depending on the strength of the applied field. The mechanical properties of these materials are of interest for a variety of applications; however, computational studies in this area have so far been limited. In this work, we derive expressions for two important properties for dispersions of polarizable spherical particles with dipoles induced by a uniform external field—the isothermal stress tensor and the pressure. Numerical calculations of these quantities, evaluated using a spectrally accurate Ewald summation method, are validated using thermodynamic integration. We also compare the stress obtained using the mutual dipole model, which accounts for the mutual polarization of particles, to the stress expected from calculations using a fixed dipole model, which neglects mutual polarization. We find that as the conductivity of the particles increases relative to the surrounding medium, the fixed dipole model does not accurately describe the dipolar contribution to the stress. The thermodynamic pressure, calculated from the trace of the stress tensor, is compared to the virial expression for the pressure, which is simpler to calculate but inexact. We find that the virial pressure and the thermodynamic pressure differ, especially in suspensions with a high volume fraction of particles.
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165247</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wind farm yaw control set-point optimization under model parameter uncertainty</title>
<link>https://hdl.handle.net/1721.1/165246</link>
<description>Wind farm yaw control set-point optimization under model parameter uncertainty
Howland, Michael F
Wake steering, the intentional yaw misalignment of certain turbines in an array, has demonstrated potential as a wind farm control approach to increase collective power. Existing algorithms optimize the yaw misalignment angle set-points using steady-state wake models and either deterministic frameworks or optimizers that account for wind direction and yaw misalignment variability and uncertainty. Wake models rely on parameterizations of physical phenomena in the mean flow field, such as the wake spreading rate. The wake model parameters are uncertain and vary in time at a wind farm depending on the atmospheric conditions, including turbulence intensity, stability, shear, veer, and other atmospheric features. In this study, we develop a yaw set-point optimization approach that includes model parameter uncertainty in addition to wind condition variability and uncertainty. To enable computationally efficient online set-point optimization under model parameter uncertainty, a simplified, approximate parameter distribution estimation method is used. The optimization is tested in open-loop control numerical experiments using utility-scale wind farm operational data for which the set-point optimization framework with parametric uncertainty has a statistically significant impact on the wind farm power production for certain wind turbine layouts at low turbulence intensity, but the results are not significant for all layouts considered nor at higher turbulence intensity. The set-point optimizer is also tested for closed-loop wake steering control of a model wind farm in large eddy simulations of a convective atmospheric boundary layer (ABL). The yaw set-point optimization with model parameter uncertainty reduced the sensitivity of the closed-loop wake steering control to increases in the yaw controller update frequency. Increases in wind farm power production were not statistically significant due to the high ambient power variability in the turbulent, convective ABL.
</description>
<pubDate>Wed, 21 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165246</guid>
<dc:date>2021-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Native oxide reconstructions on AlN and GaN (0001) surfaces</title>
<link>https://hdl.handle.net/1721.1/165245</link>
<description>Native oxide reconstructions on AlN and GaN (0001) surfaces
Mirrielees, Kelsey J; Dycus, J Houston; Baker, Jonathon N; Reddy, Pramod; Collazo, Ramón; Sitar, Zlatko; LeBeau, James M; Irving, Douglas L
Properties of AlN/GaN surfaces are important for realizing the tunability of devices, as the presence of surface states contributes to Fermi level pinning. This pinning can influence the performance of high electron mobility transistors and is also important for passivation of the surface when developing high-power electronic devices. It is widely understood that both AlN and GaN surfaces oxidize. Since there are many possible reconstructions for each surface, it is a challenge to identify the relevant surface reconstructions in advance of a detailed simulation. Because of this, different approaches are often employed to down select initial structures to reduce the computational load. These approaches usually rely on either electron counting rules or oxide stoichiometry, as both of these models tend to lead to structures that are energetically favorable. Here we explore models from these approaches but also explore a reconstruction of the (0001) surface directly observed using scanning transmission electron microscopy with predictive density functional theory simulations. Two compositions of the observed surface reconstruction—one which obeys oxide stoichiometry and one which is cation deficient and obeys electron counting—are compared to reconstructions from the previous work. Furthermore, surface states are directly calculated using hybrid exchange-correlation functionals that correct for the underestimation of the bandgaps in AlN and GaN and improve the predicted positions of surface states within the gap. It is found that cation deficiency in the observed reconstruction yields surface states consistent with the experiment. Based on all of these results, we provide insight into the observed properties of oxidized AlGaN surfaces.
</description>
<pubDate>Wed, 19 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165245</guid>
<dc:date>2021-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical kinetic mechanisms and scaling of two-dimensional polymers via irreversible solution-phase reactions</title>
<link>https://hdl.handle.net/1721.1/165244</link>
<description>Chemical kinetic mechanisms and scaling of two-dimensional polymers via irreversible solution-phase reactions
Zhang, Ge; Zeng, Yuwen; Gordiichuk, Pavlo; Strano, Michael S
Two-dimensional (2D) polymers are extended networks of multi-functional repeating units that are covalently linked together but confined to a single plane. The past decade has witnessed a surge in interest and effort toward producing and utilizing 2D polymers. However, facile synthesis schemes suitable for mass production are yet to be realized. In addition, unifying theories to describe the 2D polymerization process, such as those for linear polymers, have not yet been established. Herein, we perform a chemical kinetic simulation to study the recent synthesis of 2D polymers in homogeneous solution with irreversible chemistry. We show that reaction sites for polymerization in 2D always scale unfavorably compared to 3D, growing as molecular weight to the 1/2 power vs 2/3 power for 3D. However, certain mechanisms can effectively suppress out-of-plane defect formation and subsequent 3D growth. We consider two such mechanisms, which we call bond-planarity and templated autocatalysis. In the first, although single bonds can easily rotate out-of-plane to render polymerization in 3D, some double-bond linkages prefer a planar configuration. In the second mechanism, stacked 2D plates may act as van der Waals templates for each other to enhance growth, which leads to an autocatalysis. When linkage reactions possess a 1000:1 selectivity (γ) for staying in plane vs rotating, solution-synthesized 2D polymers can have comparable size and yield with those synthesized from confined polymerization on a surface. Autocatalysis could achieve similar effects when self-templating accelerates 2D growth by a factor β of 106. A combined strategy relaxes the requirement of both mechanisms by over one order of magnitude. We map the dependence of molecular weight and yield for the 2D polymer on the reaction parameters, allowing experimental results to be used to estimate β and γ. Our calculations show for the first time from theory the feasibility of producing two-dimensional polymers from irreversible polymerization in solution.
</description>
<pubDate>Mon, 17 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165244</guid>
<dc:date>2021-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>Ionic thermoelectric materials for near ambient temperature energy harvesting</title>
<link>https://hdl.handle.net/1721.1/165243</link>
<description>Ionic thermoelectric materials for near ambient temperature energy harvesting
Liu, Weishu; Qian, Xin; Han, Cheng-Gong; Li, Qikai; Chen, Gang
Ionic thermoelectric (i-TE) materials, using ions as the energy carrier, can generate a voltage under a temperature difference, bearing similarities to the Seebeck effect of electrons and holes in solid-state materials. Recent experiments have demonstrated large thermopower of quasi-solid-state i-TE materials, which are attractive for harvesting ambient heat as large enough voltage can be generated under a small temperature difference to match the voltage input needs of sensors for internet-of-things applications. In this perspective article, we discuss similarities and differences of i-TE materials from electronic-based thermoelectric materials and also different i-TE thermoelectric effects including the thermodiffusion (Soret) effect and the thermogalvanic effect, in which the latter includes redox reaction entropy and the Soret effect. Strategies to improve performances of materials and devices are elaborated, together with needs for future research in understanding microscopic origins of different effects.
</description>
<pubDate>Mon, 11 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165243</guid>
<dc:date>2021-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast low-temperature metal–insulator interface phonon dynamics and heat transport in a Pt/Gd3Fe5O12 heterostructure</title>
<link>https://hdl.handle.net/1721.1/165242</link>
<description>Ultrafast low-temperature metal–insulator interface phonon dynamics and heat transport in a Pt/Gd3Fe5O12 heterostructure
Sri Gyan, Deepankar; Li, Ni; Chen, Zhantao; Geprägs, Stephan; Dietlein, Maxim; Gross, Rudolf; Sato, Takahiro; Sun, Yanwen; Hoffmann, Matthias C; Zhu, Diling; Haskel, Daniel; Strempfer, Jörg; Li, Mingda; Mannix, Danny; Evans, Paul G
Interfacial thermal and acoustic phenomena have an important role in quantum science and technology, including in spintronic and spincaloritronic materials and devices. Simultaneous measurements of the low-temperature thermal and acoustic properties of a metal/insulator heterostructure reveal distinct dynamics in the characteristic phonon frequency ranges of acoustic and thermal transport. The measurements probed a heterostructure consisting of a thin film of Pt on the ferrimagnetic insulator gadolinium iron garnet (Gd3Fe5O12, GdIG) grown epitaxially on a gadolinium gallium garnet substrate. Ultrafast structural dynamics within the Pt layer were tracked using time-resolved ultrafast x-ray diffraction and analyzed to probe interfacial acoustic and thermal properties. The rapid heating of the Pt layer by a 400 nm wavelength femtosecond-duration optical pulse produced transient structural changes that provided the stimulus for these measurements. Rapid heating produced a broadband acoustic pulse that was partially reflected by the Pt/GdIG interface. Temporal frequencies up to 740 GHz, corresponding to angular frequencies of several THz, were detected in a wavelet analysis of the acoustic oscillations of the strain in the Pt layer. The structural results were analyzed to determine (i) the acoustic damping coefficient and phonon mean free path in Pt at frequencies of hundreds of GHz and (ii) the Grüneisen anharmonicity parameter. The thermal conductance of the Pt/GdIG interface was tracked using the slower, tens-of-picosecond-scale, dynamics of the initial cooling of the heated Pt layer. Analysis using a model based on the Boltzmann transport equation shows that the phonon transmission is lower at the phonon frequencies relevant to thermal transport than for subterahertz regime acoustics.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165242</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Two-dimensional electron gases as non-Newtonian fluids</title>
<link>https://hdl.handle.net/1721.1/165241</link>
<description>Two-dimensional electron gases as non-Newtonian fluids
Kryhin, Serhii; Levitov, Leonid
Two-dimensional electron systems offer an appealing platform to explore long-lived excitations arising due to collinear carrier scattering enabled by phase-space constraints at the Fermi surface. Recently it was found that these effects can boost excitation lifetimes over the fundamental bound set by Landau’s Fermi-liquid theory by a factor as large as (TF/T)α with α≈2. Long-lived degrees of freedom possess the capability to amplify the response to weak perturbations, producing lasting collective memory effects. This leads to non-Newtonian hydrodynamics in 2D electron fluids driven by multiple viscous modes with scale-dependent viscosity. We describe these modes as Fermi surface modulations of odd parity evolving in space and time, and discuss their implications for experimental studies of electron hydrodynamics.
</description>
<pubDate>Mon, 30 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165241</guid>
<dc:date>2023-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Humpback whale (Megaptera novaeangliae) breathing sound characteristics from simultaneous above and underwater measurements</title>
<link>https://hdl.handle.net/1721.1/165240</link>
<description>Humpback whale (Megaptera novaeangliae) breathing sound characteristics from simultaneous above and underwater measurements
Radermacher, Max K; Schinault, Matthew E; Seri, Sai Geetha; Mohebbi-Kalkhoran, Hamed; Makris, Nicholas C; Ratilal, Purnima
Humpback whale breathing-related sounds were recorded on elements of a coherent hydrophone array subaperture deployed vertically at the Great South Channel on the US Northeastern continental shelf in Fall 2021, where half of the hydrophones were in-air and the rest submerged underwater. In-air hydrophones recorded breathing sounds with approximately 2.5 s duration, but smaller bandwidths compared to underwater hydrophones where signal energies extended beyond 50 kHz, and a mean underwater source level of 161 ± 4 dB re 1 μPa at 1 m, based on measurements at 22.9 m. The underwater recorded humpback whale breathing sound spectra displayed a broadband dip centered at 15.7 kHz, with approximately 400 Hz half-power bandwidth, likely caused by attenuation from propagation through pulsating air bubbles. The air bubble radius for natural frequency of oscillations at 15.7 kHz is estimated to be 0.205–0.21 mm. These bubbles are capable of removing energy from the forward propagated humpback breathing sounds via resonance absorption most pronounced at and near bubble natural oscillation frequency. Humpback whale distances from the vertically deployed hydrophones are estimated and tracked by matching the curved nonlinear travel-time wavefront of its breathing sounds, since the whale was in the near-field of the subarray.
</description>
<pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165240</guid>
<dc:date>2025-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Estimation of the spatial variability of the New England Mud Patch geoacoustic properties using a distributed array of hydrophones and deep learning</title>
<link>https://hdl.handle.net/1721.1/165239</link>
<description>Estimation of the spatial variability of the New England Mud Patch geoacoustic properties using a distributed array of hydrophones and deep learning
Vardi, Ariel; Dahl, Peter H; Dall'Osto, David; Knobles, David; Wilson, Preston; Leonard, John; Bonnel, Julien
This article presents a spatial environmental inversion scheme using broadband impulse signals with deep learning (DL) to model a single spatially-varying sediment layer over a fixed basement. The method is applied to data from the Seabed Characterization Experiment 2022 (SBCEX22) in the New England Mud-Patch (NEMP). Signal Underwater Sound (SUS) explosive charges generated impulsive signals recorded by a distributed array of bottom-moored hydrophones. The inversion scheme is first validated on a range-dependent synthetic test set simulating SBCEX22 conditions, then applied to experimental data to predict the lateral spatial structure of sediment sound speed and its ratio with the interfacial water sound speed. Traditional geoacoustic inversion requires significant computational resources. Here, a neural network enables rapid single-signal inversion, allowing the processing of 1836 signals along 722 tracks. The method is applied to both synthetic and experimental data. Results from experimental data suggest an increase in both absolute compressional sound speed and sound speed ratio from southwest to northeast in the NEMP, consistent with published coring surveys and geoacoustic inversion results. This approach demonstrates the potential of DL for efficient spatial geoacoustic inversion in shallow water environments.
</description>
<pubDate>Fri, 06 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165239</guid>
<dc:date>2024-12-06T00:00:00Z</dc:date>
</item>
<item>
<title>The JIBO Kids Corpus: A speech dataset of child-robot interactions in a classroom environment</title>
<link>https://hdl.handle.net/1721.1/165238</link>
<description>The JIBO Kids Corpus: A speech dataset of child-robot interactions in a classroom environment
Shankar, Natarajan Balaji; Afshan, Amber; Johnson, Alexander; Mahapatra, Aurosweta; Martin, Alejandra; Ni, Haolun; Park, Hae Won; Perez, Marlen Quintero; Yeung, Gary; Bailey, Alison; Breazeal, Cynthia; Alwan, Abeer
This paper describes an original dataset of children's speech, collected through the use of JIBO, a social robot. The dataset encompasses recordings from 110 children, aged 4–7 years old, who participated in a letter and digit identification task and extended oral discourse tasks requiring explanation skills, totaling 21 h of session data. Spanning a 2-year collection period, this dataset contains a longitudinal component with a subset of participants returning for repeat recordings. The dataset, with session recordings and transcriptions, is publicly available, providing researchers with a valuable resource to advance investigations into child language development.
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165238</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Range-dynamical low-rank split-step Fourier method for the parabolic wave equation</title>
<link>https://hdl.handle.net/1721.1/165237</link>
<description>Range-dynamical low-rank split-step Fourier method for the parabolic wave equation
Charous, Aaron; Lermusiaux, Pierre FJ
Numerical solutions to the parabolic wave equation are plagued by the curse of dimensionality coupled with the Nyquist criterion. As a remedy, a new range-dynamical low-rank split-step Fourier method is developed. The integration scheme scales sub-linearly with the number of classical degrees of freedom in the transverse directions. It is orders of magnitude faster than the classic full-rank split-step Fourier algorithm and saves copious amounts of storage space. This enables numerical solutions of the parabolic wave equation at higher frequencies and on larger domains, and simulations may be performed on laptops rather than high-performance computing clusters. Using a rank-adaptive scheme to optimize the low-rank equations further ensures the approximate solution is highly accurate and efficient. The methodology and algorithms are demonstrated on realistic high-resolution data-assimilative ocean fields in Massachusetts Bay for two three-dimensional acoustic configurations with different source locations and frequencies. The acoustic pressure, transmission loss, and phase solutions are analyzed in the two geometries with seamounts and canyons across and along Stellwagen Bank. The convergence with the rank of the subspace and the properties of the rank-adaptive scheme are demonstrated, and all results are successfully compared with those of the full-rank method when feasible.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165237</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Replayed reef sounds induce settlement of Favia fragum coral larvae in aquaria and field environments</title>
<link>https://hdl.handle.net/1721.1/165236</link>
<description>Replayed reef sounds induce settlement of Favia fragum coral larvae in aquaria and field environments
Aoki, Nadège; Weiss, Benjamin; Jézéquel, Youenn; Apprill, Amy; Mooney, T Aran
Acoustic cues of healthy reefs are known to support critical settlement behaviors for one reef-building coral, but acoustic responses have not been demonstrated in additional species. Settlement of Favia fragum larvae in response to replayed coral reef soundscapes were observed by exposing larvae in aquaria and reef settings to playback sound treatments for 24–72 h. Settlement increased under 24 h sound treatments in both experiments. The results add to growing knowledge that acoustically mediated settlement may be widespread among stony corals with species-specific attributes, suggesting sound could be one tool employed to rehabilitate and build resilience within imperiled reef communities.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165236</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>A deep learning method for reflective boundary estimation</title>
<link>https://hdl.handle.net/1721.1/165235</link>
<description>A deep learning method for reflective boundary estimation
Arikan, Toros; Weiss, Amir; Vishnu, Hari; Deane, Grant B; Singer, Andrew C; Wornell, Gregory W
Environment estimation is a challenging task in reverberant settings such as the underwater and indoor acoustic domains. The locations of reflective boundaries, for example, can be estimated using acoustic echoes and leveraged for subsequent, more accurate localization and mapping. Current boundary estimation methods are constrained to high signal-to-noise ratios or are customized to specific environments. Existing methods also often require a correct assignment of echoes to boundaries, which is difficult if spurious echoes are detected. To evade these limitations, a convolutional neural network (NN) method is developed for robust two-dimensional boundary estimation, given known emitter and receiver locations. A Hough transform-inspired algorithm is leveraged to transform echo times of arrival into images, which are amenable to multi-resolution regression by NNs. The same architecture is trained on transform images of different resolutions to obtain diverse NNs, deployed sequentially for increasingly refined boundary estimation. A correct echo labeling solution is not required, and the method is robust to reverberation. The proposed method is tested in simulation and for real data from a water tank, where it outperforms state-of-the-art alternatives. These results are encouraging for the future development of data-driven three-dimensional environment estimation with high practical value in underwater acoustic detection and tracking.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165235</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically orthogonal narrow-angle parabolic equations for stochastic underwater sound propagation. Part I: Theory and schemes</title>
<link>https://hdl.handle.net/1721.1/165234</link>
<description>Dynamically orthogonal narrow-angle parabolic equations for stochastic underwater sound propagation. Part I: Theory and schemes
Ali, Wael H; Lermusiaux, Pierre FJ
Robust informative acoustic predictions require precise knowledge of ocean physics, bathymetry, seabed, and acoustic parameters. However, in realistic applications, this information is uncertain due to sparse and heterogeneous measurements and complex ocean physics. Efficient techniques are thus needed to quantify these uncertainties and predict the stochastic acoustic wave fields. In this work, we derive and implement new stochastic differential equations that predict the acoustic pressure fields and their probability distributions. We start from the stochastic acoustic parabolic equation (PE) and employ the instantaneously-optimal Dynamically Orthogonal (DO) equations theory. We derive stochastic DO-PEs that dynamically reduce and march the dominant multi-dimensional uncertainties respecting the nonlinear governing equations and non-Gaussian statistics. We develop the dynamical reduced-order DO-PEs theory for the Narrow-Angle parabolic equation and implement numerical schemes for discretizing and integrating the stochastic acoustic fields.
</description>
<pubDate>Thu, 25 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165234</guid>
<dc:date>2024-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamically orthogonal narrow-angle parabolic equations for stochastic underwater sound propagation. Part II: Applications</title>
<link>https://hdl.handle.net/1721.1/165233</link>
<description>Dynamically orthogonal narrow-angle parabolic equations for stochastic underwater sound propagation. Part II: Applications
The stochastic dynamically orthogonal (DO) narrow-angle parabolic equations (NAPEs) are exemplified and their properties and capabilities are described using three new two-dimensional stochastic range-independent and range-dependent test cases with uncertain sound speed field, bathymetry, and source location. We validate results against ground-truth deterministic analytical solutions and direct Monte Carlo (MC) predictions of acoustic pressure and transmission loss fields. We verify the stochastic convergence and computational advantages of the DO-NAPEs and discuss the differences with normal mode approaches. Results show that a single DO-NAPE simulation can accurately predict stochastic range-dependent acoustic fields and their non-Gaussian probability distributions, with computational savings of several orders of magnitude when compared to direct MC methods. With their coupling properties and their adaptation in range to the dominant uncertainties, the DO-NAPEs are shown to predict accurate statistics, from mean and variance to multiple modes and full probability distributions, and to provide excellent reconstructed realizations, from amplitudes and phases to other specific properties of complex realization fields.
</description>
<pubDate>Thu, 25 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165233</guid>
<dc:date>2024-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Weird A.I.</title>
<link>https://hdl.handle.net/1721.1/165227</link>
<description>Weird A.I.
Lindquist, Benjamin
This essay sketches out what I call “weird AI”: probabilistic, associative systems that behave as if they feel. It shows how midcentury architects of artificial neural networks deliberately courted the uncanny as they engineered space for intuition, emotion, and nonrational thought, even as standard histories cast computing as an Enlightenment project of calculation and control. To make sense of artificial intelligence’s past and present, historians must move beyond an information-centric framework and reckon with the affective undercurrents that have shaped the field from its start.
</description>
<pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165227</guid>
<dc:date>2026-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Vibrotactile stimulation at gamma frequency mitigates pathology related to neurodegeneration and improves motor function</title>
<link>https://hdl.handle.net/1721.1/165226</link>
<description>Vibrotactile stimulation at gamma frequency mitigates pathology related to neurodegeneration and improves motor function
Suk, Ho-Jun; Buie, Nicole; Xu, Guojie; Banerjee, Arit; Boyden, Edward S; Tsai, Li-Huei
The risk for neurodegenerative diseases increases with aging, with various pathological conditions and functional deficits accompanying these diseases. We have previously demonstrated that non-invasive visual stimulation using 40 Hz light flicker ameliorated pathology and modified cognitive function in mouse models of neurodegeneration, but whether 40 Hz stimulation using another sensory modality can impact neurodegeneration and motor function has not been studied. Here, we show that whole-body vibrotactile stimulation at 40 Hz leads to increased neural activity in the primary somatosensory cortex (SSp) and primary motor cortex (MOp). In two different mouse models of neurodegeneration, Tau P301S and CK-p25 mice, daily exposure to 40 Hz vibrotactile stimulation across multiple weeks also led to decreased brain pathology in SSp and MOp. Furthermore, both Tau P301S and CK-p25 mice showed improved motor performance after multiple weeks of daily 40 Hz vibrotactile stimulation. Vibrotactile stimulation at 40 Hz may thus be considered as a promising therapeutic strategy for neurodegenerative diseases with motor deficits.
</description>
<pubDate>Thu, 18 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165226</guid>
<dc:date>2023-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Traffic management protocols for advanced air mobility</title>
<link>https://hdl.handle.net/1721.1/165225</link>
<description>Traffic management protocols for advanced air mobility
Chin, Christopher; Qin, Victor; Gopalakrishnan, Karthik; Balakrishnan, Hamsa
The demand for advanced air mobility (AAM) operations is expected to be at a much larger scale than conventional aviation. Additionally, AAM flight operators are likely to compete in providing a range of on-demand services in congested airspaces, with varying operational costs. These characteristics motivate the need for the development of new traffic management algorithms for advanced air mobility. In this paper, we explore the use of traffic management protocols (“rules-of-the-road” for airspace access) to enable efficient and fair operations. First, we show that it is possible to avoid gridlock and improve efficiency by leveraging the concepts of cycle detection and backpressure. We then develop a cost-aware traffic management protocol based on the second-price auction. Using simulations of representative advanced air mobility scenarios, we demonstrate that our traffic management protocols can help balance efficiency and fairness, in both the operational and the economic contexts.
</description>
<pubDate>Wed, 17 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165225</guid>
<dc:date>2023-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>If blockchain is the solution, robot security is the problem</title>
<link>https://hdl.handle.net/1721.1/165224</link>
<description>If blockchain is the solution, robot security is the problem
Ferrer, Eduardo Castelló
Robotics systems of all types are revolutionizing a wide variety of industries—transportation, manufacturing, and even healthcare—and yet, many essential ingredients for robotics systems in the real world are not technologically ready for deployment. Currently, robots lack the protocols and standards required to be safe and secure outside factories. In an attempt to close this gap, recent research has demonstrated the security benefits of combining robotics systems with blockchain-based and related technologies (e.g., smart contracts, zero-knowledge proofs, Merkle trees). In this perspective article, I argue that blockchain-based robotics is starting to provide innovative solutions (e.g., secure data sharing, consensus mechanisms, and new interaction methods) to urgent problems of robot security. I list the most important takeaways so far from this emerging field of research that I helped establish together with a growing community. I close the article by discussing the implications of the security challenges that the robotics research community is facing, and possible ways for us to move forward.
</description>
<pubDate>Tue, 16 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165224</guid>
<dc:date>2023-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Quantification of heat vulnerability using system dynamics</title>
<link>https://hdl.handle.net/1721.1/165223</link>
<description>Quantification of heat vulnerability using system dynamics
Bayomi, Norhan; Fernandez, John E
One of the major climate threats is extreme heat events, as they pose significant risks to public health that are well documented in the epidemiologic literature. The impacts of extreme heat have been evident in several extreme heat events worldwide over the past years. With growing concerns about future heat exposure, numerous studies in the literature have developed heat vulnerability indices based on determinants that have heat-related impacts. However, there has been limited guidance on heat vulnerability assessment that accounts for the impacts of the characteristics of the built environment and changes in population dynamics over time. This paper focuses on developing the methodology for heat vulnerability assessment in urban areas using System Dynamics (SD) based on integrating three levels of the physical urban environment: the urban level, the building level, and the human adaptive capacity to heat exposure. We examine the viability of using SD modeling as an approach to examine the key drivers in heat vulnerability assessment in urban areas. Thus, the paper assesses the dynamic relationship between heat vulnerability components, namely, Susceptibility, Exposure, Coping Capacity, and Adaptive Capacity, and their effect on increased or decreased vulnerability under extreme heat events. The paper concludes with an applied case study in Cairo, Egypt, to test the use of the SD approach in assessing heat vulnerability in urban settings. Results from the proposed SD model confirm the underlying hypothesis that vulnerability from heat exposure is dynamically linked to the coping and adaptive capacity of the surrounding built environment with the urban population’s socioeconomic characteristics. The main contribution of this approach is that it allows for parallel examination of the effect of the human system that simulation models cannot include and the performance of the built environment system that epidemiologic heat vulnerability studies cannot capture.
</description>
<pubDate>Tue, 04 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165223</guid>
<dc:date>2023-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting one-year left ventricular mass index regression following transcatheter aortic valve replacement in patients with severe aortic stenosis: A new era is coming</title>
<link>https://hdl.handle.net/1721.1/165222</link>
<description>Predicting one-year left ventricular mass index regression following transcatheter aortic valve replacement in patients with severe aortic stenosis: A new era is coming
Asheghan, Mohammad Mostafa; Javadikasgari, Hoda; Attary, Taraneh; Rouhollahi, Amir; Straughan, Ross; Willi, James Noel; Awal, Rabina; Sabe, Ashraf; de la Cruz, Kim I; Nezami, Farhad R
Aortic stenosis (AS) is the most common valvular heart disease in the western world, particularly worrisome with an ever-aging population wherein postoperative outcome for aortic valve replacement is strongly related to the timing of surgery in the natural course of disease. Yet, guidelines for therapy planning overlook insightful, quantified measures from medical imaging to educate clinical decisions. Herein, we leverage statistical shape analysis (SSA) techniques combined with customized machine learning methods to extract latent information from segmented left ventricle (LV) shapes. This enabled us to predict left ventricular mass index (LVMI) regression a year after transcatheter aortic valve replacement (TAVR). LVMI regression is an expected phenomenon in patients who have undergone aortic valve replacement and is reported to be tightly correlated with survival one and five years after the intervention. In brief, LV geometries were extracted from medical images of a cohort of AS patients using deep learning tools, and then analyzed to create a set of statistical shape models (SSMs). Then, the supervised shape features were extracted to feed a support vector regression (SVR) model to predict the LVMI regression. The average accuracy of the predictions was validated against clinical measurements by calculating the root mean square error and R² score, which yielded the satisfactory values of 0.28 and 0.67, respectively, on test data. Our work reveals the promising capability of advanced mathematical and bioinformatics approaches such as SSA and machine learning to improve medical output prediction and treatment planning.
</description>
<pubDate>Tue, 04 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165222</guid>
<dc:date>2023-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Aesthetic chills cause an emotional drift in valence and arousal</title>
<link>https://hdl.handle.net/1721.1/165221</link>
<description>Aesthetic chills cause an emotional drift in valence and arousal
Jain, Abhinandan; Schoeller, Felix; Horowitz, Adam; Hu, Xiaoxiao; Yan, Grace; Salomon, Roy; Maes, Pattie
Aesthetic chills are an embodied peak emotional experience induced by stimuli such as music, films, and speeches and characterized by dopaminergic release. The emotional consequences of chills in terms of valence and arousal are still debated and the existing empirical data is conflicting. In this study, we tested the effects of ChillsDB, an open-source repository of chills-inducing stimuli, on the emotional ratings of 600+ participants. We found that participants experiencing chills reported significantly more positive valence and greater arousal during the experience, compared to participants who did not experience chills. This suggests that the embodied experience of chills may influence one’s perception and affective evaluation of the context, in favor of theoretical models emphasizing the role of interoceptive signals such as chills in the process of perception and decision-making. We also found an interesting pattern in the valence ratings of participants, which tended to harmonize toward a similar mean after the experiment, though initially disparately distributed. We discuss the significance of these results for the diagnosis and treatment of dopaminergic disorders such as Parkinson’s, schizophrenia, and depression.
</description>
<pubDate>Tue, 07 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165221</guid>
<dc:date>2023-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>On projection and the shadow of [wh]</title>
<link>https://hdl.handle.net/1721.1/165220</link>
<description>On projection and the shadow of [wh]
Newman, Elise
Much work has shown that wh-movement is subject to several kinds of locality restrictions cross-linguistically. I propose that these restrictions arise when [wh] projects past the maximal projection that it came from. When this happens, it intervenes for wh-movement, trapping the wh-element in its base position, unless it can escape through other means. The need to escape the domain of [wh] is proposed to capture the distribution of successive-cyclic movement and interactions with Voice in different languages. Different locality conditions in different languages are captured by different distributions of the same features and probes.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165220</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Discussion of “Experimental Design and Modeling for Forward-Inverse Maps” by R. Barton &amp; M. Morris, appearing in Technometrics</title>
<link>https://hdl.handle.net/1721.1/165219</link>
<description>Discussion of “Experimental Design and Modeling for Forward-Inverse Maps” by R. Barton &amp; M. Morris, appearing in Technometrics
Marzouk, Youssef M.
Inverse design—essentially the problem of finding system parameter values that achieve a given performance metric—is an enormously important problem across a wide range of engineering fields. Typical methods for inverse design employ a forward model (for example, a complex computer simulation mapping design parameters to performance metrics) and embed it in an optimization loop. If the forward model is a black box, for which direct evaluation of gradients is intractable, then one must resort to derivative-free or so-called “zeroth order” optimization approaches (e.g., Močkus 1975; Jones, Schonlau, and Welch 1998; Conn, Scheinberg, and Vicente 2009; Larson, Menickelly, and Wild 2019). Most of these approaches iteratively construct a metamodel for the forward map during optimization. Barton and Morris (henceforth “the authors” or “BM”) propose instead to build an inverse metamodel, that is, a computationally inexpensive approximation of the performance metric-to-parameter map. The promise of such an inverse metamodel is that it makes inverse design much faster and more direct, bypassing the need for explicit optimization.
</description>
<pubDate>Tue, 20 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165219</guid>
<dc:date>2025-05-20T00:00:00Z</dc:date>
</item>
<item>
<title>Where Are All the CFOs?</title>
<link>https://hdl.handle.net/1721.1/165218</link>
<description>Where Are All the CFOs?
Wright, Randall S.
In my 38 years with MIT’s Office of Corporate Relations, where I worked with and advised executives, I saw chief executive officers (CEOs), company presidents, managing directors, general managers, chief information officers (CIOs), chief technology officers (CTOs), vice presidents, chief scientists, chief counsels, directors of innovation hubs, managers of strategic alliances, technology entrepreneurs, managers of supply chains, presidents of European Trade Commissions, scientific attachés, senior NATO officers, ministers of economics, speakers of parliaments, chancellors, governors, Members of Congress . . .
</description>
<pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165218</guid>
<dc:date>2026-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>An Alternative to Probability Tables for Modeling Unresolved Resonance Behavior</title>
<link>https://hdl.handle.net/1721.1/165217</link>
<description>An Alternative to Probability Tables for Modeling Unresolved Resonance Behavior
Ridley, Gavin; Forget, Benoit
We first review the requirements of Monte Carlo neutron transport codes to accurately model the phenomena associated with the unresolved resonance regime and present the first rigorous justification for the modeling assumption employed by probability tables for unresolved resonances. We present a new method named AURA (analytic unresolved resonance algorithm) for modeling unresolved resonance cross-sections with the normal inverse Gaussian distribution. The new method accurately models temperature dependence in the unresolved resonance region (URR), requires fewer data than previous methods, and outperforms the conventional URR treatment on GPUs.
</description>
<pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165217</guid>
<dc:date>2025-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Sampling of the Egelstaff-Schofield Scattering Law and the Phonon Sampling Method for Liquids</title>
<link>https://hdl.handle.net/1721.1/165216</link>
<description>Direct Sampling of the Egelstaff-Schofield Scattering Law and the Phonon Sampling Method for Liquids
Ridley, Gavin; Forget, Benoit
The phonon sampling method (PSM) enables accurate and efficient simulation of thermal neutron scattering from solids; it obviates a substantial degree of discretization, enabling continuous representations of scattering law behavior across a variety of conditions. Despite these strengths, the PSM relies on the phonon expansion, which only applies to solid materials. To remove this limitation, we demonstrate an efficient sampling algorithm for the nonsymmetric translational S(α,β) distribution of Egelstaff and Schofield, and show how this can be used to extend the PSM for the simulation of thermal neutron scattering in materials exhibiting mixed translational and vibrational behavior, such as water or molten salts. We demonstrate the correctness of the method with example results for scattering from room temperature water at a few incident neutron energies.
</description>
<pubDate>Mon, 15 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165216</guid>
<dc:date>2025-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary Verification of Fixed-Source Sensitivity Analysis in OpenMC</title>
<link>https://hdl.handle.net/1721.1/165215</link>
<description>Preliminary Verification of Fixed-Source Sensitivity Analysis in OpenMC
Ebiwonjumi, Bamidele; Forget, Benoit; Peterson, Ethan
Sensitivity analysis capabilities have yet to find extensive use in fusion reactor design applications where they can help understand the impact of nuclear data uncertainties on the tritium breeding ratio (TBR), shutdown dose rates, and nuclear heating. Significant uncertainty exists in nuclear data for fusion applications, and the goal of this work is to explore whether adjoint- and perturbation theory–based eigenvalue and generalized response sensitivity methods recently developed within the OpenMC Monte Carlo code can be extended to fixed-source Monte Carlo radiation transport simulations.

This paper presents a derivation for the extended sensitivity analysis method. The adjoint-based sensitivity coefficients are compared with Monte Carlo SERPENT sensitivity coefficients for a TBR calculation in a simplified ARC-class tokamak. Further verification with the Monte Carlo MCSEN and deterministic ASUSD sensitivity coefficients for the tritium production rate in the Frascati neutron generator helium-cooled pebble bed test blanket module mock-up experiment was performed. The OpenMC sensitivity coefficients were found to agree with the other code systems. The use of the sensitivity coefficients with nuclear data covariance within the sandwich rule for cross-section uncertainty propagation also showed good agreement with the reference total Monte Carlo nuclear data uncertainty.
</description>
<pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165215</guid>
<dc:date>2026-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Supply chain mapping through retrieval-augmented generation: applications to the electronics industry</title>
<link>https://hdl.handle.net/1721.1/165214</link>
<description>Supply chain mapping through retrieval-augmented generation: applications to the electronics industry
Jackson, Ilya; Saénz, Maria Jesus; Ivanov, Dmitry; Ma, Benedict Jun
This paper presents a novel methodology for automated multi-tier supply chain mapping, leveraging Retrieval-Augmented Generation (RAG) and network science techniques. We developed an RAG-based approach that extracts supplier-customer relationships from unstructured public data sources, including SEC 10-K filings and earnings calls. The extracted entities are structured into a directed supply chain graph and analysed using network science metrics such as centrality, modularity, and path length. The case study focuses on three of the largest contract manufacturers in the electronics industry: Hon Hai Precision Industry (Foxconn), Flex Ltd., and Jabil Inc. Our findings demonstrate that Generative AI (GAI), specifically LLMs enhanced with RAG, can construct scalable and comprehensive supply chain graphs. The proof of concept is successful, as evidenced by the construction of a directed supply chain graph encompassing 4,644 nodes and 8,341 edges across these three manufacturers.
</description>
<pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165214</guid>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Flash Flood Inundation and Assessing Damage Using Satellite Earth Observations: The Case of 2022 Flash Flood in Bangladesh</title>
<link>https://hdl.handle.net/1721.1/165213</link>
<description>Quantifying Flash Flood Inundation and Assessing Damage Using Satellite Earth Observations: The Case of 2022 Flash Flood in Bangladesh
Sariful, Md; Mitra, Juthi Rani; Rahman, Munshi Khaledur
Floods pose significant threats to Bangladesh, frequently causing the loss of lives, properties, infrastructure, and livelihoods. This study examines the spatial and temporal dynamics of the 2022 Sylhet flood by integrating Sentinel-1 Synthetic Aperture Radar (SAR) imagery with socioeconomic datasets. The Otsu thresholding method was used to estimate water extent before, during, and after the 2022 flood event in the Sylhet region. Results suggest that over 1,600 km² of land in Sylhet division was inundated, with Habiganj district experiencing the highest surge. More than half a million people were directly impacted, alongside significant damage to cropland and urban areas. Furthermore, the study highlighted substantial impacts on urban built-up areas and cropland across districts. Cumulative cropland damage increased from 3,376.30 km² in June to 4,701.30 km² in July, indicating severe consequences for agricultural productivity. Concurrently, the affected built-up areas were found to be 40.9 km², emphasizing the impact on human settlements. By providing a detailed assessment of flood extent and damage, this study provides valuable insights for policymakers, planners, and disaster management authorities. The findings support the development of targeted strategies for flood risk reduction, agricultural resilience, and the mitigation of future disaster impacts in vulnerable regions of Bangladesh.
</description>
<pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165213</guid>
<dc:date>2026-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>GeoXCP: uncertainty quantification of spatial explanations in explainable AI</title>
<link>https://hdl.handle.net/1721.1/165212</link>
<description>GeoXCP: uncertainty quantification of spatial explanations in explainable AI
Lou, Xiayin; Luo, Peng; Li, Ziqi; Gao, Song; Meng, Liqiu
Understanding and explaining complex geographic phenomena—ranging from climate change to socioeconomic disparities—is a central focus in both geography and the broader scientific community. Various methods have been developed to elucidate relationships between variables, from coefficient estimates in linear regression models to the increasingly dominant use of feature attribution scores in Explainable AI (XAI) techniques. However, explanations generated by XAI methods often carry uncertainty, stemming from the model itself and the data used to train the model. Despite the critical importance of accounting for such uncertainty, this issue remains largely overlooked in the geospatial domain. In this study, we developed an uncertainty quantification framework for XAI explanations based on conformal prediction, termed Geospatial eXplanation Conformal Prediction (GeoXCP). By incorporating spatial dependence into the modeling process, GeoXCP produced spatially adaptive explanations with calibrated uncertainty estimates. We validated the effectiveness of GeoXCP through extensive simulation experiments and real-world datasets. The results demonstrated that GeoXCP provided reliable explanations while effectively quantifying uncertainty across diverse geospatial scenarios. Our approach represents a significant advancement in explainable geospatial machine learning, enabling decision-makers to better assess the trustworthiness of model-driven insights. The proposed framework was implemented in a Python package, named GeoXCP.
</description>
<pubDate>Fri, 10 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165212</guid>
<dc:date>2025-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Rapidly innovating firms: patent lifecycle and support for trade and IP enforcement</title>
<link>https://hdl.handle.net/1721.1/165211</link>
<description>Rapidly innovating firms: patent lifecycle and support for trade and IP enforcement
Cha, Sujin; Lee, Jieun; Osgood, Iain; Park, Sojun
The rate of technological innovation and the speed with which old technologies are discarded are fundamental features of industries in the modern economy. How do these factors shape the politics of trade and trade agreements? We describe two potential channels. In one, rapid innovators support trade agreements and trade liberalization because of their privileged competitive position at the cutting edge of innovation. In the other, rapid innovators support trade agreements and intellectual property (IP) enforcement due to their particular need for speed in patent approval and enforcement. By matching data on patent lifecycles to data on United States (US) tariffs and corporate support for trade agreements and IP enforcement, we test these two theoretical accounts. We find evidence consistent with both. We conclude that rapidly innovating firms are strong supporters of globalization, though their political successes have also contributed to contention over the contemporary international order, both at home and abroad.
</description>
<pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165211</guid>
<dc:date>2026-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Grounding infrastructure: community ownership of an international cooperation project in Kibera, Nairobi</title>
<link>https://hdl.handle.net/1721.1/165210</link>
<description>Grounding infrastructure: community ownership of an international cooperation project in Kibera, Nairobi
Williams, Sarah; Carolini, Gabriella
International infrastructure projects are deeply ensconced in debates about power, intentions, and imaginaries of progress that also stir debate about what it means to decolonize international relations and projects that nominally build improvements in low-income environments. Traditional prioritization of technical expertise, the criticality of contextual knowledge, and the dynamics of uneven partnerships involved are all central elements of these pronounced tensions in the practice of international infrastructure development. This paper instead describes an international infrastructure project that is centred on community stewardship and co-design. The infrastructure project Living Data Hubs (LDH), highlighted here, is a small-scale information and communication technology (ICT) and data management project in the informal settlement of Kibera in Nairobi, Kenya. In its development, equal attention was afforded to technical, conceptual, and procedural knowledge production, ensuring that both universal and contextual elements of the ICT infrastructure and data management were explicitly discussed and valued. We argue that international partnerships like LDH that give space to technical, conceptual, and procedural capacity building alike can produce community-stewarded infrastructure that is sustainable and puts forward a pathway for diminishing uneven power relations and enhancing community well-being.
</description>
<pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165210</guid>
<dc:date>2026-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Writing revolutions: From Haiti to MIT to Palestine through the lenses of linguistics and history for decolonization and liberation</title>
<link>https://hdl.handle.net/1721.1/165209</link>
<description>Writing revolutions: From Haiti to MIT to Palestine through the lenses of linguistics and history for decolonization and liberation
DeGraff, Michel
In this reflexive essay, I explore, through historical and linguistic lenses, the similarities between the freedom struggles of Haitians and Palestinians. One of my goals is to uncover certain pervasive mechanisms of oppression and resistance. As I do so, I reflect on my personal and professional journey as a Haitian scholar–activist at MIT, highlighting the political repression I have faced after proposing an elective “Special Topics” seminar on language and linguistics for decolonization and liberation in Haiti, Palestine and Israel. Employing distinct yet complementary analytical approaches, I dissect the complex interplay between language and history in the Haitian and Palestinian struggles. By integrating historical perspectives with linguistic analysis, I illustrate how these elements collectively contribute to forging paths toward decolonization and liberation, at both the individual and collective levels.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165209</guid>
</item>
<item>
<title>Advancing Pedestrian Models: A Comparative Review and Vision for the Future</title>
<link>https://hdl.handle.net/1721.1/165208</link>
<description>Advancing Pedestrian Models: A Comparative Review and Vision for the Future
Zafri, Niaz Mahmud; Sevtsuk, Andres
Problem, research strategy, and findings&#13;
Pedestrian mobility is essential for creating sustainable, healthy, and equitable cities, yet pedestrian modeling remains underdeveloped compared with vehicle-centric approaches. To advance the state of the art, we critically review four available pedestrian modeling frameworks—urban network analysis (UNA), multi-agent transport simulation (MATSim), model of pedestrian demand (MoPeD), and spatial design network analysis (sDNA)—through the lens of the traditional four-step transportation modeling framework. We assess their methodological foundations, capabilities, practical applications, and limitations and outline future directions for improving modeling practice. UNA and sDNA offer high-resolution, trip-based network analyses; MATSim supports agent- and activity-based multimodal simulations; and MoPeD estimates grid-level pedestrian demand. Despite these strengths, several key gaps remain: Most models focus predominantly on utilitarian walking, neglect leisure and social activities, typically assume homogeneous pedestrian behavior by overlooking sociodemographic variations, face shortcomings with mode choice estimation, and are rarely applied in informal urban contexts. Furthermore, limited availability of pedestrian count data continues to constrain effective model calibration and validation.&#13;
&#13;
Takeaway for practice&#13;
We propose that researchers and planners adopt a human-centered, inclusive, and policy-aligned modeling agenda, emphasizing simple yet intuitive metrics that capture the full spectrum of walking benefits, supporting early-stage planning even in data-scarce contexts, fostering stronger collaboration with practitioners, and promoting a modular, adaptable modeling ecosystem. Ultimately, reorienting pedestrian models as flexible decision support tools—rather than narrowly focused forecasting instruments—can meaningfully support the development of more walkable cities.
</description>
<pubDate>Mon, 23 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165208</guid>
<dc:date>2026-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Reasons for Non-Agents</title>
<link>https://hdl.handle.net/1721.1/165207</link>
<description>Reasons for Non-Agents
Watkins, Eliot
According to a standard picture, normative reasons do not extend beyond the boundaries of agency. If something isn’t an agent, then there can’t be normative reasons for it to do one thing rather than another. This paper argues that the standard picture is false. There are reasons for smoke detectors to alarm when exposed to smoke, and for Venus flytraps to close around their prey when stimulated. I argue that the collapse of the standard picture has important implications for philosophical debates about reasons, and especially serious consequences for theories that analyse normative reasons in terms of the standards of good practical reasoning.
</description>
<pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165207</guid>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Fully 3D-Printed Electric Motor Manufactured via Multi-Modal, Multi-Material Extrusion</title>
<link>https://hdl.handle.net/1721.1/165206</link>
<description>Fully 3D-Printed Electric Motor Manufactured via Multi-Modal, Multi-Material Extrusion
Cañada, Jorge; Bigelow, Zoey; Velásquez-García, Luis Fernando
Material extrusion additive manufacturing can process a wide variety of functional materials including electrically conductive, magnetic, and mechanically compliant polymer composites. While filaments developed for 3D printing often exhibit limited functionality, highly loaded functional composites originally formulated for specialised manufacturing processes can be processed via material extrusion. In this work, a commercial multi-material extrusion 3D printer was modified to process conductive inks, soft and hard magnetic composite pellets, and rigid and compliant polymeric filaments. Using this system, solenoids, hard magnets, and springs were fabricated. These components were combined through straightforward assembly to demonstrate the first fully 3D-printed electric motor — a linear actuator composed of five distinct functional materials: dielectric, electrically conductive, soft magnetic, hard magnetic, and flexible. The solenoids produced up to 2.03 mT magnetic fields, the magnets generated up to 71 mT magnetic fields, and the linear actuator attained a maximum displacement of 318 μm at its resonant frequency (41.6 Hz). This study demonstrates the capability of multi-modal, multi-material extrusion 3D printing to fabricate all critical components of electrical machines, with magnetisation of the hard magnets being the only post-printing step. This milestone advances multi-material, multi-functional 3D printing towards implementing in-situ, customised, low-waste, and low-cost functional hardware.
</description>
<pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165206</guid>
<dc:date>2026-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Partial Identification of Individual-Level Parameters Using Aggregate Data in a Nonparametric Model</title>
<link>https://hdl.handle.net/1721.1/165205</link>
<description>Partial Identification of Individual-Level Parameters Using Aggregate Data in a Nonparametric Model
Moon, Sarah
I develop a methodology to partially identify linear combinations of conditional mean outcomes when the researcher only has access to aggregate data. Unlike the existing literature, I only allow for marginal, not joint, distributions of covariates in my model of aggregate data. Bounds are obtained by solving an optimization program and can easily accommodate additional polyhedral shape restrictions. I provide a procedure to construct confidence intervals on the identified set and demonstrate the performance of my method in a simulation study. In an empirical illustration of the method using Rhode Island standardized exam data, I find that conditional pass rates vary across student subgroups and across counties.
</description>
<pubDate>Mon, 08 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165205</guid>
<dc:date>2025-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping suburbia: the one-and-a-half-century evolution of setback regulations in American neighbourhood development</title>
<link>https://hdl.handle.net/1721.1/165204</link>
<description>Shaping suburbia: the one-and-a-half-century evolution of setback regulations in American neighbourhood development
Zhu, Chenhao; Ben-Joseph, Eran
A comprehensive understanding of setback regulations is vital to American neighbourhood development and ongoing zoning reform efforts; however, the topic remains underexplored. By examining zoning codes, historical documents, norms, and development plans, this study traces the evolution of setback regulations from the 19th century to the present. The results map out the historical trajectory of setback regulations, highlight their evolving roles in shaping neighbourhood development, and synthesize the regulatory and dimensional trends. Building on the findings, insights are offered into the multifaceted roles of setbacks, the implications of identified historical periods, and recommendations for innovating future setback regulations.
</description>
<pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165204</guid>
<dc:date>2026-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>The European Union’s CBAM: averting emissions leakage or promoting the diffusion of carbon pricing?</title>
<link>https://hdl.handle.net/1721.1/165203</link>
<description>The European Union’s CBAM: averting emissions leakage or promoting the diffusion of carbon pricing?
Mehling, Michael A.; Dolphin, Geoffroy; Ritz, Robert A.
Adopted in 2023, the Carbon Border Adjustment Mechanism (CBAM) is a significant component of the European Union’s ambitious decarbonization strategy under the European Green Deal. This article questions the output effectiveness of the CBAM in achieving its stated objective, prevention of carbon leakage, while demonstrating its impact effectiveness as an instrument for advancing the global diffusion of carbon pricing. Empirical evidence for carbon leakage remains sparse, and implementation challenges might limit the capacity of the CBAM to counteract leakage even where it occurs. Nonetheless, the CBAM has already demonstrated a powerful spillover effect by incentivizing the acceleration of carbon pricing roadmaps across EU trade partners, suggesting that trade-related climate measures can effectively encourage global climate action. As the EU navigates the complexities of operationalizing the CBAM, it must balance several tradeoffs to maintain this important spillover effect. If successful, the CBAM could catalyze a virtuous cycle of carbon pricing adoption, reinforcing its potentially pivotal role in the EU’s toolbox to manage the environment-trade nexus.
</description>
<pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165203</guid>
<dc:date>2025-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Shadow players of the eviction crisis: identifying and characterizing professional evicting attorneys in Massachusetts</title>
<link>https://hdl.handle.net/1721.1/165202</link>
<description>Shadow players of the eviction crisis: identifying and characterizing professional evicting attorneys in Massachusetts
Aizman, Asya; Huntley, Eric Robsky
This paper examines an under-studied class of actors in the housing system: attorneys representing landlords. We use a cluster-detection algorithm to identify salient clusters of attorneys based on their scale of operations. We then characterize these groups—what we call professional, active, less active, and least active evicting and tenant attorneys—using metrics related to the geographic scope of their practice, their prevalence in eviction court, case outcomes, and client base. We find large differences between the practices of professional and less active landlord attorneys: professional attorneys’ cases are resolved more quickly, more regularly result in executions, affect more varied geographies, and have a greater proportion of institutional clients. Situating these findings in a growing body of literature on eviction and advocacy for tenants’ Right to Counsel, we argue that professional evicting attorneys—most of whom are named or founding principals—are not neutral actors, but are contributing to the worsening eviction crisis in class solidarity with landlords.
</description>
<pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165202</guid>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Digital Twin Modeling with Control: Integration of Extended Kalman Filter-Based Recursive Sparse Nonlinear Identification with Model Predictive Control</title>
<link>https://hdl.handle.net/1721.1/165201</link>
<description>Adaptive Digital Twin Modeling with Control: Integration of Extended Kalman Filter-Based Recursive Sparse Nonlinear Identification with Model Predictive Control
Wang, Jingyi; Cao, Liang; Cao, Yankai; Gopaluni, Bhushan
The adoption of digital twins has revolutionized industrial process simulation, monitoring, and control. However, practical implementations of digital twins are hindered by substantial challenges, including extended development time, diminishing model accuracy, and restricted interactive capabilities. To address these critical issues, this paper proposes a comprehensive digital twin development framework that integrates digital twin identification, real-time model updating, and advanced process control. The proposed approach first identifies the offline digital twin model through the sparse identification of nonlinear dynamics algorithm, reducing digital twin development time while maintaining model fidelity. The identified model is then updated by an extended Kalman filter to mitigate the problem of diminishing accuracy. Finally, incorporating the latest updated model into the model predictive control facilitates optimization of the control inputs and enhances the interactive capacity of digital twins. The advantages of the proposed algorithm are demonstrated through one industrial case study and two simulation examples.
</description>
<pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165201</guid>
<dc:date>2026-03-09T00:00:00Z</dc:date>
</item>
<item>
<title>FastComposer: Tuning-Free Multi-subject Image Generation with Localized Attention</title>
<link>https://hdl.handle.net/1721.1/165200</link>
<description>FastComposer: Tuning-Free Multi-subject Image Generation with Localized Attention
Xiao, Guangxuan; Yin, Tianwei; Freeman, William T.; Durand, Frédo; Han, Song
Diffusion models excel at text-to-image generation, especially in subject-driven generation for personalized images. However, existing methods are inefficient due to the subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation as they often blend identity among subjects. We present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity blending problem in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects to localize to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting. FastComposer proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×–2500× speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation. Code, model, and dataset are available at https://github.com/mit-han-lab/fastcomposer.
</description>
<pubDate>Thu, 19 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165200</guid>
<dc:date>2024-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Mechanisms of Liver Metastasis: Connecting Biology, Biomarkers, and Outcomes</title>
<link>https://hdl.handle.net/1721.1/165199</link>
<description>Molecular Mechanisms of Liver Metastasis: Connecting Biology, Biomarkers, and Outcomes
Wagner, Doris; Balzer, Felix; Margonis, Georgios Antonios
In our Special Issue titled “Molecular Mechanisms of Liver Metastases,” we aimed to attract articles that connect metastasis mechanisms and biomarkers with clinical disease characteristics and patient outcomes. Starting with the review paper by the Ohio State group, the authors provide valuable insight into the complex, multi-step molecular mechanisms underpinning liver metastasis [1]. Moving from pathogenesis to the prognostic and predictive role of clinically actionable proxies of tumor biology, the University of California, San Francisco (UCSF) group reviews current evidence on the prognostic and predictive relevance of key alterations—including RAS, BRAF, mismatch repair (MMR) genes, TP53, and SMAD4—in surgically treated colorectal liver metastases (CRLMs) [2]. For RAS and BRAF in particular—given the breadth of existing evidence—the authors emphasize the largest and/or the most recently published studies to provide the most robust and contemporary overview for readers.
</description>
<pubDate>Thu, 05 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165199</guid>
<dc:date>2026-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Stability and Reactivity of Alternative Nucleobases in Concentrated Sulfuric Acid</title>
<link>https://hdl.handle.net/1721.1/165198</link>
<description>Stability and Reactivity of Alternative Nucleobases in Concentrated Sulfuric Acid
Huang, Jingcheng; Seager, Sara; Seager, Maxwell D.; Petkowski, Janusz J.
Recent findings demonstrate that concentrated sulfuric acid supports rich organic chemistry, including the stability of the canonical DNA bases adenine, thymine, guanine and cytosine. Yet, due to full protonation in concentrated sulfuric acid, these bases may not pair as effectively as they do in water. We are therefore motivated to study nucleic acid bases that pair via hydrophobic and van der Waals interactions instead of canonical hydrogen bonding. Here, we investigate the stability of 14 selected, commercially available alternative nucleobases in concentrated sulfuric acid to evaluate their potential for forming DNA-like polymers in this solvent. The reactivity of compounds 1–14 has not been previously investigated in concentrated sulfuric acid. We incubate the selected compounds in 98% and 81% w/w sulfuric acid and monitor their stability using 1H and 13C NMR spectroscopy over 3 weeks at room temperature. In 98% w/w sulfuric acid, six bases—benzo[c][1,2,5]thiadiazole (1), 2,2′-bipyridine (2), 1,1′-biphenyl (3), 1-methoxy-3-methylbenzene (MMO2) (7), 1-chloro-3-methoxybenzene (ClMO) (13), and 2,4-difluorotoluene (14)—remain soluble and stable with no detectable degradation. A few compounds show non-destructive reactivity, such as sulfonation (compound 3) or H/D exchange (compounds 7, 13, 14). The other compounds react rapidly or are insoluble in 98% w/w sulfuric acid. In 81% w/w sulfuric acid, only compounds 1 and 2 remain stable and soluble, while other selected compounds are insoluble or unstable. Our findings identify a subset of alternative bases stable in concentrated sulfuric acid, advancing efforts towards the design of an example genetic-like polymer in this unusual solvent. Our work further highlights sulfuric acid’s potential for supporting complex organic chemistry, with implications for astrobiology, planetary science of Venus and synthetic biology.
</description>
<pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165198</guid>
<dc:date>2026-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Examining the Uptake and Effectiveness of the Climate Justice Instructional Toolkit</title>
<link>https://hdl.handle.net/1721.1/165197</link>
<description>Examining the Uptake and Effectiveness of the Climate Justice Instructional Toolkit
Rabe, Chris; Schlegel, Madeline; Mitchell, Pearl
Climate justice is an accelerating global movement in which education plays a vital role in building awareness and understanding, and catalyzing action. Yet, many institutions of higher education face persistent barriers to integrating climate justice across disciplines, including limited institutional focus, lack of faculty expertise, and, more recently, heightened political constraints. In response, climate justice educators have proposed a range of strategies—such as conducting faculty workshops, hosting events, offering incentives, increasing funding, and developing curricular tools—though few studies have examined how curricular resources themselves can advance climate justice education. The Climate Justice Instructional Toolkit (CJIT) was developed to address this gap by providing accessible, adaptable materials to help instructors, even those with limited prior experience, incorporate climate and environmental justice into diverse disciplines. Using a novel survey tool across multiple educational contexts, this study applied a mixed methods approach to 76 survey responses to evaluate the CJIT’s effectiveness and user experiences among postsecondary instructors, K–12 teachers, and students. The findings showed broad positive feedback regarding the toolkit’s adaptability and utility while underscoring the need for stronger inclusion of Indigenous perspectives, greater socioeconomic accountability, and more attention to solutions and resilience. Respondents also emphasized that inclusive design, community engagement, and institutional support are critical for advancing climate justice integration and empowering educators in this work.
</description>
<pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165197</guid>
<dc:date>2026-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>GraphHash: Graph Clustering Enables Parameter Efficiency in Recommender Systems</title>
<link>https://hdl.handle.net/1721.1/165196</link>
<description>GraphHash: Graph Clustering Enables Parameter Efficiency in Recommender Systems
Wu, Xinyi; Loveland, Donald; Chen, Runjin; Liu, Yozen; Chen, Xin; Neves, Leonardo; Jadbabaie, Ali; Ju, Mingxuan; Shah, Neil; Zhao, Tong
Deep recommender systems rely heavily on large embedding tables to handle high-cardinality categorical features such as user/item identifiers, and face significant memory constraints at scale. To tackle this challenge, hashing techniques are often employed to map multiple entities to the same embedding and thus reduce the size of the embedding tables. Concurrently, graph-based collaborative signals have emerged as powerful tools in recommender systems, yet their potential for optimizing embedding table reduction remains unexplored. This paper introduces GraphHash, the first graph-based approach that leverages modularity-based bipartite graph clustering on user-item interaction graphs to reduce embedding table sizes. We demonstrate that the modularity objective has a theoretical connection to message-passing, which provides a foundation for our method. By employing fast clustering algorithms, GraphHash serves as a computationally efficient proxy for message-passing during preprocessing and a plug-and-play graph-based alternative to traditional ID hashing. Extensive experiments show that GraphHash substantially outperforms diverse hashing baselines on both retrieval and click-through-rate prediction tasks. In particular, GraphHash achieves on average a 101.52% improvement in recall when reducing the embedding table size by more than 75%, highlighting the value of graph-based collaborative information for model reduction. Our code is available at https://github.com/snap-research/GraphHash.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165196</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Mindful young brains and minds: a systematic review of the neural correlates of mindfulness-based interventions in youth</title>
<link>https://hdl.handle.net/1721.1/165195</link>
<description>Mindful young brains and minds: a systematic review of the neural correlates of mindfulness-based interventions in youth
Jande, Jovan; Treves, Isaac N.; Ely, Samantha L.; Gowatch, Leah C.; Carpenter, Carmen; Shampine, MacKenna; Webb, Christian A.; Sacchet, Matthew D.; Gabrielli, John D. E.; Marusak, Hilary A.
This systematic narrative review examines neuroimaging studies that investigated the neural correlates of mindfulness-based interventions in youth (ages 0–18). We extracted 13 studies with a total of 467 participants aged 5–18 years from the MEDLINE database on February 21st, 2024. These studies included both typically developing youth and those at risk of developing or recovering from neuropsychiatric disorders. Most studies (76.9%) utilized a pre-post intervention design, with resting-state functional magnetic resonance imaging (fMRI) being the most common imaging modality (46.1%), followed by task-based fMRI (38.4%), diffusion-weighted imaging (15.4%), and structural MRI (7.7%). Despite substantial heterogeneity across study designs and findings, several consistent patterns emerged. Resting-state fMRI studies generally reported increased functional connectivity within and between networks, notably involving the salience network, frontoparietal network, and default mode network. Studies using diffusion-weighted imaging indicated enhancements in white matter microstructural properties, supporting overall connectivity improvements. Several task-based fMRI studies identified decreased activation of the default mode network and heightened reactivity of the salience network during or after mindfulness practice, with real-time neurofeedback further amplifying these effects. While preliminary, the reviewed studies suggest that mindfulness interventions may alter both functional and structural connectivity and activity in youth, potentially bolstering self-regulation and cognitive control. Nonetheless, the variability in methodologies and small sample sizes restricts the generalizability of these results. Future research should prioritize larger and more diverse samples, and standardized mindfulness-based interventions to deepen our understanding of the neural mechanisms underlying mindfulness-based interventions in youth and to optimize their efficacy.
</description>
<pubDate>Mon, 03 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165195</guid>
<dc:date>2025-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Theory Instead of Experiment (TIE): A Creator Valuation System at Tencent</title>
<link>https://hdl.handle.net/1721.1/165194</link>
<description>Theory Instead of Experiment (TIE): A Creator Valuation System at Tencent
Huang, Lei; Zhang, Juanjuan
Experiments are informative but should be used judiciously as a costly resource. Well-constructed theory may serve as a substitute. We develop a “Theory Instead of Experiment” (TIE) framework and, in collaboration with Tencent, apply the framework to assess how much value (e.g., user clicks) each creator contributes to its WeChat Official Accounts Platform. This TIE application models content demand and supply upon the counterfactual departure of a creator. The demand model predicts user clicks based on estimated user preferences, while the supply model captures the platform’s content distribution response. Together, they predict how each creator influences user engagement through the platform’s content distribution strategy. We test the predictions of the TIE system with 168 experiments, each examining a different mix of creators and involving more than 9 million unique users. The TIE system and the experiments demonstrate a 97% correlation on the key performance metric (change in user clicks). Based on its low costs, high accuracy, granular output, and minimal latency, Tencent has deployed the TIE system as the default approach to creator valuation, assessing tens of millions of creators each day while avoiding a 2.5% user click loss associated with a typical experiment.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165194</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>The Seasonal Cycle of Atmospheric Heating and Temperature</title>
<link>https://hdl.handle.net/1721.1/165193</link>
<description>The Seasonal Cycle of Atmospheric Heating and Temperature
Donohoe, Aaron; Battisti, David S
The seasonal cycle of the heating of the atmosphere is divided into a component due to direct solar absorption in the atmosphere and a component due to the flux of energy from the surface to the atmosphere via latent, sensible, and radiative heat fluxes. Both observations and coupled climate models are analyzed. The vast majority of the seasonal heating of the northern extratropics (78% in the observations and 67% in the model average) is due to atmospheric shortwave absorption. In the southern extratropics, the seasonal heating of the atmosphere is entirely due to atmospheric shortwave absorption in both the observations and the models, and the surface heat flux opposes the seasonal heating of the atmosphere. The seasonal cycle of atmospheric temperature is surface amplified in the northern extratropics and nearly barotropic in the Southern Hemisphere; in both cases, the vertical profile of temperature reflects the source of the seasonal heating.&#13;
&#13;
In the northern extratropics, the seasonal cycle of atmospheric heating over land differs markedly from that over the ocean. Over the land, the surface energy fluxes complement the driving absorbed shortwave flux; over the ocean, they oppose the absorbed shortwave flux. This gives rise to large seasonal differences in the temperature of the atmosphere over land and ocean. Downgradient temperature advection by the mean westerly winds damps the seasonal cycle of heating of the atmosphere over the land and amplifies it over the ocean. The seasonal cycle in the zonal energy transport is 4.1 PW.&#13;
&#13;
Finally, the authors examine the change in the seasonal cycle of atmospheric heating in 11 models from phase 3 of the Coupled Model Intercomparison Project (CMIP3) due to a doubling of atmospheric carbon dioxide from preindustrial concentrations. The seasonal heating of the troposphere is everywhere enhanced by increased shortwave absorption by water vapor; it is reduced where sea ice has been replaced by ocean, which increases the effective heat storage reservoir of the climate system and thereby reduces the seasonal magnitude of energy fluxes between the surface and the atmosphere. As a result, the seasonal amplitude of temperature increases in the upper troposphere (where atmospheric shortwave absorption increases) and decreases at the surface (where the ice melts).
</description>
<pubDate>Mon, 15 Jul 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165193</guid>
<dc:date>2013-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic modeling explains the production dynamics of recombinant adeno-associated virus with the baculovirus expression vector system</title>
<link>https://hdl.handle.net/1721.1/165192</link>
<description>Mechanistic modeling explains the production dynamics of recombinant adeno-associated virus with the baculovirus expression vector system
Destro, Francesco; Joseph, John; Srinivasan, Prasanna; Kanter, Joshua M; Neufeld, Caleb; Wolfrum, Jacqueline M; Barone, Paul W; Springs, Stacy L; Sinskey, Anthony J; Cecchini, Sylvain; Kotin, Robert M; Braatz, Richard D
Current manufacturing processes for recombinant adeno-associated viruses (rAAVs) have less-than-desired yields and produce significant amounts of empty capsids. The increasing demand and the high cost of goods for rAAV-based gene therapies motivate the development of more efficient manufacturing processes. Recently, the US Food and Drug Administration (FDA) approved the first rAAV-based gene therapy product manufactured in the baculovirus expression vector system (BEVS), a technology that demonstrated production of high titers of full capsids. This work presents a first mechanistic model describing the key extracellular and intracellular phenomena occurring during baculovirus infection and rAAV maturation in the BEVS. The model predictions are successfully validated against in-house and literature experimental measurements of the vector genome and of structural and non-structural proteins collected during rAAV manufacturing in the BEVS with the TwoBac and ThreeBac constructs. A model-based analysis of the process is carried out to identify the bottlenecks that limit full capsid formation. Vector genome amplification is found to be the limiting step for rAAV production in Sf9 cells using either the TwoBac or ThreeBac system. In turn, vector genome amplification is hindered by limiting Rep78 levels. Transgene and non-essential baculovirus protein expression in the insect cell during rAAV manufacturing also negatively influences the rAAV production yields.
</description>
<pubDate>Thu, 14 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165192</guid>
<dc:date>2023-09-14T00:00:00Z</dc:date>
</item>
<item>
<title>PixBric: Precision Morphological Control of Pre-Stretched Fabrics Through Tessellated Primitive Geometries</title>
<link>https://hdl.handle.net/1721.1/165191</link>
<description>PixBric: Precision Morphological Control of Pre-Stretched Fabrics Through Tessellated Primitive Geometries
Youn, Hye Jun; Sara, Serena; Ishii, Hiroshi
3D printing patterns onto pre-stretched fabrics has emerged as a promising method for the rapid fabrication of self-shaping textiles. However, the influence of design parameters on morphing behavior remains insufficiently explored, often resulting in heuristic-driven decisions. This study introduces PixBric, a pixel-based textile methodology composed of primitive geometries designed to induce controlled morphing behaviors—such as undulation and bending—and mechanical properties including multistability. By parametrically adjusting geometry, thickness, and inter-pixel spacing, PixBric enables precise morphing outcomes. The framework includes a morphing simulation tool and a design chart linking geometric variables to deformation results. We also propose a streamlined fabrication protocol using biaxial pre-stretching with magnetic framing. These contributions establish a systematic design approach for the functional and interactive deployment of self-shaping textile structures.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165191</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Cellular anatomy of the mouse primary motor cortex</title>
<link>https://hdl.handle.net/1721.1/165190</link>
<description>Cellular anatomy of the mouse primary motor cortex
An essential step toward understanding brain function is to establish a structural framework with cellular resolution on which multi-scale datasets spanning molecules, cells, circuits and systems can be integrated and interpreted. Here, as part of the collaborative Brain Initiative Cell Census Network (BICCN), we derive a comprehensive cell type-based anatomical description of one exemplar brain structure, the mouse primary motor cortex, upper limb area (MOp-ul). Using genetic and viral labelling, barcoded anatomy resolved by sequencing, single-neuron reconstruction, whole-brain imaging and cloud-based neuroinformatics tools, we delineated the MOp-ul in 3D and refined its sublaminar organization. We defined around two dozen projection neuron types in the MOp-ul and derived an input–output wiring diagram, which will facilitate future analyses of motor control circuitry across molecular, cellular and system levels. This work provides a roadmap towards a comprehensive cellular-resolution description of mammalian brain architecture.
</description>
<pubDate>Wed, 06 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165190</guid>
<dc:date>2021-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Pair Crossing Number, Cutwidth, and Good Drawings on Arbitrary Point Sets</title>
<link>https://hdl.handle.net/1721.1/165113</link>
<description>Pair Crossing Number, Cutwidth, and Good Drawings on Arbitrary Point Sets
Pi, Oriol S.
Determining whether there exists a graph such that its crossing number and pair crossing number are distinct is an important open problem in geometric graph theory. We show that cr(G) = O(pcr(G)^(3/2)) for every graph G, improving the previous best bound by a logarithmic factor. Answering a question of Pach and Tóth, we prove that the bisection width (and, in fact, the cutwidth as well) of a graph G with degree sequence d_1, d_2, …, d_n satisfies bw(G) = O(√(pcr(G) + ∑_{k=1}^{n} d_k²)). Then we show that there is a constant C ≥ 1 such that the following holds: for any graph G of order n and any set S of at least n^C points in general position on the plane, G admits a straight-line drawing which maps the vertices to points of S and has no more than O(log n · (pcr(G) + ∑_{k=1}^{n} d_k²)) crossings. Our proofs rely on a slightly modified version of a separator theorem for string graphs by Lee, which might be of independent interest.
</description>
<pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165113</guid>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Emergence of a proton exchange-based isomerization and lactonization mechanism in the plant coumarin synthase COSY</title>
<link>https://hdl.handle.net/1721.1/165112</link>
<description>Emergence of a proton exchange-based isomerization and lactonization mechanism in the plant coumarin synthase COSY
Kim, Colin Y; Mitchell, Andrew J; Kastner, David W; Albright, Claire E; Gutierrez, Michael A; Glinkerman, Christopher M; Kulik, Heather J; Weng, Jing-Ke
Plants contain rapidly evolving specialized enzymes that support the biosynthesis of functionally diverse natural products. In coumarin biosynthesis, a BAHD acyltransferase-family enzyme COSY was recently discovered to accelerate coumarin formation as the only known BAHD enzyme to catalyze an intramolecular acyl transfer reaction. Here we investigate the structural and mechanistic basis for COSY’s coumarin synthase activity. Our structural analyses reveal an unconventional active-site configuration adapted to COSY’s specialized activity. Through mutagenesis studies and deuterium exchange experiments, we identify a unique proton exchange mechanism at the α-carbon of the o-hydroxylated trans-hydroxycinnamoyl-CoA substrates during the catalytic cycle of COSY. Quantum mechanical cluster modeling and molecular dynamics further support this key mechanism for lowering the activation energy of the rate-limiting trans-to-cis isomerization step in coumarin production. This study unveils an unconventional catalytic mechanism mediated by a BAHD-family enzyme, and sheds light on COSY’s evolutionary origin and its recruitment to coumarin biosynthesis in eudicots.
</description>
<pubDate>Fri, 03 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165112</guid>
<dc:date>2023-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Identification and Validation of Cyclic Peptides with Mucin-Selective, Location-Specific Binding in the Gastrointestinal Tract</title>
<link>https://hdl.handle.net/1721.1/165111</link>
<description>Identification and Validation of Cyclic Peptides with Mucin-Selective, Location-Specific Binding in the Gastrointestinal Tract
Subramanian, Deepak A; Chin, Austin; Shi, Yunhua; Liu, Gary W; Langer, Robert; Traverso, Giovanni
Oral drug delivery is a widely preferred method of drug administration due to its ease of use and convenience for patients. Localization of drug release in the gastrointestinal (GI) tract is important to treat localized diseases and maximize drug absorption. However, achieving drug localization in the dynamic GI tract is challenging. To address this challenge, we leveraged the geographic diversity of the GI tract by targeting its mucus layers, which coat the epithelial surfaces. These layers, composed of mucin glycoproteins, are synthesized with unique chemical compositions and expressed in different regions, making them ideal targets for drug localization. In this article, we identify cyclic peptides that bind selectively to MUC2 (in the intestines) and MUC5AC (in the stomach), serving as targeting ligands to these regions of the GI tract. We demonstrate the effectiveness of these peptides through in vitro, ex vivo, and in vivo experiments, showing that incorporating these targeting ligands can increase binding and selectivity 2-fold to the desired regions, thus potentially overcoming challenges with localizing drug distribution in oral delivery. These results indicate that cyclic peptides can be used to localize drug cargoes at certain sites in the body compared to free drugs.
</description>
<pubDate>Fri, 11 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165111</guid>
<dc:date>2025-04-11T00:00:00Z</dc:date>
</item>
<item>
<title>Pd-Catalyzed Amination of Base-Sensitive Five-Membered Heteroaryl Halides with Aliphatic Amines</title>
<link>https://hdl.handle.net/1721.1/165110</link>
<description>Pd-Catalyzed Amination of Base-Sensitive Five-Membered Heteroaryl Halides with Aliphatic Amines
Reichert, Elaine C; Feng, Kaibo; Sather, Aaron C; Buchwald, Stephen L
We report a versatile and functional-group-tolerant method for the Pd-catalyzed C–N cross-coupling of five-membered heteroaryl halides with primary and secondary amines, an important but underexplored transformation. Coupling reactions of challenging, pharmaceutically relevant heteroarenes, such as 2-H-1,3-azoles, are reported in good-to-excellent yields. High-yielding coupling reactions of a wide set of five-membered heteroaryl halides with sterically demanding α-branched cyclic amines and acyclic secondary amines are reported for the first time. The key to the broad applicability of this method is the synergistic combination of (1) the moderate-strength base NaOTMS, which limits base-mediated decomposition of sensitive five-membered heteroarenes that ultimately leads to catalyst deactivation, and (2) the use of a GPhos-supported Pd catalyst, which effectively resists heteroarene-induced catalyst deactivation while promoting efficient coupling, even for challenging and sterically demanding amines. Cross-coupling reactions between a wide variety of five-membered heteroaryl halides and amines are demonstrated, including eight examples involving densely functionalized medicinal chemistry building blocks.
</description>
<pubDate>Tue, 31 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165110</guid>
<dc:date>2023-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Abiotic peptides as carriers of information for the encoding of small-molecule library synthesis</title>
<link>https://hdl.handle.net/1721.1/165109</link>
<description>Abiotic peptides as carriers of information for the encoding of small-molecule library synthesis
Rössler, Simon L; Grob, Nathalie M; Buchwald, Stephen L; Pentelute, Bradley L
Encoding small-molecule information in DNA has been leveraged to accelerate the discovery of ligands for therapeutic targets such as proteins. However, oligonucleotide-based encoding is hampered by inherent limitations of information stability and density. In this study, we establish abiotic peptides for next-generation information storage and apply them for the encoding of diverse small-molecule synthesis. The chemical stability of the peptide-based tag allows the use of palladium-mediated reactions to efficiently synthesize peptide-encoded libraries (PELs) with broad chemical diversity and high purity. We demonstrate the successful de novo discovery of small-molecule protein ligands from PELs by affinity selection against carbonic anhydrase IX and the oncogenic protein targets BRD4(1) and MDM2. Collectively, this work establishes abiotic peptides as carriers of information for the encoding of small-molecule synthesis, leveraged herein for the discovery of protein ligands.
</description>
<pubDate>Thu, 02 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165109</guid>
<dc:date>2023-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>Studying Regioisomer Formation in the Pd‐Catalyzed Fluorination of Cyclic Vinyl Triflates: Evidence for in situ Ligand Modification</title>
<link>https://hdl.handle.net/1721.1/165108</link>
<description>Studying Regioisomer Formation in the Pd‐Catalyzed Fluorination of Cyclic Vinyl Triflates: Evidence for in situ Ligand Modification
Ye, Yuxuan; Kim, Seoung‐Tae; King, Ryan P; Baik, Mu‐Hyun; Buchwald, Stephen L
Pd-catalyzed nucleophilic fluorination reactions are important methods for the synthesis of fluoroarenes and fluoroalkenes. However, these reactions can generate a mixture of regioisomeric products that are often difficult to separate. While investigating the Pd-catalyzed fluorination of cyclic vinyl triflates, we observed that the addition of a substoichiometric quantity of TESCF3 significantly improved the regioselectivity of the reaction. Herein, we report a combined experimental and computational study on the mechanism of this transformation focusing on the role of TESCF3. The poor regioselectivity of the reaction in the absence of additives results from the formation of LPd-cyclohexyne complexes (L=biaryl monophosphine ligand). When TESCF3 is added to the reaction mixture, the generation of the Pd-cyclohexyne complexes is diminished by an unexpected pathway involving the dearomatization of the ligand by nucleophilic attack from a trifluoromethyl anion (CF3−).
</description>
<pubDate>Sun, 12 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165108</guid>
<dc:date>2023-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>Room-Temperature Cu-Catalyzed Amination of Aryl Bromides Enabled by DFT-Guided Ligand Design</title>
<link>https://hdl.handle.net/1721.1/165107</link>
<description>Room-Temperature Cu-Catalyzed Amination of Aryl Bromides Enabled by DFT-Guided Ligand Design
Kim, Seoung-Tae; Strauss, Michael J; Cabré, Albert; Buchwald, Stephen L
Ullmann-type C–N coupling reactions represent an important alternative to well-established Pd-catalyzed approaches due to the differing reactivity and the lower cost of Cu. While the design of anionic Cu ligands, particularly those by Ma, has enabled the coupling of various classes of aryl halides and alkyl amines, most methods require conditions that can limit their utility on complex substrates. Herein, we disclose the development of anionic N1,N2-diarylbenzene-1,2-diamine ligands that promote the Cu-catalyzed amination of aryl bromides under mild conditions. Guided by DFT calculations, these ligands were designed to (1) increase the electron density on Cu, thereby increasing the rate of oxidative addition of aryl bromides, and (2) stabilize the active anionic CuI complex via a π-interaction. Under optimized conditions, structurally diverse aryl and heteroaryl bromides and a broad range of alkyl amine nucleophiles, including pharmaceuticals bearing multiple functional groups, were efficiently coupled at room temperature. Combined computational and experimental studies support a mechanism of C–N bond formation that follows a catalytic cycle akin to the well-explored Pd-catalyzed variants. Modification of the ligand structure to include a naphthyl residue resulted in a lower energy barrier to oxidative addition, providing a 30-fold rate increase relative to what is seen with other ligands. Collectively, these results establish a new class of anionic ligands for Cu-catalyzed C–N couplings, which we anticipate may be extended to other Cu-catalyzed C–heteroatom and C–C bond-forming reactions.
</description>
<pubDate>Thu, 16 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165107</guid>
<dc:date>2023-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>Stereoselective Synthesis of Trisubstituted Alkenes via Copper Hydride-Catalyzed Alkyne Hydroalkylation</title>
<link>https://hdl.handle.net/1721.1/165106</link>
<description>Stereoselective Synthesis of Trisubstituted Alkenes via Copper Hydride-Catalyzed Alkyne Hydroalkylation
Kutateladze, Dennis A; Mai, Binh Khanh; Dong, Yuyang; Zhang, Yu; Liu, Peng; Buchwald, Stephen L
Alkenes are ubiquitous in organic chemistry, yet many classes of alkenes remain challenging to access by current synthetic methodology. Herein, we report a copper hydride-catalyzed approach for the synthesis of Z-configured trisubstituted alkenes with high stereo- and regioselectivity via alkyne hydroalkylation. A DTBM-dppf-supported Cu catalyst was found to be optimal, providing a substantial increase in product yield compared to reactions conducted with dppf as the ligand. DFT calculations show that the DTBM substitution leads to the acceleration of alkyne hydrocupration through combined ground and transition state effects related to preventing catalyst dimerization and enhancing catalyst–substrate dispersion interactions, respectively. Alkyne hydroalkylation was successfully demonstrated with methyl and larger alkyl tosylate electrophiles to produce a variety of (hetero)aryl-substituted alkenes in moderate to high yields with complete selectivity for the Z stereochemically configured products. In the formation of the key C–C bond, computational studies revealed a direct SN2 pathway for alkylation of the vinylcopper intermediate with in situ-formed alkyl iodides.
</description>
<pubDate>Fri, 04 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165106</guid>
<dc:date>2023-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Room‐Temperature Copper‐Catalyzed Etherification of Aryl Bromides</title>
<link>https://hdl.handle.net/1721.1/165105</link>
<description>Room‐Temperature Copper‐Catalyzed Etherification of Aryl Bromides
Strauss, Michael J; Greaves, Megan E; Kim, Seoung‐Tae; Teijaro, Christiana N; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We disclose the development of a Cu-catalyzed C−O coupling method utilizing a new N1,N2-diarylbenzene-1,2-diamine ligand, L8. Under optimized reaction conditions, structurally diverse aryl and heteroaryl bromides underwent efficient coupling with a variety of alcohols at room temperature using an L8-based catalyst. Notably, the L8-derived catalyst exhibited enhanced activity when compared to the L4-based system previously disclosed for C−N coupling, namely the ability to functionalize aryl bromides containing acidic functional groups. Mechanistic studies demonstrate that C−O coupling utilizing L8 ⋅ Cu involves rate-limiting alkoxide transmetallation, resulting in a mechanism of C−O bond formation that is distinct from previously described Pd-, Cu-, or Ni-based systems. This lower-energy pathway leads to rapid C−O bond formation, a 7-fold rate increase relative to what is seen with other ligands. The results presented in this report overcome limitations in previously described C−O coupling methods and introduce a new ligand that we anticipate may be useful in other Cu-catalyzed C-heteroatom bond-forming reactions.
</description>
<pubDate>Thu, 15 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165105</guid>
<dc:date>2024-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>CuH-Catalyzed Regio- and Enantioselective Formal Hydroformylation of Vinyl Arenes</title>
<link>https://hdl.handle.net/1721.1/165104</link>
<description>CuH-Catalyzed Regio- and Enantioselective Formal Hydroformylation of Vinyl Arenes
Garhwal, Subhash; Dong, Yuyang; Mai, Binh Khanh; Liu, Peng; Buchwald, Stephen L
A highly enantioselective formal hydroformylation of vinyl arenes enabled by copper hydride (CuH) catalysis is reported. Key to the success of the method was the use of the mild Lewis acid zinc triflate to promote the formation of oxocarbenium electrophiles through the activation of diethoxymethyl acetate. Using the newly developed protocol, a broad range of vinyl arene substrates underwent efficient hydroacetalization reactions to provide access to highly enantioenriched α-aryl acetal products in good yields with exclusively branched regioselectivity. The acetal products could be converted to the corresponding aldehydes, alcohols, and amines with full preservation of the enantiomeric purity. Density functional theory studies support that the key C–C bond-forming event between the alkyl copper intermediate and the oxocarbenium electrophile takes place with inversion of configuration of the Cu–C bond in a backside SE2-type mechanism.
</description>
<pubDate>Thu, 09 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165104</guid>
<dc:date>2024-05-09T00:00:00Z</dc:date>
</item>
<item>
<title>Cu-Catalyzed Amination of Base-Sensitive Aryl Bromides and the Chemoselective N- and O-Arylation of Amino Alcohols</title>
<link>https://hdl.handle.net/1721.1/165103</link>
<description>Cu-Catalyzed Amination of Base-Sensitive Aryl Bromides and the Chemoselective N- and O-Arylation of Amino Alcohols
Strauss, Michael J; Liu, Kaylee X; Greaves, Megan E; Dahl, Jakob C; Kim, Seoung-Tae; Wu, Yong-Jin; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We report a general and functional-group-tolerant method for the Cu-catalyzed amination of base-sensitive aryl bromides, including substrates possessing acidic functional groups and small five-membered heteroarenes. The results presented herein substantially expand the scope of Cu-catalyzed C–N coupling reactions. The combination of L8, an anionic N1,N2-diarylbenzene-1,2-diamine ligand, along with the mild base NaOTMS leads to the formation of a stable yet reactive catalyst that resists deactivation from coordination to heterocycles or charged intermediates. This system enables the use of low catalyst and ligand loadings. Exploiting the differences in nucleophile deprotonation in C–O and C–N coupling reactions catalyzed by Cu·L8, we developed a method to chemoselectively N- and O-arylate a variety of amino alcohol substrates. Employing NaOt-Bu as the base resulted exclusively in C–O coupling when the amino alcohols featured primary alcohols and more hindered amines or aniline groups. Utilizing NaOTMS made it possible to completely override the steric-based selectivity of these reactions and exclusively promoted C–N coupling regardless of the structure of the amino alcohol. The ability to invert the observed chemoselectivity is distinct from previously described methods that require protecting group manipulations or rely entirely on steric effects to control reactivity. These results substantially expand the scope of Cu-catalyzed C–N coupling reactions using N1,N2-diarylbenzene-1,2-diamine ligands and introduce a new chemoselective method to arylate amino alcohols.
</description>
<pubDate>Wed, 26 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165103</guid>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Copper-Catalyzed Amination of Aryl Chlorides under Mild Reaction Conditions</title>
<link>https://hdl.handle.net/1721.1/165102</link>
<description>Copper-Catalyzed Amination of Aryl Chlorides under Mild Reaction Conditions
Ai, Han-Jun; Kim, Seoung-Tae; Liu, Cecilia; Buchwald, Stephen L
We report a mild method for the copper-catalyzed amination of aryl chlorides. Key to the success of the method was the use of highly sterically encumbered N1,N2-diaryl diamine ligands which resist catalyst deactivation, allowing reactions to proceed at significantly lower temperatures and with a broader scope than current protocols. A sequence of highly chemoselective C-N and C-O cross-coupling reactions was demonstrated, and mechanistic studies indicate that oxidative addition of the Cu catalyst to the aryl chlorides is rate-limiting. We anticipate that the design principles disclosed herein will help motivate further advances in Cu-catalyzed transformations of aryl chlorides.
</description>
<pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165102</guid>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Deactivation-Resistant Dialkylbiarylphosphine Ligand for Pd-Catalyzed Arylation of Secondary Amines</title>
<link>https://hdl.handle.net/1721.1/165101</link>
<description>Development of a Deactivation-Resistant Dialkylbiarylphosphine Ligand for Pd-Catalyzed Arylation of Secondary Amines
Feng, Kaibo; Raguram, Elaine Reichert; Howard, James R; Peters, Ellyn; Liu, Cecilia; Sigman, Matthew S; Buchwald, Stephen L
Despite the prevalence of N-heteroarenes in small-molecule pharmaceuticals, Pd-catalyzed C-N cross-coupling reactions of aryl halides and amines containing these rings remain challenging due to their ability to displace the supporting ligand via coordination to the metal center. To address this limitation, we report the development of a highly robust Pd catalyst supported by a new dialkylbiarylphosphine ligand, FPhos. The FPhos-supported catalyst effectively resists N-heteroarene-mediated catalyst deactivation to readily promote C-N coupling between a wide variety of Lewis-basic aryl halides and secondary amines, including densely functionalized pharmaceuticals. Mechanistic and structural investigations, as well as principal component analysis and density functional theory, elucidated two key design features that enable FPhos to overcome the limitations of previous ligands. First, the ligated Pd complex is stabilized through its conformational preference for the O-bound isomer, which likely resists coordination by N-heteroarenes. Second, 3',5'-disubstitution on the non-phosphorus-containing ring of FPhos creates the ideal steric environment around the Pd center, which facilitates binding by larger secondary amines while mitigating the formation of off-cycle palladacycle species.
</description>
<pubDate>Tue, 17 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165101</guid>
<dc:date>2024-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetic Modeling Enables Understanding of Off-Cycle Processes in Pd-Catalyzed Amination of Five-Membered Heteroaryl Halides</title>
<link>https://hdl.handle.net/1721.1/165100</link>
<description>Kinetic Modeling Enables Understanding of Off-Cycle Processes in Pd-Catalyzed Amination of Five-Membered Heteroaryl Halides
Raguram, Elaine Reichert; Dahl, Jakob C; Jensen, Klavs F; Buchwald, Stephen L
The mechanism of Pd-catalyzed amination of five-membered heteroaryl halides was investigated by integrating experimental kinetic analysis with kinetic modeling through predictive testing and likelihood ratio analysis, revealing an atypical productive coupling pathway and multiple off-cycle events. The GPhos-supported Pd catalyst, along with the moderate-strength base NaOTMS, was previously found to promote efficient coupling between five-membered heteroaryl halides and secondary amines. However, slight deviations from the optimal concentration, temperature, and/or solvent resulted in significantly lower yields, contrary to typical reaction optimization trends. We found that the coupling of 4-bromothiazole with piperidine proceeds through an uncommon mechanism in which the NaOTMS base, rather than the amine, binds first to the oxidative addition complex; the resulting OTMS-bound Pd species is the resting state. Formation of the Pd-amido complex via base/amine exchange was identified as the turnover-limiting step, unlike other reported catalyst systems for which reductive elimination is turnover-limiting. We determined that the amine-bound Pd complex, usually an on-cycle intermediate, is instead a reversibly generated off-cycle species, and that base-mediated decomposition of 4-bromothiazole is the primary irreversible catalyst deactivation pathway. Predictive testing and kinetic modeling were key to the identification of these off-cycle processes, providing insight into minor mechanistic pathways that are difficult to observe experimentally. Collectively, this report reveals the unique enabling features of the Pd-GPhos/NaOTMS system, implementing mechanistic insights to improve the yields of particularly challenging coupling reactions. Moreover, these findings highlight the utility of applying predictive tests to kinetic models for the rapid evaluation of mechanistic possibilities in small-molecule catalytic systems.
</description>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165100</guid>
<dc:date>2024-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of the response of radiochromic film to quasi-monoenergetic x rays through a cross-calibration with image plates</title>
<link>https://hdl.handle.net/1721.1/165099</link>
<description>Characterization of the response of radiochromic film to quasi-monoenergetic x rays through a cross-calibration with image plates
Buschmann, BI; Cufari, M; Vanderloo, N; Vargas, J; Foo, BC; DeVault, A; Dannhoff, SG; Evans, TE; Johnson, TM; Kunimune, JH; Lawrence, Y; Pearcy, JA; Reichelt, BL; Wink, CW; Russell, L; Gatu Johnson, M; Petrasso, RD; Frenje, JA
Radiochromic film (RCF) and image plates (IPs) are both commonly used detectors in diagnostics fielded at inertial confinement fusion (ICF) and high-energy-density physics (HEDP) research facilities. Due to the intense x-ray background in all ICF/HEDP experiments, accurately calibrating the optical density of RCF as a function of x-ray dose, and the photostimulated luminescence per photon of IPs as a function of x-ray energy, is necessary for interpreting experimental results. Various measurements of the sensitivity curve of different IPs to x rays have been performed [Izumi et al., Proc. SPIE 8850, 885006 (2013) and Rosenberg et al., Rev. Sci. Instrum. 90(1), 013506 (2019)]; however, calibrating RCF is a tedious process that depends on factors such as the orientation in which the RCF is scanned in the film scanner and the batch of RCF used. These issues can be mitigated by cross-calibrating RCF with IPs to enable the use of IPs for the determination of dose on the RCF without scanning the RCF. Here, the first cross-calibration of RCF with IPs to quasi-monoenergetic titanium, copper, and molybdenum K-line x rays is presented. It is found that the IP-inferred dose rates on the RCF for the Ti and Mo x rays agree well with the measured dose rates, while the IP-inferred dose rate for the Cu x rays is larger than the measured dose rate by ∼2×. Explanations for this discrepancy and plans for future work are discussed.
</description>
<pubDate>Fri, 20 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165099</guid>
<dc:date>2024-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>Determination of the response for the National Ignition Facility particle time of flight (PTOF) detector using single particle counting</title>
<link>https://hdl.handle.net/1721.1/165098</link>
<description>Determination of the response for the National Ignition Facility particle time of flight (PTOF) detector using single particle counting
Lawrence, Y; Reichelt, BL; Wink, CW; Rigon, G; Gatu Johnson, M; Li, CK; Frenje, JA
The Particle Time of Flight (PTOF) detector is a chemical vapor deposition diamond-based detector used to measure bang times in low-yield (≲ 10¹⁵ neutrons) experiments at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL). Historically, the impulse response for PTOF diamond detectors has been obtained from x-ray timing shots on the NIF and shots on the MegaRay pulsed electron accelerator at LLNL. The impulse response may alternatively be obtained using single particle interactions with the detector, at substantially less cost and higher frequency compared to NIF timing shots, which typically occur months apart. Here, the response of a PTOF detector setup is characterized by statistically averaging a large number of single particle waveforms. A high-fidelity instrument response function can be constructed in this way. This is confirmed by comparison of the single particle counting-constructed response to the impulse response function measured for the same detector at LLNL’s MegaRay facility.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165098</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a compact magnetic spectrometer for use at the OMEGA Laser Facility and the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/165097</link>
<description>Development of a compact magnetic spectrometer for use at the OMEGA Laser Facility and the National Ignition Facility
Pearcy, JA; Russell, L; Kabadi, NV; Johnson, TM; Adrian, PA; Gatu-Johnson, M; Casco, E; Palmisano, K; Gates, G; Burgett, T; Scott, M; Petrasso, RD; Li, CK; Frenje, J
Measurement of proton spectra is an important diagnostic for a variety of high energy density physics experiments. Current diagnostics are either not designed to capture the spectrum of low-energy protons or are unsuitable for high debris experiments. To bridge the gap, a new CR-39 based compact magnetic spectrometer (MagSpec) has been developed to measure proton spectra in the 1–20 MeV energy range, with a particular focus on the low-energy (1–6 MeV) spectrum, for use in experiments at the OMEGA Laser Facility and the National Ignition Facility (NIF). In the MagSpec diagnostic, protons of different energies are dispersed as they pass through a magnetic field before impinging on a differentially filtered CR-39 surface, resulting in a spatial distribution of CR-39 tracks that corresponds to the energy spectrum. In this paper, we discuss details of the design and implementation of MagSpec on the NIF and OMEGA.
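The dispersion principle described above can be made concrete with the gyroradius of a proton in the spectrometer's field (our notation, a simplified non-relativistic sketch for illustration, not the authors' design equations):

```latex
% Radius of curvature of a proton of charge q, mass m_p, kinetic energy E
% in a magnetic field B (non-relativistic approximation, adequate at 1--20 MeV):
r_g \;=\; \frac{p}{qB} \;=\; \frac{\sqrt{2\, m_p E}}{qB}
```

Lower-energy protons thus curve more strongly and land closer along the detector plane, which is what maps CR-39 track position to proton energy.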
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165097</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Temperature stabilization of a lab space at 10 mK-level over a day</title>
<link>https://hdl.handle.net/1721.1/165096</link>
<description>Temperature stabilization of a lab space at 10 mK-level over a day
Fife, Dylan; Shin, Dong-Chel; Sudhir, Vivishek
Temperature fluctuations over long time scales (≳ 1 h) are an insidious problem for precision measurements. In optical laboratories, the primary effect of temperature fluctuations is drifts in optical circuits over spatial scales of a few meters and temporal scales extending beyond a few minutes. We present a lab-scale environment temperature control system approaching 10 mK-level temperature instability across a lab for integration times above an hour and extending to a day. This is achieved by passive isolation of the laboratory space from the building walls using a circulating air gap and an active control system feeding back to heating coils at the outlet of the laboratory’s Heating-Ventilation-Air-Conditioning (HVAC) unit. These techniques together result in 20 dB suppression of the temperature power spectrum across the lab at 10⁻⁴ Hz—approaching the limit set by statistical coherence of the temperature field—and 10 mK Allan deviation around 15 °C after an hour of averaging, which is an order of magnitude better than any previous report for a full laboratory.
</description>
<pubDate>Fri, 27 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165096</guid>
<dc:date>2024-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Incentives to Comply with the Minimum Wage in the United States and the United Kingdom</title>
<link>https://hdl.handle.net/1721.1/165095</link>
<description>Incentives to Comply with the Minimum Wage in the United States and the United Kingdom
Stansbury, Anna
There is substantial evidence of minimum wage non-compliance in the United States and the United Kingdom. In this article, the author compiles new, comprehensive data on the costs that minimum wage violators incur when non-compliance is detected. In both countries, the costs violators face are often little more than the money they saved by underpaying. To have an incentive to comply under existing penalty regimes, typical US firms would thus have to expect a 47% to 83% probability of detection by the Department of Labor (DOL), or a 25% probability of a successful Fair Labor Standards Act (FLSA) suit. In the United Kingdom, typical firms would have to expect a 44% to 56% probability of detection. Actual probabilities of detection are substantially lower than this for many firms and would likely remain so even with realistic increases in enforcement capacity. Improved enforcement alone is thus insufficient: Expected penalties must also substantially increase to ensure that most firms have an incentive to comply.
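The compliance incentive described above reduces to an expected-cost comparison (notation ours, not the article's): a firm that saves S by underpaying, and faces total cost C if violations are detected with probability p, has an incentive to comply only when

```latex
S \;\le\; p\,C
\quad\Longleftrightarrow\quad
p \;\ge\; \frac{S}{C}
```

When C is little more than the savings S, as the article documents for both countries, the required detection probability approaches 1, which is why increased enforcement alone cannot restore the incentive to comply.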
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165095</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Euclidean motion planning with graphs of geodesically convex sets</title>
<link>https://hdl.handle.net/1721.1/165094</link>
<description>Non-Euclidean motion planning with graphs of geodesically convex sets
Cohn, Thomas; Petersen, Mark; Simchowitz, Max; Tedrake, Russ
Computing optimal, collision-free trajectories for high-dimensional systems is a challenging and important problem. Sampling-based planners struggle with the dimensionality, whereas trajectory optimizers may get stuck in local minima due to inherent nonconvexities in the optimization landscape. The use of mixed-integer programming to encapsulate these nonconvexities and find globally optimal trajectories has recently shown great promise, thanks in part to tight convex relaxations and efficient approximation strategies that greatly reduce runtimes. These approaches were previously limited to Euclidean configuration spaces, precluding their use with mobile bases or continuous revolute joints. In this paper, we handle such scenarios by modeling configuration spaces as Riemannian manifolds, and we describe a procedure for reducing the zero-curvature case to a mixed-integer convex optimization problem. We further present a method for obtaining approximate solutions via piecewise-linear approximations that is applicable to manifolds of arbitrary curvature. We demonstrate our results on various robot platforms, including producing efficient collision-free trajectories for a PR2 bimanual mobile manipulator.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165094</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Extended state infrastructure power in an age of networked competition: The cases of Thailand and Taiwan</title>
<link>https://hdl.handle.net/1721.1/165093</link>
<description>Extended state infrastructure power in an age of networked competition: The cases of Thailand and Taiwan
Stokols, Andrew; Kollar, Justin
Scholars have highlighted the emergence of infrastructure as a key domain in the struggle over network centrality in what some call the ‘Second Cold War’ between the U.S. and China. We qualify this ‘infrastructural turn’ by drawing attention to the contingent nature of state infrastructural power as depending on key domestic firms that often serve as intermediaries between domestic infrastructure and global supply chains or international partners. Utilising empirical case studies based on field research conducted between 2021 and 2023 in Thailand and Taiwan, we analyse the ways in which state infrastructure power is exercised through strategic negotiation between national politics of the state and territorial investment decisions of multinational and major domestic firms within global supply chains. The study highlights how outcomes of state projects to foster connectivity or centrality in networks are shaped by contingent and sometimes ad-hoc coalitions between state agencies and domestic and multinational companies with their own interests and agency. In the case of Taiwan, the centrality of Taiwan Semiconductor Manufacturing Company (TSMC) to global supply chains makes it an important player amidst continued U.S.-China tension. In Thailand, CP Group’s connections to China have afforded it a role as an interlocutor between Thailand and China, allowing it to obtain state infrastructure contracts. Through comparative case studies the paper complicates both ‘globalist’ and methodologically nationalist perspectives on the ‘infrastructural turn’, and introduces the concept of ‘extended state infrastructural power’ to account for this complex, networked exercise of state authority.
</description>
<pubDate>Sun, 17 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165093</guid>
<dc:date>2024-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering Patterns in Overdose Deaths: An Analysis of Spike Identification in Fatal Drug Overdose Data in Massachusetts, 2017-2023</title>
<link>https://hdl.handle.net/1721.1/165092</link>
<description>Uncovering Patterns in Overdose Deaths: An Analysis of Spike Identification in Fatal Drug Overdose Data in Massachusetts, 2017-2023
Lee, Hannah; Otero-Leon, Daniel; Dong, Huiru; Stringfellow, Erin J; Jalali, Mohammad S
Objectives:&#13;
Yearly rolling aggregate trends or rates are commonly used to analyze trends in overdose deaths, but focusing on long-term trends can obscure short-term fluctuations (eg, daily spikes). We analyzed data on spikes in daily fatal overdoses and how various spike detection thresholds influence the identification of spikes.&#13;
Materials and Methods:&#13;
We used a spike detection algorithm to identify spikes among 16 660 drug-related overdose deaths (from any drug) reported in Massachusetts’ vital statistics from 2017 through 2023. We adjusted the parameters of the algorithm to define spikes in 3 distinct scenarios: deaths exceeding 2 adjusted moving SDs above the 7-, 30-, and 90-day adjusted moving average.&#13;
Results:&#13;
Our results confirmed the on-the-ground observation that there are days when many more people die of overdoses than would be expected based on fluctuations due to differences among people alone. We identified spikes on 5.8% to 20.6% of the days across the 3 scenarios, annually, constituting 11.1% to 31.6% of all overdose deaths. The absolute difference in percentage points of days identified as spikes varied from 5.2 to 11.5 between 7- and 30-day lags and from 0 to 4.6 between 30- and 90-day lags across years. When compared with the adjusted moving average across the 3 scenarios, in 2017 an average of 3.9 to 5.5 additional deaths occurred on spike days, while in 2023 the range was 3.7 to 6.0.&#13;
Practice Implications:&#13;
A substantial percentage of deaths occurred annually on spike days, highlighting the need for effectively monitoring short-term overdose trends. Moreover, our study serves as a foundational analysis for future research into exogenous events that may contribute to spikes in overdose deaths, aiming to prevent future deaths.
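A minimal sketch of the spike rule described in the Methods (hypothetical function and parameter names; the authors' adjusted moving statistics are simplified here to a plain trailing window):

```python
# Flag a day as a spike when its death count exceeds the trailing
# moving average by more than n_sd moving standard deviations.
# window corresponds to the 7-, 30-, or 90-day scenarios above.
from statistics import mean, stdev

def find_spikes(daily_deaths, window=30, n_sd=2.0):
    """Return indices of days whose count exceeds
    mean + n_sd * SD of the preceding `window` days."""
    spikes = []
    for i in range(window, len(daily_deaths)):
        prior = daily_deaths[i - window:i]
        if daily_deaths[i] > mean(prior) + n_sd * stdev(prior):
            spikes.append(i)
    return spikes
```

With a shorter window the baseline tracks recent fluctuations more closely, which is why the 7-day scenario identifies more spike days than the 90-day scenario.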
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165092</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiential learning amid disequilibrium: Attuning to student emotions</title>
<link>https://hdl.handle.net/1721.1/165091</link>
<description>Experiential learning amid disequilibrium: Attuning to student emotions
O’Flanagan, Sinead E; Jester, Michellana Y
Educators recognize the significant role emotions play in experiential learning (EL), particularly in how they support students through the inherent emotion work. However, the traditional design of experiential learning theory (ELT) in higher education (HE) often presupposes a stable environment, which overlooks the impact of unpredictable external factors on students’ emotions and learning. Despite its critical importance, emotion work in EL remains underexplored, with emotional dynamics often obscured or dismissed as isolated incidents. This study sheds light on the heightened emotional challenges that arise during periods of sustained disequilibrium, such as the COVID-19-induced restrictions. It provides novel insights into the dynamic interplay of emotions and learning progression within EL frameworks, drawing on perspectives from EL educators, advisors, and students. The research underscores the importance of emotion-focused dialogue, educator-student connection, and assimilating autonomy needs in EL amid disequilibrium. It also identifies often-neglected elements in EL frameworks, such as students “sharing struggles” or “valuing work efforts,” alongside educator strategies like “personal anchoring.” The findings contribute to ELT by proposing adaptive strategies that integrate emotion work into pedagogical frameworks, enhancing reflection and conceptualization practices, and extending ELT’s applicability across diverse educational and work-based management learning settings.
</description>
<pubDate>Thu, 09 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165091</guid>
<dc:date>2025-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Insulin Delivery Pumps for Human Spaceflight: Steps Toward an Accessible Space Future</title>
<link>https://hdl.handle.net/1721.1/165089</link>
<description>Insulin Delivery Pumps for Human Spaceflight: Steps Toward an Accessible Space Future
Horn, Kyle J; Hoffman, Jeffrey A
Commercially available insulin pumps for treatment of diabetes mellitus are currently not qualified to operate in the space environment. This work rigorously tested the fluid delivery performance of a Tandem t:slim X2 insulin pump in both micro- and hypergravity during a parabolic microgravity research flight. The parabolic research flight environment serves as an analogue to the types of transient gravitational loadings experienced during human-led missions, which provides a foundation to expand testing to suborbital and orbital flights in addition to other extreme environmental tests for wilderness dependency. The results of the flight data showed no significant difference between fluid delivery performance at 0, 1, and 2g acceleration regimes, nor at the transitions between gravity environments. Recommendations are made for further experimentation and qualification tests before use in future spaceflight missions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165089</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the nonlinear Eshelby inclusion problem and its isomorphic growth limit</title>
<link>https://hdl.handle.net/1721.1/165088</link>
<description>On the nonlinear Eshelby inclusion problem and its isomorphic growth limit
Bonavia, Joseph E; Chockalingam, S; Cohen, Tal
In the late 1950s, Eshelby’s linear solutions for the deformation field inside an ellipsoidal inclusion and, subsequently, the infinite matrix in which it is embedded were published. The solutions’ ability to capture the behavior of an orthotropically symmetric shaped inclusion made them invaluable in efforts to understand the behavior of defects within, and the micromechanics of, metals and other stiff materials throughout the rest of the 20th century. Over half a century later, we wish to understand the analogous effects of microstructure on the behavior of soft materials, both organic and synthetic, but in order to do so, we must venture beyond the linear limit, far into the nonlinear regime. However, no solutions to these analogous problems currently exist for non-spherical inclusions. In this work, we present an accurate semi-inverse solution for the elastic field in an isotropically growing spheroidal inclusion embedded in an infinite matrix, both made of the same incompressible neo-Hookean material. We also investigate the behavior of such an inclusion as it grows infinitely large, demonstrating the existence of a non-spherical asymptotic shape and an associated asymptotic pressure. We call this the isomorphic limit, and the associated pressure the isomorphic pressure.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165088</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Risk for Astronaut Involvement in In-Space Manufacturing: Analog Field Testing and Future Planetary Surface Procedures</title>
<link>https://hdl.handle.net/1721.1/165087</link>
<description>Evaluating Risk for Astronaut Involvement in In-Space Manufacturing: Analog Field Testing and Future Planetary Surface Procedures
MacRobbie, Madelyn; Patel, Palak B.
Introduction&#13;
A key objective of the NASA Artemis program, together with its international and commercial partners, is to establish a sustained human presence on the Moon. NASA aims to establish a lunar economy, increasing the need for infrastructure to support human habitation and facilitate growth. In-space manufacturing (ISM) coupled with in situ resource utilization (ISRU) can reduce launch mass and dependency on Earth resupply for long-term habitation, enabling rapid expansion. However, the space environment introduces unique challenges compared to Earth, such as the absence of an atmosphere, reduced gravity levels, and the high consequences of human-machine interactions, given the barrier to evacuating an astronaut injured in a manufacturing accident on the Moon; these challenges necessitate new safety standards for ISM processes.&#13;
Methods&#13;
This study proposes the application of a modified analytical hierarchy process (AHP) to identify high-risk aspects of crew procedures in molten regolith electrolysis (MRE) for both Earth-based analog testing and lunar production.&#13;
Results&#13;
The modified AHP assists in pinpointing areas needing hazard mitigation to protect crew members, enabling the improvement of safety standards for MRE in both environments.&#13;
Conclusion&#13;
Findings will inform the development of robust safety protocols for ISM, crucial for the success of NASA's Artemis missions and the broader goal of sustained human presence on the Moon and Mars.
</description>
<pubDate>Sat, 29 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165087</guid>
<dc:date>2025-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Origins of Face Responses in the Human Cortex: fNIRS and fMRI Evidence From Infants</title>
<link>https://hdl.handle.net/1721.1/165086</link>
<description>Origins of Face Responses in the Human Cortex: fNIRS and fMRI Evidence From Infants
Saxe, Rebecca; Kosakowski, Heather L
In adults, cortical regions in the fusiform face area (FFA), superior temporal sulcus (STS), and medial prefrontal cortex (MPFC) respond selectively to faces but underlie distinct perceptual and social processes. When do these regions, and their distinctive functions, develop? We reviewed recent studies of awake human infants’ cortical responses to faces using functional near-infrared spectroscopy (fNIRS) and functional MRI (fMRI). The results converged and do not support a slow, sequential posterior-to-anterior development of face-selective responses. Instead, cortical face-selective responses arise very early and simultaneously in infancy and may reflect distinctively social processes from the start.
</description>
<pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165086</guid>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Basketball Analytics Investment on National Basketball Association (NBA) Team Performance</title>
<link>https://hdl.handle.net/1721.1/165085</link>
<description>The Effect of Basketball Analytics Investment on National Basketball Association (NBA) Team Performance
Wang, Henry; Sarker, Arnab; Hosoi, Anette
In the National Basketball Association (NBA), basketball data and analytics is an area of significant financial investment for all 30 franchises, despite there being little quantitative evidence demonstrating analytics adoption actually improves team-level performance. This study seeks to measure the return on investment of analytics on NBA team success in a time of great demand for analytical front office personnel. Using a two-way fixed effects modeling approach, we identify the causal effect of analytics department headcounts on regular season wins using 12 years of season-level data for each team. We find a positive and statistically significant effect, suggesting clubs that invest more in analytics tend to outperform competitors when controlling for roster characteristics, injuries, difficulty of schedule, and team-specific and time-specific effects. This research contributes to the body of literature affirming the value of data analytics for organizational performance and supports current investments in analytics being made by NBA teams.
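The two-way fixed effects approach described above can be sketched as follows (our notation, not the authors'):

```latex
\text{Wins}_{it} \;=\; \beta\,\text{AnalyticsHeadcount}_{it}
  \;+\; \gamma^{\top} X_{it} \;+\; \alpha_i \;+\; \delta_t \;+\; \varepsilon_{it}
```

where the team fixed effects \alpha_i and season fixed effects \delta_t absorb time-invariant franchise differences and league-wide shocks, X_{it} collects the roster, injury, and schedule controls, and \beta is the effect of interest.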
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165085</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Workshop on Noninvasive Glucose Monitoring 2024</title>
<link>https://hdl.handle.net/1721.1/165084</link>
<description>Workshop on Noninvasive Glucose Monitoring 2024
Kang, Jeon Woong; Arnold, Mark A; Steenkamp, Devin; Tapsak, Mark A; Mäntele, Werner; Khang, Yoonho; Jue, Miyeon; So, Peter TC
This first workshop on noninvasive glucose monitoring (NIGM) was held at the Massachusetts Institute of Technology (MIT) on October 30, 2024. Six invited speakers, representing industry, academia, and clinics, gave presentations that covered (1) an overview of the NIGM technologies, (2) the state of the art in NIGM technologies, such as near-infrared (NIR), mid-infrared (IR), photoacoustic, and Raman spectroscopies, (3) minimally invasive implantable continuous glucose monitoring (CGM) sensors, and (4) a clinician’s perspective on the impact of the current CGM devices for patient care.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165084</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The community test tube of American civilization: Burt and Ethel Aginsky’s Social Science Field Laboratory, 1939–47</title>
<link>https://hdl.handle.net/1721.1/165083</link>
<description>The community test tube of American civilization: Burt and Ethel Aginsky’s Social Science Field Laboratory, 1939–47
Kapsalakis, Lauren
The Social Science Field Laboratory (SSFL, 1939–47), a field school in the Ukiah Valley that trained students in social scientific and anthropological methodology, sheds light on a period in anthropology when methods were shifting from objective empiricism to meaningful participation. As analytic tools for framing the study of society failed to keep pace with social change, sociopolitical trends inside and outside anthropology situated a valley in northern California as the opportune place to gather a sample of ‘American history in vitro’. Founded by Columbia-trained anthropologists Burt and Ethel Aginsky, the SSFL responded to trends inside and outside anthropology. As the Great Depression directed anthropologists’ attention to the study of practical, modern problems in complex American communities—such as race relations, immigration, modernization, and urbanization—funding agencies strengthened the relations between sociology and anthropology and encouraged the development of interdisciplinary approaches. The Aginskys conceived of the Ukiah Valley as a ‘community test-tube of American civilization’, where scientists from all disciplines ‘can come for a convenient sample of the United States, past and present’. In teaching students how to collect data in the field, the Aginskys pierced the widely held notion that ethnographic technique cannot be taught but must be experienced by the lone individual in the field.
</description>
<pubDate>Mon, 30 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165083</guid>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring How Organizational Actors Experience Evaluation and Its Influence: A Q-Methodological Study</title>
<link>https://hdl.handle.net/1721.1/165082</link>
<description>Exploring How Organizational Actors Experience Evaluation and Its Influence: A Q-Methodological Study
Kelly, Catherine
This article contributes to research on evaluation by examining how organizational actors respond to and use evaluation imposed on them within an evaluation system. Drawing on Henry and Mark's theory of evaluation influence, this study uses Q-methodology to explore how staff within English higher education providers experience evaluation and its influence on their widening participation practice and strategy decision-making. The experiences of organizational actors are examined and classified into four types: strategic practitioners, pragmatic practitioners, staff with indirect involvement in widening participation, and evaluation enthusiasts. Through analyzing these experiences, the findings illustrate the diverse ways organizational actors are influenced by evaluation within evaluation systems. To deepen our understanding of evaluation influence in the contexts of evaluation systems, this article recommends explicitly embedding organizational theories into future theories of evaluation influence and provides suggestions for future research on the topic.
</description>
<pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165082</guid>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping the Caregiver Experience: Predicting Dimensions of Caregiver Strain Through Task-Based Profiles</title>
<link>https://hdl.handle.net/1721.1/165081</link>
<description>Mapping the Caregiver Experience: Predicting Dimensions of Caregiver Strain Through Task-Based Profiles
Brady, Samantha; Ashebir, Sophia; D’Ambrosio, Lisa; Balmuth, Alexa; Felts, Adam; Lee, Chaiwoo
Objective: Family caregiving is a prevalent, diverse, and often challenging experience. We develop caregiving activity profiles to better understand how sets of care-tasks contribute to various aspects of strain.&#13;
Methods: Using diary data from a survey of 213 family caregivers in the U.S., we perform latent class analysis to group commonly occurring care-related tasks into activity profiles. We then use these classifications to predict physical, financial, and emotional strain.&#13;
Main Findings: We identified 4 unique activity profiles based on a set of 36 daily caregiving activities performed. Activity profiles varied significantly across the three analyzed strain dimensions.&#13;
Conclusion: Activity profiles present opportunities to better understand how caregiving tasks are related to specific types of caregiving strain.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165081</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Your home is not a school: The limits of homeschooling as a political practice</title>
<link>https://hdl.handle.net/1721.1/165080</link>
<description>Your home is not a school: The limits of homeschooling as a political practice
Pavel, Sonia Maria; Cynamon, Jeremy Kingston
Homeschooling is on the rise. It appeals to very different perspectives and ideologies that tend not to have common ground, from classical conservative to radical progressive. But the justifications for the practice are weak. In this paper, we build a case against the “home school” as a political practice using the existing commitments of liberal, conservative, and democratic theories of education. Whether education should aim at the cultivation of children's autonomy, their formation as members of cultural communities, or their training as democratic citizens, there are reasons to doubt that the practice of homeschooling can fulfill our educational goals. As such, we argue that liberals, conservatives, and democrats each have their own motivations to oppose homeschooling as an institutional alternative to traditional schools. Through our critiques, we also advance a metatheoretical argument in favor of centering the aims of education in our philosophical and political debates.
</description>
<pubDate>Fri, 18 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165080</guid>
<dc:date>2025-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Qualitative Assessment of Terrestrial Care Settings to Inform Self-sufficient Spaceflight Medical Care</title>
<link>https://hdl.handle.net/1721.1/165079</link>
<description>Qualitative Assessment of Terrestrial Care Settings to Inform Self-sufficient Spaceflight Medical Care
Porter, Allison; Arquilla, Katya; Stankovic, Aleksandra
Introduction&#13;
Long communication latencies in exploration spaceflight will necessitate in situ resolution of medical problems. Integrating automation into the care paradigm can address challenges posed by resource gaps inherent to spaceflight operations. However, it is not clear which aspects of exploration care are best suited for automation integration.&#13;
Methods&#13;
To probe the potential role of automation in spaceflight medicine, we began by decomposing the human-automation system to first characterize the work domain(s) of the human tasks. Using the lens of point-of-care ultrasound, we leveraged existing analogous Earth medical domains, conducting in situ observations in a hospital emergency department to understand how clinicians process contextual information to provide urgent care using ultrasound, along with semistructured interviews with specialists to identify key procedural information components for automation.&#13;
Results&#13;
This investigation allowed us to characterize the dynamic system surrounding a task that does not exist in its intended—currently inaccessible—use case (ie, point-of-care ultrasound on Mars) to guide future human-automation systems development.&#13;
Conclusion&#13;
We conclude that specific aspects of the care environment that influence the result of a task or process (“mediating factors”) from candidate work domains call for distinct, targeted guidance for automation support and are valuable in providing system developers with tunable automation level and implementation guidelines within and/or between those work domains. Such evidence-based design practice is directly translatable to automation assistance for medical providers in resource-limited environments as well as to any situation where a person's sensory processing, perception, decision making, or response selection could be aided by automation to accomplish a task.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165079</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Solutions and Challenges for Addressing Misinformation</title>
<link>https://hdl.handle.net/1721.1/165078</link>
<description>Solutions and Challenges for Addressing Misinformation
Martel, Cameron; Rand, David G
Research on mitigating the effects of misinformation has contributed to the development of multiple feasible interventions designed to reduce belief in, and sharing of, falsehoods. The authors review these interventions and discuss challenges and open questions for future research. First, they provide an overview of content-neutral and content-based interventions. Next, they discuss two practical challenges to deploying and assessing these interventions in the field: scalability and pushback against content moderation efforts due to perceived political bias. Finally, they highlight several open theoretical questions and common pitfalls of research on misinformation. In particular, they argue for critical evaluation of how interventions may be effective across different types of misinformative content, different key subpopulations, and different media and environmental contexts.
</description>
<pubDate>Tue, 10 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165078</guid>
<dc:date>2025-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>Atomic Transactions</title>
<link>https://hdl.handle.net/1721.1/165077</link>
<description>Atomic Transactions
Lynch, Nancy; Merritt, Michael; Weihl, William; Fekete, Alan
</description>
<pubDate>Sat, 01 Jan 1994 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165077</guid>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recirculation through western boundary currents varies nonlinearly with the ocean basin's aspect ratio</title>
<link>https://hdl.handle.net/1721.1/165066</link>
<description>Recirculation through western boundary currents varies nonlinearly with the ocean basin's aspect ratio
Gianchandani, Kaushal
Recirculation gyres adjacent to western boundary currents (WBCs) in the ocean enhance the poleward transport of these currents. While it is well established that the WBC in a barotropic ocean strengthens with an increase in the basin's aspect ratio (the meridional-to-zonal extent ratio), how the intensity of the recirculation through the western boundary layer varies with this parameter remains unexplored. I address this using the non-dimensional form of the nonlinear, wind-driven Stommel–Munk model of westward intensification, which comprises three parameters—the aspect ratio (δ), the damping coefficient (ϵ), and the β-Rossby number (Rβ). Here, ϵ is set by the ratio of the Rayleigh friction coefficient (or eddy viscosity) to the product of the meridional gradient of the Coriolis frequency and the basin's zonal dimension, while Rβ is proportional to the wind stress amplitude and quantifies the strength of nonlinearity. In the weak-to-moderate nonlinearity limit (Rβ &lt;∼ ϵ), perturbation analysis reveals that the recirculation varies concavely with the aspect ratio, suggesting the existence of an optimal aspect ratio (δopt) for which the recirculation is maximum; for typical values of ϵ (10−3−10−2), δopt follows the power-law relation δopt=4.3ϵ. Numerical simulations further validate the existence of δopt. For large ϵ (&gt;5×10−3), the power law predicts δopt for the numerical solutions rather accurately, but it does not hold for smaller ϵ (2×10−3) due to the increased importance of nonlinear terms. Nevertheless, the nonlinear variation of the recirculation through the western boundary layer with aspect ratio is observed for all ϵ values and may contribute to the heterogeneous increase in the WBC's transport across different ocean basins in a warming climate.
</description>
<pubDate>Tue, 17 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165066</guid>
<dc:date>2024-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Streamlining Physics Problem Generation to Support Physics Teachers in Using Generative Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/165065</link>
<description>Streamlining Physics Problem Generation to Support Physics Teachers in Using Generative Artificial Intelligence
El-Adawy, Shams; Liao, Isaac; Lad, Vedang; Abdelhafez, Mohamed; Dourmashkin, Peter
The rapid advancement of large language models (LLMs) presents a unique opportunity for educators to find ways to include artificial intelligence (AI) in physics course design. By critically engaging with LLMs to help with the task of generating problems, physics teachers can not only model a potentially effective way to use LLMs for other teachers, but also showcase to students ways to productively engage with LLMs. This article presents a workflow with two different starting points to generate physics problems using ChatGPT 3.5. The first initialization involves interacting with ChatGPT in a conversational manner, guiding iterative problem creation by breaking larger tasks into smaller ones. The second initialization harnesses ChatGPT’s generative abilities, aligning problem generation with established problem styles by instructing the model to emulate contexts from question banks. We discuss the implications of this workflow for other physics instructors exploring productive ways to incorporate AI into their own course design.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165065</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion optical design of the magnetic proton recoil neutron spectrometer for the SPARC tokamak</title>
<link>https://hdl.handle.net/1721.1/165064</link>
<description>Ion optical design of the magnetic proton recoil neutron spectrometer for the SPARC tokamak
Mackie, S; Wink, CW; Dalla Rosa, M; Berg, GPA; Ball, JL; Wang, X; Carmichael, J; Tinguely, RA; Rigamonti, D; Tardocchi, M; Raj, P; Frenje, J; Rice, J
A magnetic proton recoil (MPR) neutron spectrometer is being designed for SPARC, a high magnetic field (BT = 12 T), compact (R0 = 1.85 m, a = 0.57 m) tokamak currently under construction in Devens, MA, USA. MPR neutron spectrometers are versatile tools for making high fidelity ab initio calibrated measurements of fusion neutron flux spectra and have been used to infer fusion power, ion temperature, fuel ion ratio, and suprathermal fuel populations at several high performance fusion experiments. The performance of an MPR neutron spectrometer is in large part determined by the design of the magnetic field, which disperses and focuses recoil protons. This article details the ion optical design of a high-resolution MPR neutron spectrometer, including the amelioration of image aberrations due to nonlinear effects. An optimized design is presented that achieves ion optical energy resolution δE/E &lt; 1% and focal plane properties that enable straightforward integration with the hodoscope detector array.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165064</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Performance predictions of the SPARC x-ray crystal spectrometers for ion temperature and toroidal rotation measurements</title>
<link>https://hdl.handle.net/1721.1/165063</link>
<description>Performance predictions of the SPARC x-ray crystal spectrometers for ion temperature and toroidal rotation measurements
Perks, C; Vezinet, D; Rice, JE; Reinke, ML
SPARC will be outfitted with three systems of x-ray crystal spectrometer arrays. Two of these are designed using cylindrically bent crystals to achieve high spectral resolution for ion temperature and toroidal velocity measurements via imaging He-like Kr and Ne-like Xe. The third acts as a spectral survey system to monitor Ne-like W and nearby H- and He-like emission from Cr, Fe, Co, Ni, and Cu. Line radiation intensities are calculated using the Flexible Atomic Code for atomic data and ColRadPy for collisional-radiative modeling, then convolved with a Voigt line shape. Free–free, free-bound, and two-photon continuum radiation is also included. The ToFu code is used to perform volume-of-sight integration to produce synthetic detector images. We also present cross-validation performed using the XICSRT Monte Carlo ray-tracing code. Ion temperature and toroidal velocity profiles are reconstructed using ToFu via tomographic inversion.
</description>
<pubDate>Tue, 27 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165063</guid>
<dc:date>2024-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Edge scanning reflectometry for density profile measurement on the SPARC tokamak</title>
<link>https://hdl.handle.net/1721.1/165062</link>
<description>Edge scanning reflectometry for density profile measurement on the SPARC tokamak
Lin, Y; Nikolaeva, V; Hachmeister, D; Kowalski, E; Reinke, ML
Edge scanning reflectometry (ESRL) on the SPARC tokamak aims to measure the electron density profile from the far scrape-off layer to the top of the typical H-mode pedestal and provide real-time data for plasma control. ESRL uses a standard frequency-modulated continuous wave technique from 18 to 90 GHz. By implementing both the O-mode and left-hand-cutoff X-mode, it covers densities from ∼4 × 1018 to ∼4 × 1020 m−3 at B0 ∼12 T. A voltage-controlled oscillator acts as the frequency sweep source. Phase-locked dielectric resonator oscillators and bandpass filters generate base signals ∼9–15 GHz. The signals are then frequency multiplied and amplified to reach the K (18–26 GHz), Ka (26–40 GHz), U (40–60 GHz), and E (60–90 GHz) bands. Multi-band signals are combined via the quasi-optical technique. ESRL plans to use oversized waveguides (∼20 m one-way) and a bi-static arrangement to minimize signal losses and distortions while allowing system flexibility. A COMSOL Multiphysics RF model in 2D has been set up to simulate the reflectometry process and help decide the layout of the horn antennas. Engineering analyses of the key parts of the system have been carried out in support of its preliminary design.
</description>
<pubDate>Wed, 21 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165062</guid>
<dc:date>2024-08-21T00:00:00Z</dc:date>
</item>
<item>
<title>Neutronics simulations for the design of neutron flux monitors in SPARC</title>
<link>https://hdl.handle.net/1721.1/165061</link>
<description>Neutronics simulations for the design of neutron flux monitors in SPARC
Wang, X; Gocht, R; Ball, J; Mackie, S; Panontin, E; Tinguely, RA; Raj, P; Holmes, I; Saltos, AA; Johnson, A; Grieve, A
This paper presents the development and application of high-fidelity neutronic models of the SPARC tokamak for the design of neutron flux monitors (NFMs) for use during plasma operations. NFMs measure the neutron flux in the tokamak hall, which is related to fusion power via calibration. We have explored Boron-10 gamma-compensated ionization chambers (ICs) and parallel-plate Uranium-238 fission chambers (FCs). We plan for all NFMs to be located by the wall in the tokamak hall and directly exposed to neutrons streaming through a shielded opening in a midplane port. Our simulations primarily use a constructive solid geometry OpenMC model built from the true SPARC geometry. The OpenMC model is benchmarked against a detailed CAD-based MCNP6 model. The B10 ICs are equipped with high-density polyethylene (HDPE) sleeves, borated HDPE housings, and borated aluminum covers to shield out scattered neutrons, optimize detector response levels, and make calibration robust against changes in the tokamak hall. The B10 neutron absorption branching ratio may cause the detectors’ responses to be non-linear in the flux of neutrons &gt;200 keV. However, our simulations reveal that, in the SPARC environment and with the proposed housings and sleeves, &gt;99% of the detector responses are induced by &lt;100 keV neutrons. U238’s insensitivity to slow neutrons makes this FC a promising candidate for direct fusion neutron measurements. Along with a borated HDPE sleeve, about 60% of the FCs’ responses are induced by direct neutrons.
</description>
<pubDate>Fri, 30 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165061</guid>
<dc:date>2024-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Image plate multi-scan response to fusion protons in the range of 1–14 MeV</title>
<link>https://hdl.handle.net/1721.1/165060</link>
<description>Image plate multi-scan response to fusion protons in the range of 1–14 MeV
Vanderloo, N; Cufari, M; Russell, L; Johnson, TM; Vargas, J; Foo, BC; Buschmann, BI; Dannhoff, SG; DeVault, A; Evans, TE; Kunimune, JH; Lawrence, Y; Pearcy, JA; Reichelt, BL; Wink, CW; Gatu Johnson, M; Petrasso, RD; Frenje, JA; Li, CK
Image plates (IPs) are a quickly recoverable and reusable radiation detector often used to measure proton and x-ray fluence in laser-driven experiments. Recently, IPs have been used in a proton radiography detector stack on the OMEGA laser, a diagnostic historically implemented with CR-39, or radiochromic film. The IPs used in this and other diagnostics detect charged particles, neutrons, and x-rays indiscriminately. IPs detect radiation using a photo-stimulated luminescence (PSL) material, often phosphor, in which electrons are excited to metastable states by ionizing radiation. Protons at MeV energies deposit energy deeper into the IP compared with x rays below ∼20 keV due to the Bragg peak present for protons. This property is exploited to discriminate between radiation types. Doses of mono-energetic protons between 1.7 and 14 MeV are applied to IPs using the MIT linear electrostatic ion accelerator. This paper presents the results from consecutive scans of IPs irradiated with different proton energies. The PSL ratios between subsequent scans are shown to depend on proton energy, with higher energy protons having lower PSL ratios for each scan. This finding is separate from the known energy dependence in the absolute sensitivity of IPs. The results can be compared to complimentary work on x rays, showing a difference between protons and x rays, forging a path to discriminate between proton and x-ray fluence in mixed radiation environments.
</description>
<pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165060</guid>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>A compact and portable gamma-ray spectrometer (GRASP) for inertial confinement fusion and basic science experiments</title>
<link>https://hdl.handle.net/1721.1/165059</link>
<description>A compact and portable gamma-ray spectrometer (GRASP) for inertial confinement fusion and basic science experiments
Dannhoff, SG; Wink, CW; Mackie, S; Berg, GPA; Frenje, JA
A compact and portable gamma-ray spectrometer has been designed to diagnose different components of the inertial confinement fusion-relevant γ-ray spectrum with energies between ∼3.7–17.9 MeV. The system is designed to be as compact as possible for convenient transportation and fielding in diagnostic ports on the OMEGA laser, the National Ignition Facility, and other photon-source facilities. The system consists of a conversion foil for Compton scattering in front of four magnetic spectrometer “arms,” each covering a different energy range and constructed out of cylindrical permanent magnet Halbach arrays. Monte Carlo simulations have been used to optimize and assess the performance of the conversion foil, and COSY INFINITY ion-optical simulations have been used to optimize the spectrometer magnets. The performance of the design is assessed for a simulated direct-drive γ-ray spectrum. Spanning its total γ-ray energy bandwidth and using a 1.7 mm thick boron conversion foil, the system’s total energy resolution and efficiency are ∼15.8%–4.5% and 5.4 × 10−7–3.7 × 10−7 e−/γ, respectively, with room for improvement. Spectral γ-ray measurements will provide guidance to the inertial confinement fusion program toward achieving high-energy gain relevant to inertial fusion energy and enable new measurement capabilities for basic discovery science.
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165059</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of the image plate multi-scan response to mono-energetic x-rays</title>
<link>https://hdl.handle.net/1721.1/165058</link>
<description>Characterization of the image plate multi-scan response to mono-energetic x-rays
Cufari, M; Vanderloo, N; Buschmann, BI; DeVault, A; Foo, BC; Vargas, J; Dannhoff, SG; Evans, TE; Johnson, TM; Kunimune, J; Lawrence, Y; Pearcy, JA; Reichelt, BL; Russell, L; Wink, CW; Gatu Johnson, M; Petrasso, RD; Frenje, JA
Image plates (IPs), or phosphor storage screens, are a technology employed frequently in inertial confinement fusion (ICF) and high energy density plasma (HEDP) diagnostics because of their sensitivity to many types of radiation, including x rays, protons, alphas, beta particles, and neutrons. Prior studies characterizing IPs are predicated on the signal level remaining below the scanner saturation threshold. Since the scanning process removes some signal from the IP via photostimulated luminescence, repeatedly scanning an IP can bring the signal level below the scanner saturation threshold. This process, in turn, raises concerns about the signal response of IPs after an arbitrary number of scans and whether such a process yields, for example, a constant ratio of signal between the nth and n + 1st scan. Here, the sensitivity of IPs is investigated when scanned multiple times. It is demonstrated that the ratio of signal decay is not constant with the number of scans and that the signal decay depends on the x-ray energy. As such, repeatedly scanning an IP with a mixture of signal types (e.g., x ray, neutron, and protons) enables ICF and HEDP diagnostics employing IPs to better isolate a particular signal type.
</description>
<pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165058</guid>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Microbially-enhanced dissolution of calcite in sinking marine particles</title>
<link>https://hdl.handle.net/1721.1/165057</link>
<description>Microbially-enhanced dissolution of calcite in sinking marine particles
Borer, Benedict; Subhas, Adam V.; Hayden, Matthew G.; Woosley, Ryan J.; Babbin, Andrew R.
Evidence for the shallow cycling of calcium carbonate in the global ocean is mounting, but the mechanisms driving the dissolution of thermodynamically stable polymorphs, like aragonite and calcite, in the surface ocean remain unconstrained. Here, we quantify how microbial metabolism creates acidic microenvironments in marine particles that enhance the local dissolution of calcite despite supersaturated conditions in bulk waters. A temporal decoupling of particle deoxygenation and acidification suggests that respiration-derived carbon dioxide is not the sole driver of the observed undersaturation. Rapid dissolution occurs in particles exhibiting bacterial growth, with rates exceeding abiotic dissolution at the same bulk saturation by more than an order of magnitude. We observe the highest particle-associated dissolution rates at intermediate settling velocities, indicating that a trade-off between elevated mass transfer due to settling and bacterial respiration governs the ensuing dissolution rates. Translation of our experiments to the water column suggests that microbially driven undersaturation in marine particles may dissolve sufficient calcite in the mesopelagic ocean to extend particle transit times by eliminating this vital ballast mineral, reducing the efficiency of organic carbon sequestration.
</description>
<pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165057</guid>
<dc:date>2026-03-09T00:00:00Z</dc:date>
</item>
<item>
<title>Process cost analysis of performance challenges and their mitigations in sodium-ion battery cathode materials</title>
<link>https://hdl.handle.net/1721.1/165056</link>
<description>Process cost analysis of performance challenges and their mitigations in sodium-ion battery cathode materials
Munjal, Mrigi; Prein, Thorben; Ramadan, Mahmoud M.; Smith, Hugh B.; Venugopal, Vineeth; Rupp, Jennifer L.M.; Abate, Iwnetim I.; Olivetti, Elsa A.; Huang, Kevin J.
The success of sodium-ion batteries (SIBs) hinges on mitigating underperformance in ways that are cost effective, manufacturable, and scalable. This work investigates interfacial, morphological, and bulk interventions to enhance the performance of layered metal oxide cathode active materials (CAMs) for SIBs. We mapped the full space of literature-reported SIB CAM challenges and their mitigations. We then estimated the manufacturing costs for a diverse and representative set of mitigation approaches. Adding sacrificial salts can be cost effective, given low materials costs and minimal process changes. By contrast, many methods are reported to tune CAM morphology. Several are likely challenging at scale due to process throughput and yield limitations. Finally, bulk modifications can mitigate the moisture sensitivity of some CAMs, a likely less costly route than expanding stringent atmosphere controls during manufacturing. We end by discussing the limits and promise of process cost analysis, given the current state of battery reporting in the literature.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165056</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a diamond-based in-vessel soft x-ray detector for the SPARC tokamak</title>
<link>https://hdl.handle.net/1721.1/165042</link>
<description>Design of a diamond-based in-vessel soft x-ray detector for the SPARC tokamak
Normile, S; Vezinet, D; Perks, C; Bombarda, F; Verona-Rinati, G; Rice, JE; Verona, C; Raso, AM; Angelone, M
The in-vessel silicon diode arrays that are used for soft x-ray detection in many tokamaks are sensitive to neutron damage, making them unsuitable for burning plasma devices such as SPARC. In such a device, the silicon diodes would need to be placed far from the plasma—limiting their field of view—or an alternative detector could be used. Here, we present the design of a camera containing an array of chemical vapor deposition single-crystal diamonds, which will be placed in the upper and lower port plugs of the SPARC tokamak with a large enough view of the poloidal cross section to enable tomographic inversion. The camera design presented here is optimized to provide a wide field of view of the poloidal cross section. Simulated plasma conditions are used to estimate the x-ray signal that this detector array will receive and to fine-tune the camera placement within the tokamak.
</description>
<pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165042</guid>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Development of the prototype for the SPARC hard X-ray monitor</title>
<link>https://hdl.handle.net/1721.1/165041</link>
<description>Development of the prototype for the SPARC hard X-ray monitor
Panontin, E; Tinguely, RA; Hartwig, ZS; Saltos, AA; Vezinet, D; Rice, J
The SPARC tokamak will be equipped with a hard X-ray (HXR) monitor system capable of measuring the bremsstrahlung emission from runaway electrons with photon energies in excess of about 100 keV. This diagnostic will detect the formation of runaway electron beams during plasma start-up and inform the plasma control system to terminate the discharge early to protect the machine. In this work, we present a 0D estimate of the HXR emission in SPARC during plasma start-up. Then we discuss the characterization of a prototype of the HXR monitor. The detector houses a 1 × 1-in.2 LaBr3 inorganic scintillator coupled to a photomultiplier tube and has been tested with γ-ray sources to determine its dynamic range. Finally, two possible modes of operation for spectroscopic and current mode measurements on SPARC are proposed.
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165041</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Perspectives on pilot-wave hydrodynamics</title>
<link>https://hdl.handle.net/1721.1/165040</link>
<description>Perspectives on pilot-wave hydrodynamics
Bush, John WM; Frumkin, Valeri; Sáenz, Pedro J
We present a number of fresh perspectives on pilot-wave hydrodynamics, the field initiated in 2005 by Couder and Fort's discovery that millimetric droplets self-propelling along the surface of a vibrating bath can capture certain features of quantum systems. A recurring theme will be that pilot-wave hydrodynamics furnishes a classical framework for reproducing many quantum phenomena and allows one to rationalize such phenomena mechanistically, from a local realist perspective, obviating the need to appeal to quantum nonlocality. The distinction is drawn between hydrodynamic pilot-wave theory and its quantum counterparts, Bohmian mechanics, the Bohm–Vigier stochastic pilot-wave theory, and de Broglie's theory of the double-solution. Each of these quantum predecessors provides a valuable touchstone as we take the physical picture engendered in the walking droplets and extend it into the quantum realm via theoretical modeling. Emphasis is given to recent developments in the field, both experimental and conceptual, and to forecasting potentially fruitful new directions.
</description>
<pubDate>Mon, 15 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165040</guid>
<dc:date>2024-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>Automated transient grating spectroscopy mapping and signal control for large samples</title>
<link>https://hdl.handle.net/1721.1/165039</link>
<description>Automated transient grating spectroscopy mapping and signal control for large samples
Weaver, Colin; Stapelberg, Myles; Short, Michael P; Wylie, Angus; Artalejo, Elena Botica
We present developments for the mapping of large areas using transient grating spectroscopy (TGS) that allow for smoother, larger, autonomous measurements of material samples. The addition of a precise linear stage in the direction parallel to laser sampling, coupled with signal-optimizing control, allows for hands-free, self-correcting measurements. In addition, the simplification of the sample holder design to a form small enough to mount directly to the linear stage provides a straightforward, low-cost solution for automated TGS applications. This capability is demonstrated by taking large uninterrupted maps of gradient wafers, and the results are validated on calibrated tungsten samples and control TGS samples from gradient wafers.
</description>
<pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165039</guid>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulating the duration of picoinjection controls the injected volume of individual droplets</title>
<link>https://hdl.handle.net/1721.1/165038</link>
<description>Manipulating the duration of picoinjection controls the injected volume of individual droplets
Thakur, R.; Weitz, D.
The ability to add reagents into droplets is required in many microfluidic workflows. Picoinjection can address this need; however, it is unable to control the injection volume for each individual droplet. Here, we present an improved picoinjection method that can inject controlled volumes into individual droplets. We achieve this by adjusting the injection duration for each picoinjection event. This improved picoinjection method can be used to create complex microfluidic workflows that are able to control the biochemical composition of individual droplets.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165038</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Multitask methods for predicting molecular properties from heterogeneous data</title>
<link>https://hdl.handle.net/1721.1/165037</link>
<description>Multitask methods for predicting molecular properties from heterogeneous data
Fisher, KE; Herbst, MF; Marzouk, YM
Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy with a reduction in data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange–correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures—including the full disparity between the different levels of fidelity—than existing kernel approaches based on Δ-learning, although we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
</description>
<pubDate>Wed, 03 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165037</guid>
<dc:date>2024-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Automated Flow Synthesis of Artificial Heme Enzymes for Enantiodivergent Biocatalysis</title>
<link>https://hdl.handle.net/1721.1/165023</link>
<description>Automated Flow Synthesis of Artificial Heme Enzymes for Enantiodivergent Biocatalysis
Fittolani, Giulio; Kutateladze, Dennis A; Loas, Andrei; Buchwald, Stephen L; Pentelute, Bradley L
The remarkable efficiency with which enzymes catalyze small-molecule reactions has driven their widespread application in organic chemistry. Here, we employ automated fast-flow solid-phase synthesis to access catalytically active full-length enzymes without restrictions on the number and structure of noncanonical amino acids incorporated. We demonstrate the total syntheses of iron-dependent Bacillus subtilis myoglobin (BsMb) and sperm whale myoglobin (SwMb). The synthetic enzymes displayed excellent enantioselectivity and yield in carbene transfer reactions. Absolute control over enantioselectivity in styrene cyclopropanation was achieved using synthetic L- and D-BsMb mutants, which delivered each enantiomer of the cyclopropane product with equal and opposite enantiomeric enrichment. BsMb mutants outfitted with noncanonical amino acids were used to facilitate detailed structure–activity relationship studies, revealing a previously unrecognized hydrogen-bonding interaction as the primary driver of enantioselectivity in styrene cyclopropanation. We anticipate that our approach will advance biocatalysis by providing reliable and rapid access to fully synthetic enzymes possessing noncanonical amino acids.
</description>
<pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165023</guid>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Ligand for Cu-Catalyzed Amination of Base-Sensitive (Hetero)aryl Chlorides</title>
<link>https://hdl.handle.net/1721.1/165022</link>
<description>Development of a Ligand for Cu-Catalyzed Amination of Base-Sensitive (Hetero)aryl Chlorides
Ai, Han-Jun; Mai, Binh Khanh; Liu, Cecilia; Liu, Peng; Buchwald, Stephen L
We report a new N1,N2-diarylbenzene-1,2-diamine ligand, L6, that supports a copper catalyst capable of coupling base-sensitive aryl chlorides and amines that were previously unsuccessful substrates for Cu-catalyzed C–N coupling. A detailed structure–activity relationship study, combined with density functional theory (DFT) calculations, was used to uncover two key structural features that contribute to the efficacy of the catalyst derived from L6. First, steric repulsion caused by a methyl substituent induces a conformational change that opens up additional space for ligand deprotonation and oxidative addition. Second, the trifluoromethyl groups create electrostatic interactions between the ligand and aryl chloride substrates that facilitate oxidative addition via through-space ligand–substrate interaction.
</description>
<pubDate>Mon, 13 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165022</guid>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Ligand Design Enables Cu-Catalyzed Etherification of Aryl Bromides Using Mild Bases</title>
<link>https://hdl.handle.net/1721.1/165021</link>
<description>Ligand Design Enables Cu-Catalyzed Etherification of Aryl Bromides Using Mild Bases
Strauss, Michael J; Greaves, Megan E; Kim, Seoung-Tae; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We report a Cu-catalyzed method for the efficient coupling of base-sensitive aryl bromides and alcohols utilizing a newly developed N1,N2-diarylbenzene-1,2-diamine ligand, L15. This ligand was developed to increase the Lewis acidity of the Cu center, thereby permitting the use of a substantially milder base (NaOTMS or NaOPh) relative to those required in a previous iteration of this methodology (NaOMe or NaOt-Bu). Under the optimized reaction conditions, several classes of previously incompatible aryl bromides were efficiently transformed, including base-sensitive heterocycles and those containing acidic functional groups. Kinetic analyses support that C–O coupling proceeds via a mechanism involving binding/deprotonation of alcohol nucleophiles, that the pKa of the base influences the overall rate law, and that substoichiometric quantities of strong base can be utilized to accelerate ligand activation and thereby increase the overall rate of the transformation.
</description>
<pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165021</guid>
<dc:date>2026-01-05T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Resolved Line Shapes of Single Quantum Emitters via Machine Learned Photon Correlations</title>
<link>https://hdl.handle.net/1721.1/165020</link>
<description>Time-Resolved Line Shapes of Single Quantum Emitters via Machine Learned Photon Correlations
Proppe, Andrew H; Lee, Kin Long Kelvin; Kaplan, Alexander EK; Ginterseder, Matthias; Krajewska, Chantalle J; Bawendi, Moungi G
Solid-state single-photon emitters (SPEs) are quantum light sources that combine atomlike optical properties with solid-state integration and fabrication capabilities. SPEs are hindered by spectral diffusion, where the emitter's surrounding environment induces random energy fluctuations. Timescales of spectral diffusion span nanoseconds to minutes and require probing single emitters to remove ensemble averaging. Photon correlation Fourier spectroscopy (PCFS) can be used to measure time-resolved single emitter line shapes, but is hindered by poor signal-to-noise ratio in the measured correlation functions at early times due to low photon counts. Here, we develop a framework to simulate PCFS correlation functions directly from diffusing spectra that match well with experimental data for single colloidal quantum dots. We use these simulated datasets to train a deep ensemble autoencoder machine learning model that outputs accurate, noiseless, and probabilistic reconstructions of the noisy correlations. Using this model, we obtain reconstructed time-resolved single dot emission line shapes at timescales as low as 10 ns, which are otherwise completely obscured by noise. This enables PCFS to extract optical coherence times on the same timescales as Hong-Ou-Mandel two-photon interference, but with the advantage of providing spectral information in addition to estimates of photon indistinguishability. Our machine learning approach is broadly applicable to different photon correlation spectroscopy techniques and SPE systems, offering an enhanced tool for probing single emitter line shapes on previously inaccessible timescales.
</description>
<pubDate>Fri, 04 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165020</guid>
<dc:date>2023-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering temperature-dependent exciton-polariton relaxation mechanisms in hybrid organic-inorganic perovskites</title>
<link>https://hdl.handle.net/1721.1/165019</link>
<description>Uncovering temperature-dependent exciton-polariton relaxation mechanisms in hybrid organic-inorganic perovskites
Laitz, Madeleine; Kaplan, Alexander EK; Deschamps, Jude; Barotov, Ulugbek; Proppe, Andrew H; García-Benito, Inés; Osherov, Anna; Grancini, Giulia; deQuilettes, Dane W; Nelson, Keith A; Bawendi, Moungi G; Bulović, Vladimir
Hybrid perovskites have emerged as a promising material candidate for exciton-polariton (polariton) optoelectronics. Thermodynamically, low-threshold Bose-Einstein condensation requires efficient scattering to the polariton energy dispersion minimum, and many applications demand precise control of polariton interactions. Thus far, the primary mechanisms by which polaritons relax in perovskites remain unclear. In this work, we perform temperature-dependent measurements of polaritons in low-dimensional perovskite wedged microcavities achieving a Rabi splitting of ℏΩRabi = 260 ± 5 meV. We change the Hopfield coefficients by moving the optical excitation along the cavity wedge and thus tune the strength of the primary polariton relaxation mechanisms in this material. We observe the polariton bottleneck regime and show that it can be overcome by harnessing the interplay between the different excitonic species whose corresponding dynamics are modified by strong coupling. This work provides an understanding of polariton relaxation in perovskites benefiting from efficient, material-specific relaxation pathways and intracavity pumping schemes from thermally brightened excitonic species.
Springer Science and Business Media LLC
</description>
<pubDate>Thu, 27 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165019</guid>
<dc:date>2023-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Self-Directed, Home-Like XR System for Sustained Intangible Cultural Heritage Practice: An Ikebana Case Study</title>
<link>https://hdl.handle.net/1721.1/165017</link>
<description>A Self-Directed, Home-Like XR System for Sustained Intangible Cultural Heritage Practice: An Ikebana Case Study
Wu, Yu; Li, Manxueying; Mai, Gelei
Sustained Intangible Cultural Heritage (ICH) practices for novices depend more on curiosity and creative agency than on procedural training. Yet, most extended reality (XR) systems for ICH emphasize guided instruction or exhibitions, limiting self-direction and continuity beyond the device. Using Ikebana as a case study, we present a self-directed, home-like virtual reality (VR) experience built with 3D Gaussian Splatting (3D GS) and natural hand tracking, complemented by an augmented reality (AR) revisiting feature that exports creations for real-world placement and sharing. In a study with 11 novices, pre-post questionnaires showed gains in interest, likelihood to continue offline, and understanding (p ≤.01). Interviews indicated that domestic realism reduced intimidation, natural gestures supported immersion, and AR revisiting extended reflection and engagement. We contribute (1) a home-like, self-directed XR design for ICH practice and (2) evidence that approachability, autonomy, and cross-reality continuity enhance motivation beyond the virtual world.
VRCAI ’25, Macau, China
</description>
<pubDate>Thu, 05 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165017</guid>
<dc:date>2026-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings</title>
<link>https://hdl.handle.net/1721.1/165016</link>
<description>CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings
Fishberg, Andrew; Gaetz, Marisa; Nisser, Martin; Cafferty, Carole; Perlman, Lee; Soicher, Raechel N.; Long, Joshua
Incarcerated students enrolled in education programs in prisons and jails experience a range of benefits, from reduced recidivism to improved psychosocial well-being. With respect to computer science education, little is still known about how courses impact incarcerated students' experiences, though recent work has explored fears and confidence of incarcerated students enrolled in computer science courses. Our work investigates incarcerated students' changes in self-efficacy over multiple iterations of four different classes. Our findings showed that all subscales of computer programming self-efficacy (algorithm, control, cooperation, debugging, and logic), but not generalized self-efficacy, were statistically significantly increased at the end of the courses relative to the beginning (p &lt; 0.001, n = 36). A similar pattern of results across the full sample (n = 188) adds additional support for the veracity of the effects found in the subset of paired data. Additionally, we share students' qualitative data to add nuance to our findings and emphasize the importance of these educational experiences for incarcerated students' personal and professional development.
</description>
<pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165016</guid>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs</title>
<link>https://hdl.handle.net/1721.1/165015</link>
<description>HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs
Dargan, Hope; Hartz, Adam; Miller, Robert
Programming students often struggle to find and fix performance bugs in their code. To provide students additional performance debugging support, as well as expose them to profiling tools, we developed Hypothesis Profiler (HyProf). HyProf automatically profiles a slow student submission and produces a profile visualization suitable for learners. In addition to showing individual function and line times, HyProf shows details about the call graph, lines that made recursive calls or did not execute, and hypotheses about possible causes of slow performance, formulated by comparing the slow profile against fast submissions from other students. We deployed HyProf in a 400-student Python course and evaluated it through web logs, office hour observations, and surveys, which showed that 75% of respondents successfully used HyProf to find or fix a performance issue and 85% would recommend it to others.
SIGCSE TS 2026, St. Louis, MO, USA
</description>
<pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165015</guid>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>KANELÉ: Kolmogorov–Arnold Networks for Efficient LUT-based Evaluation</title>
<link>https://hdl.handle.net/1721.1/165014</link>
<description>KANELÉ: Kolmogorov–Arnold Networks for Efficient LUT-based Evaluation
Hoang, Duc; Gupta, Aarush; Harris, Philip C
Low-latency, resource-efficient neural network inference on FPGAs is essential for applications demanding real-time capability and low power. Lookup table (LUT)-based neural networks are a common solution, combining strong representational power with efficient FPGA implementation. In this work, we introduce KANELÉ, a framework that exploits the unique properties of Kolmogorov–Arnold Networks (KANs) for FPGA deployment. Unlike traditional multilayer perceptrons (MLPs), KANs employ learnable one-dimensional splines with fixed domains as edge activations, a structure naturally suited to discretization and efficient LUT mapping. We present the first systematic design flow for implementing KANs on FPGAs, co-optimizing training with quantization and pruning to enable compact, high-throughput, and low-latency KAN architectures. Our results demonstrate up to a 2700x speedup and orders of magnitude resource savings compared to prior KAN-on-FPGA approaches. Moreover, KANELÉ matches or surpasses other LUT-based architectures on widely used benchmarks, particularly for tasks involving symbolic or physical formulas, while balancing resource usage across FPGA hardware. Finally, we showcase the versatility of the framework by extending it to real-time, power-efficient control systems.
FPGA ’26, Seaside, CA, USA
</description>
<pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165014</guid>
<dc:date>2026-02-21T00:00:00Z</dc:date>
</item>
<item>
<title>AI Séance: Recounts from designing artificial intelligence for transcendence, interpretive lenses and chance</title>
<link>https://hdl.handle.net/1721.1/165013</link>
<description>AI Séance: Recounts from designing artificial intelligence for transcendence, interpretive lenses and chance
Schroeder, Hope; Smith, Amy; Epstein, Ziv
As AI becomes a prism through which we reflect, see, and make sense of the world, the way we create creative, transcendent experiences around AI can shape our relationship to it. Drawing inspiration from the ritual structures of Spiritualist séances and creative art-making séances of Hilma af Klint, we present reflections from a series of participatory experiments we called AI Séances. These gatherings brought together artists, technologists, and spiritual practitioners to engage with generative models in contexts shaped by ritual, randomness, and collaborative interpretation. We found that creative production with AI can yield transcendent user experiences (TUX), different communities bring distinct interpretive lenses to AI outputs, and increased technical control can paradoxically diminish serendipity and transcendence. Through our experiences, we suggest that reclaiming interpretive agency over AI outputs in the creative and spiritual context, rather than treating models as machines that produce answers, opens up new avenues for critical and creative engagement with these technologies and is critical to preserving our humanity. The AI Séance offers a model for human-centered interaction with generative systems where magic lies not in the machine’s capabilities, but in our collective ability to create meaning.
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165013</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>NeuSE: Neural SE(3)-equivariant embedding for long-term object-based simultaneous localization and mapping</title>
<link>https://hdl.handle.net/1721.1/165010</link>
<description>NeuSE: Neural SE(3)-equivariant embedding for long-term object-based simultaneous localization and mapping
Fu, Jiahui; Du, Yilun; Singh, Kurran; Tenenbaum, Joshua B; Leonard, John J
We present NeuSE, a novel Neural SE(3)-Equivariant Embedding for objects, and illustrate how it supports object-based Simultaneous Localization and Mapping (SLAM) for consistent spatial understanding with long-term scene changes. NeuSE is a set of latent object embeddings created from partial object observations. It serves as a compact point cloud surrogate for complete object models, encoding the full shape, scale, and transform information about an object. In addition, the inferred latent code is both SE(3) and scale equivariant, enabling strong generalization to objects of both unseen sizes and different SE(3) poses. This makes NeuSE particularly effective in real-world scenarios where objects may vary in size or spatial configuration. With NeuSE, relative frame transforms can be directly derived from inferred latent codes. Our proposed SLAM paradigm, using NeuSE for object shape, size, and pose characterization, can operate independently or in conjunction with typical SLAM systems. It directly infers SE(3) camera pose constraints that are compatible with general SLAM pose graph optimization, while maintaining a lightweight, object-centric map that adapts to real-world changes. Our evaluation is conducted on synthetic and real-world sequences with changes in both controlled and uncontrolled settings, featuring multi-category objects of various shapes and sizes. Our approach demonstrates improved localization capability and change-aware mapping consistency when working either independently or as a complement to common SLAM pipelines.
</description>
<pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165010</guid>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>TravelAgent: Generative agents in the built environment</title>
<link>https://hdl.handle.net/1721.1/165008</link>
<description>TravelAgent: Generative agents in the built environment
Noyman, Ariel; Hu, Kai; Larson, Kent
Understanding human behavior in the built environment is critical for designing highly-functional, human-centered urban spaces. Traditional approaches, such as manual observations, surveys, and simple simulations, often struggle to capture the complexity and nuance of real-world human behavior and experience. Here we introduce TravelAgent, a novel agentic simulation platform that models pedestrian navigation, activity, and human-like decision-making in the built environment. TravelAgent is proposed to help design teams and decision-makers understand how different users might experience diverse built environments under varying environmental conditions. TravelAgent integrates Generative Agents, multi-modal sensory inputs, and virtual environments, enabling agents to perceive, navigate, and interact with their surroundings, with tasks ranging from goal-oriented navigation to free exploration. We share analysis from 200 simulations with 3364 decision points and task completion rate of ∼80%, across diverse spatial layouts and agent archetypes. We present spatial, linguistic, and sentiment analysis, and show how agents react and experience their surroundings. Finally, we suggest TravelAgent as a new paradigm for designing, simulating, and understanding human experiences in urban environments.
</description>
<pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165008</guid>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cleaning a dark matter detector: A case of ontological and normative elusiveness</title>
<link>https://hdl.handle.net/1721.1/165007</link>
<description>Cleaning a dark matter detector: A case of ontological and normative elusiveness
de Swart, Jaco; Mol, Annemarie
Laboratory sciences crucially depend on the cleanliness of experiments. But what is clean? In this article, we show that the salience of the valuation clean emerges through its relation to a particular ontological repertoire. Our case is the XENONnT experiment in the Gran Sasso Mountains of Italy, designed to detect dark matter in the form of hypothetical WIMPs (Weakly Interacting Massive Particles). In this experiment, dirt presents a significant disruption, as contaminations can mimic the signals of WIMPs, and electronegative molecules risk erasing such signals. The idiosyncratic cleanliness required makes the practice of cleaning the XENONnT detector exceedingly difficult. So far, the ontological question ‘do WIMPs exist?’ remains open, which means that the normative question ‘is the detector clean enough?’ cannot be answered either. In addition, more cleaning will make the detector sensitive to a background of unremovable neutrinos—hence irredeemably dirty. With the normative goal of a ‘clean detector’ out of reach, the ontological question ‘do WIMPs exist?’ is bound to remain open as well. Alternative experiments therefore hunt for different hypothetical dark matter candidates, with different equipment, requiring different kinds of cleanliness. At the same time, the XENONnT experiment must navigate tensions between its own cleanliness goals and rules meant to ensure the environmental cleanliness of the Gran Sasso National Park. Cleaning turns out to be dirty. This leads us to ask: Which goods deserve to be cherished, and, intertwined with that, which realities deserve to be cared for?
</description>
<pubDate>Sat, 30 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165007</guid>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning</title>
<link>https://hdl.handle.net/1721.1/165006</link>
<description>Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning
Ryou, Gilhyun; Wang, Geoffrey; Karaman, Sertac
High-speed online trajectory planning for UAVs poses a significant challenge due to the need for precise modeling of complex dynamics while also being constrained by computational limitations. This paper presents a multi-fidelity reinforcement learning method (MFRL) that aims to effectively create a realistic dynamics model and simultaneously train a planning policy that can be readily deployed in real-time applications. The proposed method involves the co-training of a planning policy and a reward estimator; the latter predicts the performance of the policy’s output and is trained efficiently through multi-fidelity Bayesian optimization. This optimization approach models the correlation between different fidelity levels, thereby constructing a high-fidelity model based on a low-fidelity foundation, which enables the accurate development of the reward model with limited high-fidelity experiments. The framework is further extended to include real-world flight experiments in reinforcement learning training, allowing the reward model to precisely reflect real-world constraints and broadening the policy’s applicability to real-world scenarios. We present rigorous evaluations by training and testing the planning policy in both simulated and real-world environments. The resulting trained policy not only generates faster and more reliable trajectories compared to the baseline snap minimization method, but it also achieves trajectory updates in 2 ms on average, while the baseline method takes several minutes.
</description>
<pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165006</guid>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Distribution Network Tariffs in the US with an Application to Increased Electric Vehicle Adoption</title>
<link>https://hdl.handle.net/1721.1/165005</link>
<description>Designing Distribution Network Tariffs in the US with an Application to Increased Electric Vehicle Adoption
Turk, Graham; Schittekatte, Tim; Duenas-Martinez, Pablo; Joskow, Paul L; Schmalensee, Richard
Time-of-use (TOU) tariffs that vary the cost per kWh to reflect wide variations in generation and wholesale market costs give incentives to shift all electric vehicle (EV) charging to low-price periods. As EV penetration increases, such tariffs would substantially raise the local kW demand in those low-priced periods, which eventually would lead to increasing network expansion costs. A straightforward way to mitigate this problem is to separate energy charges from network charges, with appropriate rate designs for each. This paper uses a realistic case study to investigate the implications of combining TOU energy charges with various network tariff designs in the face of increased EV penetration. Our results provide support for the adoption in the US of ex-ante subscribed capacity tariffs (subscription charges), which give consumers incentives to reduce their peak kW demands. Reducing costs of EV ownership (a priority for many US states) need not be pursued at the expense of broader affordability goals.

JEL classification: L51, L94, L97, Q41, D40
</description>
<pubDate>Sat, 01 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165005</guid>
<dc:date>2025-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Geopolitical ecologies of cloud capitalism: Territorial restructuring and the making of national computing power in the U.S. and China</title>
<link>https://hdl.handle.net/1721.1/165004</link>
<description>Geopolitical ecologies of cloud capitalism: Territorial restructuring and the making of national computing power in the U.S. and China
Kollar, Justin; Stokols, Andrew
As computing power becomes central to geopolitical rivalry, cloud infrastructure is increasingly framed as critical to national security, economic resilience and technological sovereignty. Current debates often focus on global competition – especially between the U.S. and China – highlighting strategic investments, export controls and infrastructure diplomacy abroad. Yet far less attention has been paid to the domestic territorial transformations that make such geopolitical projection possible. This paper argues that national strategies for AI and cloud dominance depend on the reorganization of land, energy and regulatory systems to sustain large-scale computation. Using a geopolitical ecology framework, we examine how the U.S. and China build national computing power as a strategic economic and military resource. In the U.S., cloud firms operate as state-aligned actors, drawing on fragmented regulatory authority, public subsidies and national security discourse to expand into rural and peri-urban regions. China pursues a more centralized strategy through its East Data, West Computing initiative, redistributing infrastructure to inland provinces under state-led development goals. Through comparative regional analysis, we show how domestic infrastructural expansion underpins geopolitical rivalry, producing new forms of territorial governance and socio-environmental inequality. Far from immaterial, the cloud is grounded in enclosure, extraction and the spatial foundations of techno-industrial power.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165004</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Unlikely Organizers: The Rise of Tech Worker Labor Activism</title>
<link>https://hdl.handle.net/1721.1/165003</link>
<description>Unlikely Organizers: The Rise of Tech Worker Labor Activism
Tan, JS; Luka, Natalia; Mazo, Emily
Tech workers—professionals in the technology industry, such as software engineers, product managers, and UX designers—are not normally associated with labor activism. Yet, since 2017, there has been a significant rise in workplace activism over “bread-and-butter” issues among this group. Using an original data set, the authors demonstrate how, in the case of tech workers, periods of intense workplace social activism preceded later periods of heightened labor activism. Regression analysis confirms that participation in social activism increases the likelihood of labor activism six months to one year later at the same company. Extending Rick Fantasia’s cultures of solidarity to professional workers, the authors highlight a new mechanism by which professionals engage in labor organizing: First, tech workers, guided by their professional interest in socially beneficial work, engage in workplace social activism. This action generates solidarity among employee-participants but also creates conflict with management and leads to the emergence of labor activism among professionals.
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165003</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Fast detection of liver fibrosis with collagen-binding single-nanometer iron oxide nanoparticles via T1-weighted MRI</title>
<link>https://hdl.handle.net/1721.1/165002</link>
<description>Fast detection of liver fibrosis with collagen-binding single-nanometer iron oxide nanoparticles via T1-weighted MRI
Zhang, Juanye; Ning, Yingying; Zhu, Hua; Rotile, Nicholas J; Wei, He; Diyabalanage, Himashinie; Hansen, Eric C; Zhou, Iris Y; Barrett, Stephen C; Sojoodi, Mozhdeh; Tanabe, Kenneth K; Humblet, Valerie; Jasanoff, Alan; Caravan, Peter; Bawendi, Moungi G
SNIO–CBP, a single-nanometer iron oxide (SNIO) nanoparticle functionalized with a type I collagen-binding peptide (CBP), was developed as a T1-weighted MRI contrast agent with only endogenous elements for fast and noninvasive detection of liver fibrosis. SNIO–CBP exhibits 6.7-fold higher relaxivity compared to a molecular gadolinium-based collagen-binding contrast agent CM-101 on a per CBP basis at 4.7 T. Unlike most iron oxide nanoparticles, SNIO–CBP exhibits fast elimination from the bloodstream with a 5.7 min half-life, high renal clearance, and low, transient liver enhancement in healthy mice. We show that a dose of SNIO–CBP that is 2.5-fold lower than that for CM-101 has comparable imaging efficacy in rapid (within 15 min following intravenous injection) detection of hepatotoxin-induced liver fibrosis using T1-weighted MRI in a carbon tetrachloride–induced mouse liver injury model. We further demonstrate the applicability of SNIO–CBP in detecting liver fibrosis in choline-deficient L-amino acid-defined high-fat diet mouse model of nonalcoholic steatohepatitis. These results provide a platform with potential for the development of high relaxivity, gadolinium-free molecular MRI probes for characterizing chronic liver disease.
</description>
<pubDate>Mon, 24 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165002</guid>
<dc:date>2023-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Shell in a Shell: Engineering Colloidal Nanocrystals for a High-Intensity Excitation Regime</title>
<link>https://hdl.handle.net/1721.1/165001</link>
<description>Quantum Shell in a Shell: Engineering Colloidal Nanocrystals for a High-Intensity Excitation Regime
Harankahage, Dulanjan; Cassidy, James; Beavon, Jacob; Huang, Jiamin; Brown, Niamh; Berkinsky, David B; Marder, Andrew; Kayira, Barbra; Montemurri, Michael; Anzenbacher, Pavel; Schaller, Richard D; Sun, Liangfeng; Bawendi, Moungi G; Malko, Anton V; Diroll, Benjamin T; Zamkov, Mikhail
Many optoelectronic processes in colloidal semiconductor nanocrystals (NCs) suffer an efficiency decline under high-intensity excitation. This issue is caused by Auger recombination of multiple excitons, which converts the NC energy into excess heat, reducing the efficiency and life span of NC-based devices, including photodetectors, X-ray scintillators, lasers, and high-brightness light-emitting diodes (LEDs). Recently, semiconductor quantum shells (QSs) have emerged as a promising NC geometry for the suppression of Auger decay; however, their optoelectronic performance has been hindered by surface-related carrier losses. Here, we address this issue by introducing quantum shells with a CdS-CdSe-CdS-ZnS core-shell-shell-shell multilayer structure. The ZnS barrier inhibits the surface carrier decay, which increases the photoluminescence (PL) quantum yield (QY) to 90% while retaining a high biexciton emission QY of 79%. The improved QS morphology allows demonstrating one of the longest Auger lifetimes reported for colloidal NCs to date. The reduction of nonradiative losses in QSs also leads to suppressed blinking in single nanoparticles and low-threshold amplified spontaneous emission. We expect that ZnS-encapsulated quantum shells will benefit many applications exploiting high-power optical or electrical excitation regimes.
</description>
<pubDate>Tue, 06 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165001</guid>
<dc:date>2023-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of Zwitterionic CsPbBr3 Nanocrystals with Controlled Anisotropy using Surface-Selective Ligand Pairs</title>
<link>https://hdl.handle.net/1721.1/165000</link>
<description>Synthesis of Zwitterionic CsPbBr3 Nanocrystals with Controlled Anisotropy using Surface-Selective Ligand Pairs
Zhu, Hua; Kick, Matthias; Ginterseder, Matthias; Krajewska, Chantalle J; Šverko, Tara; Li, Ruipeng; Lu, Yongli; Shih, Meng‐Chen; Van Voorhis, Troy; Bawendi, Moungi G
Mechanistic studies of the morphology of lead halide perovskite nanocrystals (LHP‐NCs) are hampered by a lack of generalizable suitable synthetic strategies and ligand systems. Here, the synthesis of zwitterionic CsPbBr3 NCs is presented with controlled anisotropy using a proposed “surface‐selective ligand pairs” strategy. Such a strategy provides a platform to systematically study the binding affinity of capping ligand pairs and the resulting LHP morphologies. By using zwitterionic ligands (ZwL) with varying structures, majority ZwL‐capped LHP NCs with controlled morphology are obtained, including anisotropic nanoplatelets and nanorods, for the first time. Combining experiments with density functional theory calculations, factors that govern the ligand binding on the different surface facets of LHP‐NCs are revealed, including the steric bulkiness of the ligand, the number of binding sites, and the charge distance between binding moieties. This study provides guidance for the further exploration of anisotropic LHP‐NCs.
</description>
<pubDate>Mon, 24 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/165000</guid>
<dc:date>2023-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Photoluminescence Spectral Line Shapes of Semiconductor Nanocrystals</title>
<link>https://hdl.handle.net/1721.1/164999</link>
<description>Theory of Photoluminescence Spectral Line Shapes of Semiconductor Nanocrystals
Lin, Kailai; Jasrasaria, Dipti; Yoo, Jason J; Bawendi, Moungi; Utzat, Hendrik; Rabani, Eran
Single-molecule photoluminescence (PL) spectroscopy of semiconductor nanocrystals (NCs) reveals the nature of exciton-phonon interactions in NCs. Understanding the homogeneous spectral line shapes and their temperature dependence remains an open problem. Here, we develop an atomistic model to describe the PL spectrum of NCs, accounting for excitonic effects, phonon dispersion relations, and exciton-phonon couplings. We validate our model using single-NC measurements on CdSe/CdS NCs from T = 4 to 290 K, and we find that the slightly asymmetric main peak at low temperatures comprises a narrow zero-phonon line (ZPL) and acoustic phonon sidebands. Furthermore, we identify the specific phonon modes that give rise to the optical phonon sidebands. At temperatures above 200 K, the spectral line width shows a stronger dependence upon the temperature, which we demonstrate to be correlated with higher order exciton-phonon couplings. We also identify the line width dependence upon reorganization energy, NC core sizes, and shell thicknesses.
</description>
<pubDate>Tue, 08 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164999</guid>
<dc:date>2023-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision</title>
<link>https://hdl.handle.net/1721.1/164998</link>
<description>Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision
Chen, Chi; Luo, Xin; Kaplan, Alexander EK; Bawendi, Moungi G; Macfarlane, Robert J; Bathe, Mark
Scalable fabrication of two-dimensional (2D) arrays of quantum dots (QDs) and quantum rods (QRs) with nanoscale precision is required for numerous device applications. However, self-assembly–based fabrication of such arrays using DNA origami typically suffers from low yield due to inefficient QD and QR DNA functionalization. In addition, it is challenging to organize solution-assembled DNA origami arrays on 2D device substrates while maintaining their structural fidelity. Here, we reduced manufacturing time from a few days to a few minutes by preparing high-density DNA-conjugated QDs/QRs from organic solution using a dehydration and rehydration process. We used a surface-assisted large-scale assembly (SALSA) method to construct 2D origami lattices directly on solid substrates to template QD and QR 2D arrays with orientational control, with overall loading yields exceeding 90%. Our fabrication approach enables the scalable, high fidelity manufacturing of 2D addressable QDs and QRs with nanoscale orientational and spacing control for functional 2D photonic devices.
</description>
<pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164998</guid>
<dc:date>2023-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Rational Design of a Chemical Bath Deposition Based Tin Oxide Electron‐Transport Layer for Perovskite Photovoltaics</title>
<link>https://hdl.handle.net/1721.1/164997</link>
<description>Rational Design of a Chemical Bath Deposition Based Tin Oxide Electron‐Transport Layer for Perovskite Photovoltaics
Lu, Yongli; Shih, Meng‐Chen; Tan, Shaun; Grotevent, Matthias J; Wang, Lili; Zhu, Hua; Zhang, Ruiqi; Lee, Joo‐Hong; Lee, Jin‐Wook; Bulović, Vladimir; Bawendi, Moungi G
Chemical bath deposition (CBD) is widely used to deposit tin oxide (SnOx) as an electron-transport layer in perovskite solar cells (PSCs). The conventional recipe uses thioglycolic acid (TGA) to facilitate attachments of SnOx particles onto the substrate. However, nonvolatile TGA is reported to harm the operational stability of PSCs. In this work, a volatile oxalic acid (OA) is introduced as an alternative to TGA. OA, a dicarboxylic acid, functions as a chemical linker for the nucleation and attachment of particles to the substrate in the chemical bath. Moreover, OA can be readily removed through thermal annealing followed by a mild H2O2 treatment, as shown by FTIR measurements. Synergistically, the mild H2O2 treatment selectively oxidizes the surface of the SnOx layer, minimizing nonradiative interface carrier recombination. EELS (electron-energy-loss spectroscopy) confirms that the SnOx surface is dominated by Sn4+, while the bulk is a mixture of Sn2+ and Sn4+. This rational design of a CBD SnOx layer leads to devices with T85 ≈1500 h, a significant improvement over the TGA-based device with T80 ≈250 h. The champion device reached a power conversion efficiency of 24.6%. This work offers a rationale for optimizing the complex parameter space of CBD SnOx to achieve efficient and stable PSCs.
</description>
<pubDate>Tue, 18 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164997</guid>
<dc:date>2023-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>Reduced recombination via tunable surface fields in perovskite thin films</title>
<link>https://hdl.handle.net/1721.1/164996</link>
<description>Reduced recombination via tunable surface fields in perovskite thin films
deQuilettes, Dane W; Yoo, Jason J; Brenes, Roberto; Kosasih, Felix Utama; Laitz, Madeleine; Dou, Benjia Dak; Graham, Daniel J; Ho, Kevin; Shi, Yangwei; Shin, Seong Sik; Ducati, Caterina; Bawendi, Moungi G; Bulović, Vladimir
The ability to reduce energy loss at semiconductor surfaces through passivation or surface field engineering is an essential step in the manufacturing of efficient photovoltaic (PV) and optoelectronic devices. Similarly, surface modification of emerging halide perovskites with quasi-two-dimensional (2D) heterostructures is now ubiquitous to achieve PV power conversion efficiencies (PCEs) &gt;25%, yet a fundamental understanding of how these treatments function is still generally lacking. Here we use a unique combination of depth-sensitive nanoscale characterization techniques to uncover a tunable passivation strategy and mechanism found in perovskite PV devices that were the first to reach the &gt;25% PCE milestone. Namely, treatment with hexylammonium bromide leads to the simultaneous formation of an iodide-rich 2D layer along with a Br halide gradient that extends from defective surfaces and grain boundaries into the bulk three-dimensional (3D) layer. This interface can be optimized to extend the charge carrier lifetime to record values &gt;30 μs and to reduce interfacial recombination velocities to values as low as &lt;7 cm s−1.
</description>
<pubDate>Wed, 28 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164996</guid>
<dc:date>2024-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Solution-phase sample-averaged single-particle spectroscopy of quantum emitters with femtosecond resolution</title>
<link>https://hdl.handle.net/1721.1/164995</link>
<description>Solution-phase sample-averaged single-particle spectroscopy of quantum emitters with femtosecond resolution
Shi, Jiaojian; Shen, Yuejun; Pan, Feng; Sun, Weiwei; Mangu, Anudeep; Shi, Cindy; McKeown-Green, Amy; Moradifar, Parivash; Bawendi, Moungi G; Moerner, WE; Dionne, Jennifer A; Liu, Fang; Lindenberg, Aaron M
The development of many quantum optical technologies depends on the availability of single quantum emitters with near-perfect coherence. Systematic improvement is limited by a lack of understanding of the microscopic energy flow at the single-emitter level and ultrafast timescales. Here we utilize a combination of fluorescence correlation spectroscopy and ultrafast spectroscopy to capture the sample-averaged dynamics of defects with single-particle sensitivity. We employ this approach to study heterogeneous emitters in two-dimensional hexagonal boron nitride. From milliseconds to nanoseconds, the translational, shelving, rotational and antibunching features are disentangled in time, which quantifies the normalized two-photon emission quantum yield. Leveraging the femtosecond resolution of this technique, we visualize electron–phonon coupling and discover the acceleration of polaronic formation on multi-electron excitation. Corroborated with theory, this translates to the photon fidelity characterization of cascaded emission efficiency and decoherence time. Our work provides a framework for ultrafast spectroscopy in heterogeneous emitters, opening new avenues of extreme-scale characterization for quantum applications.
</description>
<pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164995</guid>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Additive‐Free Oxidized Spiro‐MeOTAD Hole Transport Layer Significantly Improves Thermal Solar Cell Stability</title>
<link>https://hdl.handle.net/1721.1/164994</link>
<description>Additive‐Free Oxidized Spiro‐MeOTAD Hole Transport Layer Significantly Improves Thermal Solar Cell Stability
Grotevent, Matthias J; Lu, Yongli; Šverko, Tara; Shih, Meng‐Chen; Tan, Shaun; Zhu, Hua; Dang, Tong; Mwaura, Jeremiah K; Swartwout, Richard; Beiglböck, Finn; Kothe, Linda; Bulović, Vladimir; Bawendi, Moungi G
Perovskite solar cells are among the most promising new solar technologies, already surpassing polycrystalline silicon solar cell efficiencies. The stability of the highest efficiency devices at elevated temperature is, however, poor. These cells typically use Spiro‐MeOTAD as the hole transporting layer. It is generally believed that additives, required for enhancing electrical conductivity and optimizing energy level alignment, are responsible for the reduced stability—inferring that Spiro‐MeOTAD based hole transporting layers are intrinsically unstable. Here, a reliable, noble-metal-free synthesis of Spiro‐MeOTAD(bis(trifluoromethane)sulfonimide)4, which is used as the oxidizing agent, is presented. No additives are added to the partially oxidized Spiro‐MeOTAD hole‐transporting layer. Device efficiencies up to 24.2% are achieved. Electrical conductivity is largely developed by the first 1% oxidation. Further oxidation shifts the energy levels away from the vacuum level, which allows tuning of the energy level alignment without the use of additives—contradicting the current understanding of this system. Without additives, devices demonstrate significant improvement in stability at elevated temperatures up to 85 °C under one sun over 1400 h of continuous illumination. The remaining degradation is pinpointed to ion migration and reactions in the perovskite layer, which may be further suppressed with compositional engineering and adequate ion barrier layers.
</description>
<pubDate>Thu, 06 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164994</guid>
<dc:date>2024-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Bright and Fast Emission from Robust Supramolecular J-Aggregate Nanostructures through Silica-Encapsulation</title>
<link>https://hdl.handle.net/1721.1/164993</link>
<description>Bright and Fast Emission from Robust Supramolecular J-Aggregate Nanostructures through Silica-Encapsulation
Thanippuli Arachchi, Dimuthu H; Barotov, Ulugbek; Perkinson, Collin F; Šverko, Tara; Kaplan, Alexander EK; Bawendi, Moungi G
We introduce a two-step silica-encapsulation procedure to optimize both the optical efficiency and structural robustness of 5,5',6,6'-tetrachloro-1,1'-diethyl-3,3'-di(4-sulfobutyl)-benzimidazolocarbocyanine (TDBC), a two-dimensional sheet-like J-aggregate. We report a fluorescence quantum yield of ∼98%, the highest quantum yield recorded for any J-aggregate structure at room temperature, and a fast, emissive lifetime of 234 ps. Silica, as an encapsulating matrix, provides optical transparency, chemical inertness, and robustness to dilution, while rigidifying the J-aggregate structure. Our in situ encapsulation process preserves the excitonic structure in TDBC J-aggregates, maintaining their light absorption and emission properties. The homogeneous silica coating has an average thickness of 0.5-1 nm around J-aggregate sheets. Silica encapsulation permits extensive dilutions of J-aggregates without significant disintegration into monomers. The narrow absorbance and emission line widths exhibit further narrowing upon cooling to 79 K, which is consistent with J-type coupling in the encapsulated aggregates. This silica TDBC J-aggregate construct signifies (1) a bright, fast, and robust fluorophore system, (2) a platform for further manipulation of J-aggregates as building blocks for integration with other optical materials and structures, and (3) a system for fundamental studies of exciton delocalization, transport, and emission dynamics within a rigid matrix.
</description>
<pubDate>Wed, 24 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164993</guid>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Toward biophysical markers of depression vulnerability</title>
<link>https://hdl.handle.net/1721.1/164992</link>
<description>Toward biophysical markers of depression vulnerability
Pinotsis, DA; Fitzgerald, S; See, C; Sementsova, A; Widge, AS
A major difficulty with treating psychiatric disorders is their heterogeneity: different neural causes can lead to the same phenotype. To address this, we propose describing the underlying pathophysiology in terms of interpretable, biophysical parameters of a neural model derived from the electroencephalogram. We analyzed data from a small cohort of patients with depression and controls. Using dynamic causal modeling (DCM), we constructed biophysical models that describe neural dynamics in a cortical network activated during a task that is used to assess depression state. We show that biophysical model parameters are biomarkers, that is, variables that allow subtyping of depression at a biological level. They yield a low dimensional, interpretable feature space that allowed description of differences between individual patients with depressive symptoms. They could capture internal heterogeneity/variance of depression state and achieve significantly better classification than commonly used EEG features. Our work is a proof of concept that a combination of biophysical models and machine learning may outperform earlier approaches based on classical statistics and raw brain data.
</description>
<pubDate>Tue, 18 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164992</guid>
<dc:date>2022-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks</title>
<link>https://hdl.handle.net/1721.1/164991</link>
<description>Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
Karantzas, Nikos; Besier, Emma; Ortega Caro, Josue; Pitkow, Xaq; Tolias, Andreas S.; Patel, Ankit B.; Anselmi, Fabio
Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias, but we find that this bias also depends on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
</description>
<pubDate>Tue, 12 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164991</guid>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancement of Cyanobacterial Bloom Monitoring in Lake Taihu Using Dual Red-Edge Bands of GF-6/WFV: Multi-Dimensional Feature Combination and Extraction Accuracy Analysis</title>
<link>https://hdl.handle.net/1721.1/164990</link>
<description>Enhancement of Cyanobacterial Bloom Monitoring in Lake Taihu Using Dual Red-Edge Bands of GF-6/WFV: Multi-Dimensional Feature Combination and Extraction Accuracy Analysis
Sun, Yunxiao; Zhang, Ruolin; Zhao, Chunhong; Meng, Qingyan; Sun, Zhenhui; Wang, Jialong; Wu, Jun; Wang, Yao; Gao, Decai; Guan, Huyi
Cyanobacterial blooms pose a serious threat to freshwater ecosystems, necessitating accurate remote sensing monitoring. Although red-edge bands show potential in terrestrial monitoring, their multi-dimensional features (i.e., spectral, textural, and index-based characteristics) remain underutilized for aquatic blooms. This study leverages the dual red-edge bands (710 nm and 750 nm) of GF-6/WFV to enhance cyanobacterial bloom identification in Lake Taihu. Multi-temporal images from 2019–2023 were used to construct red-edge features in three dimensions: spectral (evaluated via an adaptive band selection method and the Jeffries–Matusita–Bhattacharyya distance), texture (based on Gray Level Co-occurrence Matrix and principal component analysis), and indices (nine vegetation indices ranked by Random Forest importance). Twelve feature-combination schemes were designed and implemented with a Random Forest classifier. Results show that red-edge features consistently improve identification accuracy. Quantitatively, compared to the basic four-band (RGBN) combination, the 710 nm band improved spectral separability by an average of 9.63%, whereas the 750 nm band yielded a lower average improvement of 5.69%. Red-edge indices, especially the modified chlorophyll absorption reflectance index 1 (MCARI1) and normalized difference red-edge index (NDRE), exhibited higher importance than non-red-edge indices. All schemes incorporating red-edge features achieved mean overall accuracies of 92.8–94.9% and Kappa coefficients of 0.86–0.94, surpassing the basic four-band scheme. Among these features, red-edge indices contributed most significantly to accuracy gains, increasing the overall accuracy by an average of 0.36–6.06% and the Kappa coefficient by up to 0.06. The enhancement effect of the red-edge 710 nm band features was superior to that of the 750 nm band.
This study demonstrates that multi-dimensional red-edge features effectively enhance the identification accuracy of cyanobacterial blooms and provides a methodological reference for operational GF-6 applications in water quality monitoring.
</description>
<pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164990</guid>
<dc:date>2026-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Biological Activity of Metal Complexes</title>
<link>https://hdl.handle.net/1721.1/164989</link>
<description>Biological Activity of Metal Complexes
Sharma, Vinay K.
Metal complexes play a fundamental role in biological systems and continue to attract sustained interest due to their remarkable potential in therapeutic, diagnostic, and biotechnological applications [1–8]. In recent years, the field of bioinorganic chemistry has advanced rapidly, driven by progress in coordination chemistry, spectroscopy, nanotechnology, and molecular biology [9–22]. These developments have enabled a deeper understanding of how metal ions and complexes interact with biomolecular targets and have opened new avenues for the rational design of metal-based agents for cancer therapy, antimicrobial treatment, imaging, and the study of metal-mediated biochemical processes [23–30].
</description>
<pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164989</guid>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Single Parameter Model for Galaxy Rotation Curves</title>
<link>https://hdl.handle.net/1721.1/164988</link>
<description>Single Parameter Model for Galaxy Rotation Curves
Cisneros, Sophia N.; Ott, Rich; Crowley, Meagan; Roberts, Amy; Paz, Marcus
One key piece of evidence for dark matter is the rotation-curve problem: the disagreement between measured galactic rotation curves and their luminous mass. A novel solution to this problem is presented here, in a model that predicts observed Doppler-shifted spectra based only on the luminous matter estimates and one free model parameter α. This model is applied to fit the rotation curves of the SPARC sample of 175 galaxies, yielding mass-to-light ratios, goodness-of-fit measurements, and α. The measured average reduced chi-squared χ²ᵣ = 2.24 compares favorably with the Navarro-Frenk-White dark matter model’s average of χ²ᵣ = 4.19 for the same data, and more galaxies are successfully fit by this model. The model provides a useful formulation linking luminous matter to the observed rotation curves, with the dark matter contribution to galaxies encoded in two transformation terms of the luminous mass. It also offers a lower-parameter characterization of the rotation curve problem, and a power-law relationship between α and galactic photometric quantities is observed, potentially removing the need for the free parameter.
</description>
<pubDate>Sun, 15 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164988</guid>
<dc:date>2026-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Clinical Safety of GRAd Vector-Based COVID-19 and HIV Vaccines Supports a Platform Regulatory Approach</title>
<link>https://hdl.handle.net/1721.1/164987</link>
<description>Non-Clinical Safety of GRAd Vector-Based COVID-19 and HIV Vaccines Supports a Platform Regulatory Approach
Paalangara, Reji; Gohin, Stephanie; Menard, Alexis; Amy, Charlotte; Berrabah, Wahiba; Rogue, Alexandra; Getz, Matthew A.; Alrubayyi, Aljawharah; Battella, Simone; Raggioli, Angelo; Gentile, Michela; Di Rita, Anthea; Noto, Alessia; Miselli, Giuseppina; Grazioli, Fabiana; Napolitano, Federico; Sowcik, Dhurata; Soriani, Marco; Chmielewski, Benjamin; Molife, Lebohang
Background/Objectives: The rapid development of safe and efficacious vaccines is often hindered by extensive, mandated non-clinical safety evaluations in animals. With the aim of providing scientific evidence supporting a “vaccine platform approach”, here we present the complete non-clinical studies for two investigational vaccines, GRAd-COV2 and GRAdHIVNE1, based on GRAd, a gorilla-derived group C adenoviral vector. Methods: The biodistribution of GRAd genomes following the intramuscular administration of the vaccines was assessed in rats by a sensitive qPCR method. Local tolerance and systemic toxic effects were evaluated in single- and repeated-dose toxicity studies in rabbits. Results: GRAd-COV2 and GRAdHIVNE1 were well-tolerated. Distribution was highly confined to the injection site and draining lymph nodes, and the toxicity profile consisted of transient, non-adverse inflammatory responses, while the expected immune responses to the encoded antigens were successfully induced. Notably, both vaccines demonstrated a consistent safety profile despite transgene and backbone differences, comparable to other replication-defective adenoviral vectors. Conclusions: The established non-clinical safety profile of the GRAd platform provides a robust foundation for a more efficient and streamlined regulatory pathway. By leveraging this prior knowledge, future GRAd-based vaccines can achieve accelerated clinical development while fully adhering to the ethical principles of replacement, reduction, and refinement of animal use in research.
</description>
<pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164987</guid>
<dc:date>2026-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Design and User-Centered Field Evaluation of an Accessible Precision Irrigation Tool and Its Human–Machine Interaction on a Jordanian Farm</title>
<link>https://hdl.handle.net/1721.1/164986</link>
<description>Design and User-Centered Field Evaluation of an Accessible Precision Irrigation Tool and Its Human–Machine Interaction on a Jordanian Farm
Van de Zande, Georgia D.; Sheline, Carolyn; Pratt, Shane R.; Winter V, Amos G.
This work aims to demonstrate the successful, long-term human use of an automatic scheduling-manual operation (AS-MO) precision irrigation tool by farmers on a medium-scale Jordanian farm. Innovation in low-cost, accessible, and water-efficient irrigation technologies is critical as water resources become scarce, especially on resource-constrained farms in the drought-prone Middle East and North Africa (MENA) region. Prior work has shown that a proposed AS-MO decision support tool could bridge the gap between fully manual irrigation—a common practice on many MENA farms—and existing precision agriculture solutions, which are often too expensive or complex for medium-scale farmers to adopt. Recent developments have also demonstrated that the scheduling theory behind the proposed AS-MO tool uses up to 44% less water compared to fully manual irrigation. However, a functional design of the AS-MO tool has not been realized, nor has it been demonstrated on a farm with farmer users. This work documents the detailed design of an AS-MO tool’s human–machine interaction (HMI) and validates the human execution of the tool in context. Through an 11-week case study conducted on a Jordanian farm, we show that farmers used a functional prototype of the AS-MO tool as intended. The functional tool prototype was designed to deliver a long-term AS-MO user experience to study participants. The prototype monitored local weather conditions, generated water-efficient schedules using an existing scheduling theory, and notified users’ phones when they should manually open or close valves. The irrigation practices of participants using the AS-MO prototype were measured, and participants demonstrated successful use of the tool. Users correctly confirmed 93% of the scheduled events using the tool’s HMI.
Despite manual operation, a majority of confirmed irrigation event durations fell within 15% of the automatically scheduled durations; relative to the length of scheduled irrigation event durations, the medians of confirmed and scheduled durations were 102% and 88%, respectively. These results demonstrate the success of the tool’s decision support ability. Feedback from study participants can support the AS-MO tool’s next design iteration and can inform the development of other decision support systems designed for resource-constrained, medium-scale farms. This work presents an important step towards developing a precision irrigation tool that, if adopted at scale, could increase the adoption of water-efficient irrigation practices on resource-constrained farms that are not served by existing technology, improving sustainable agriculture in MENA.
</description>
<pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164986</guid>
<dc:date>2026-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of Genipin Matrix Augmentation on the Retention of Glycosaminoglycans in the Intervertebral Disc—A Pilot Study</title>
<link>https://hdl.handle.net/1721.1/164985</link>
<description>The Effect of Genipin Matrix Augmentation on the Retention of Glycosaminoglycans in the Intervertebral Disc—A Pilot Study
Hedman, Thomas; Brown, Matthew; Slusarewicz, Pawel
The degradation of intervertebral disc proteoglycans, including the loss or shortening of their hydrophilic glycosaminoglycan chains, causes a loss of disc hydration, leading to an increase in solid matrix stresses. This illustrates one aspect of the complex multifactorial relationship between tissue degradation and the resulting mechanical dysfunction. Genipin matrix augmentation has previously been evaluated with regard to its ability to improve mechanical properties of the disc, increasing joint stability and permeability. The study aim was to evaluate the ability of genipin augmentation to increase retention of glycosaminoglycans in disc specimens subjected to free swelling. Three different models were utilized: whole bovine caudal discs, partial annulus specimens from bovine discs, and human thoracic discs. Total glycosaminoglycan release to a surrounding bath was quantified using a modified dimethyl-methylene blue assay. Genipin solution injections reduced glycosaminoglycan loss by 44.0% in intact bovine discs compared to buffer-only controls (p = 0.027), by 75.8% in partial bovine annulus specimens (p = 0.0004), and by 51.9% in human annulus specimens (p = 0.017). The combination of increased permeability and glycosaminoglycan retention may produce beneficial effects on nutritional flow, diurnal irrigation, and reduction of matrix solid phase stress. Combining these effects with the ability to improve joint stability and augment tissue mechanical properties suggests this nano-scale device may be capable of arresting ongoing degeneration.
</description>
<pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164985</guid>
<dc:date>2026-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression</title>
<link>https://hdl.handle.net/1721.1/164984</link>
<description>A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression
Parthasarathy, Rishab; Bhowmik, Achintya K.
Despite significant medical advancements, cancer remains the second leading cause of death in the US, causing over 600,000 deaths per year. One emerging field, pathway analysis, is promising but still relies on manually derived wet lab data, which is time-consuming to acquire. This work proposes an efficient, effective, end-to-end framework for Artificial Intelligence (AI)-based pathway analysis that predicts both cancer severity and mutation progression in order to recommend possible treatments. The proposed technique involves a novel combination of time-series machine learning models and pathway analysis. First, mutation sequences were isolated from The Cancer Genome Atlas (TCGA) Database. Then, a novel preprocessing algorithm was used to filter key mutations by mutation frequency. This data was fed into a Recurrent Neural Network (RNN) that predicted cancer severity. The model probabilistically used the RNN predictions, information from the preprocessing algorithm, and multiple drug-target databases to predict future mutations and recommend possible treatments. This framework achieved robust results, with Receiver Operating Characteristic (ROC) curves yielding accuracies greater than 60%, comparable to existing cancer diagnostics. In addition, preprocessing played a key role in isolating a few hundred key driver mutations per cancer stage, consistent with current research. Heatmaps based on predicted gene frequency were also generated, highlighting key mutations in each cancer. Overall, this work is the first to propose an efficient, cost-effective end-to-end framework for projecting cancer prognosis and providing possible treatments without relying on expensive, time-consuming wet lab work.
</description>
<pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164984</guid>
<dc:date>2026-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>All-Pay Auctions with Different Forfeits</title>
<link>https://hdl.handle.net/1721.1/164983</link>
<description>All-Pay Auctions with Different Forfeits
Kang, Benjamin; Unwin, James
In an auction, each party bids a certain amount, and the one who bids the highest is the winner. Interestingly, auctions can also be used as models for other real-world systems. In an all-pay auction, all parties must pay a forfeit for bidding. In the most commonly studied all-pay auction, parties forfeit their entire bid, and this has been considered as a model for expenditure on political campaigns. Here, we consider a number of alternative forfeits that might be used as models for different real-world competitions, such as preparing bids for defense or infrastructure contracts.
</description>
<pubDate>Fri, 09 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164983</guid>
<dc:date>2026-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Spontaneous formation of robust two-dimensional perovskite phases</title>
<link>https://hdl.handle.net/1721.1/164982</link>
<description>Spontaneous formation of robust two-dimensional perovskite phases
Tan, Shaun; Shih, Meng-Chen; Lu, Yongli; Choi, Seung-Gu; Dong, Yifan; Lee, Joo-Hong; Yavuz, Ilhan; Larson, Bryon W; Park, So Yeon; Kodalle, Tim; Zhang, Ruiqi; Grotevent, Matthias J; Lin, Yu-Kuan; Zhu, Hua; Bulović, Vladimir; Sutter-Fella, Carolin M; Park, Nam-Gyu; Beard, Matthew C; Lee, Jin-Wook; Zhu, Kai; Bawendi, Moungi G
The two-dimensional on three-dimensional (2D/3D) perovskite bilayer heterostructure can improve the stability and performance of perovskite solar cells. We show that the 2D/3D perovskite stack in a device evolves dynamically during its end-of-life decomposition. Initially phase-pure 2D interlayers can evolve differently, resulting in different device stabilities. We show that a robust 2D interlayer can be formed using mixed solvents to regulate its crystallinity and phase purity. The resulting 2D/3D devices achieved 25.9% efficiency and had good durability, retaining 91% of their initial performance after 1074 hours at 85°C using maximum power point tracking.
</description>
<pubDate>Thu, 08 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164982</guid>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Third-order photon correlations extract single-nanocrystal multiexciton properties in solution</title>
<link>https://hdl.handle.net/1721.1/164981</link>
<description>Third-order photon correlations extract single-nanocrystal multiexciton properties in solution
Horowitz, Jonah R; Berkinsky, David B; Bendekgey, Henry C; Tye, Oliver J; Šverko, Tara; Shulenberger, Katherine E; Bawendi, Moungi G
Colloidal semiconductor nanocrystals are considered promising materials for high-flux optical applications, including lasing, light-emitting diodes, biological imaging, and quantum optics. In high-flux applications, multiexcitons can significantly contribute to emission, influencing its brightness, spectral purity, and kinetics. As a result, understanding and controlling multiexciton emission in colloidal nanocrystal materials is of the utmost importance. In the past, single-nanocrystal photon correlation methods have been applied to understand biexciton and triexciton efficiencies, lifetimes, and spectra. While powerful, such methods suffer from user selection bias and require stable emission from single nanocrystals. To compensate for these shortcomings, second-order correlation methods were developed to extract sample-averaged biexciton properties from a solution of nanocrystals. Until now, however, the analogous third-order solution photon correlation methods remained unexplored. In this work, we present a pair of third-order photon correlation techniques to obtain the sample-averaged single-nanocrystal triexciton quantum yield and lifetime in a solution-phase experiment. These techniques derive from the relationship between the Poisson probability of nanocrystal photon absorption and the intrinsic probability of nanocrystal photon emission. We validate the theoretical background of these techniques by creating a numerical model to simulate the diffusion and emission of many nanocrystals in solution. Our simulations confirm that the average triexciton quantum yield and triexciton lifetime can be extracted from a solution of nanocrystals. These techniques will enable researchers to gain a better understanding of the fundamental multiexciton properties of colloidal nanocrystals.
</description>
<pubDate>Mon, 28 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164981</guid>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Challenges of II‐VI and III‐V Blue Quantum Dot Light‐Emitting Diodes</title>
<link>https://hdl.handle.net/1721.1/164980</link>
<description>Challenges of II‐VI and III‐V Blue Quantum Dot Light‐Emitting Diodes
Tan, Shaun; Horowitz, Jonah R; Tye, Oliver J; Bawendi, Moungi G
Quantum dot light-emitting diodes (QD-LEDs) are electroluminescent devices where the emissive layer consists of inorganic colloidal quantum dots. Recent breakthroughs have enabled the development of bright and efficient blue-emitting QD-LEDs based on heavy metal-free compositions. However, challenges remain that hinder their practical application in electroluminescent displays and lighting technologies. The primary obstacle is their limited operational lifetimes which remain significantly below practical requirement standards, especially in comparison to the red- and green-emitting QD-LEDs. Another important issue is the low color purity and broad spectral linewidths of heavy metal-free blue quantum dot compositions. Additional problems include transient electroluminescent behaviors such as fluorescence intermittency and positive aging effects. This review examines the current understanding of the physical mechanisms underlying these challenges faced by blue QD-LEDs. Often, contradictory explanations are proposed to account for the same phenomenon. Here, potential interpretations are suggested that may help reconcile the conflicting reports. Recent advances are further examined that have contributed to the development of state-of-the-art blue QD-LEDs.
</description>
<pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164980</guid>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Processing Environment on Anti-Solvent Free FAPbI3 Films and Solar Cells</title>
<link>https://hdl.handle.net/1721.1/164979</link>
<description>Impact of Processing Environment on Anti-Solvent Free FAPbI3 Films and Solar Cells
Wall, Elizabeth M; Lin, Yu‐Kuan; Bawendi, Moungi; Burlingame, Quinn C; Loo, Yueh‐Lin
As perovskite solar cells approach commercialization, understanding the environmental sensitivities of perovskites during fabrication becomes increasingly important. In this work, the humidity-dependence of each deposition and annealing step in the anti-solvent-free two-step formamidinium lead iodide fabrication process is investigated in air and N2. In-situ grazing-incidence wide-angle X-ray scattering measurements during spin-coating indicate that humidity affects the formation and dynamics of intermediate phases in perovskite precursor films. These differences, and those induced by annealing in humidity, impact the structure, morphology, and composition of resultant perovskite films, though the initial performance of solar cells fabricated using these active layers is relatively insensitive to humidity across the range studied. In contrast, stability is maximized in devices with dry-processed active layers and those terminally annealed in humidity. Spin-coating of PbI2 is the most environmentally sensitive step—needle-like structures precipitate during spin-coating at 40% relative humidity, leading to significantly reduced photovoltaic performance and device stability. Additionally, films and solar cells fabricated in air appear virtually identical to those fabricated in N2. Collectively, these results show that optimal performance and stability of two-step processed formamidinium lead iodide solar cells are achieved when fabricating active layers in a dry atmosphere or with some humidity during the final anneal.
</description>
<pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164979</guid>
<dc:date>2025-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Cognitive Reinforcement: Capturing Tacit Knowledge and Enhancing Expertise with a Biofeedback Interface for Visual Attention</title>
<link>https://hdl.handle.net/1721.1/164978</link>
<description>Cognitive Reinforcement: Capturing Tacit Knowledge and Enhancing Expertise with a Biofeedback Interface for Visual Attention
Armengol-Urpi, Alexandre; Salazar-Gomez, Andres F.; Sinha, Pawan; Sarma, Sanjay E.
Objective. Tacit or implicit knowledge refers to know-how that experts possess but often cannot articulate, codify, or explicitly transfer to others. This can present a significant challenge for learning, skill acquisition, and knowledge transfer across various domains, including those that rely on apprenticeships, craftsmanship, sports, and medical imaging diagnosis. This study explores whether expert tacit knowledge can be accessed and leveraged using an EEG and gaze-informed biofeedback interface to enhance expertise transfer and training. Approach. We designed an image classification task where novices were trained until they implicitly learned to classify images correctly, despite being unaware of which image regions or features guided their decisions. The task involved images with a hidden spatial asymmetry that even trained participants did not explicitly recognize. Using combined eye-tracking and EEG measures, we tracked both overt and covert visual attention to determine whether individuals unconsciously internalized this asymmetry during learning. We then investigated whether providing explicit gaze-informed feedback on their own implicit attention biases could further improve task performance of trained participants. Main Results. Our findings reveal that as participants became trained, their attention patterns—both overt and covert—consistently reflected an unconscious awareness of image asymmetry, with attention biased toward task-relevant image regions. Moreover, trained individuals who received explicit feedback derived from their own gaze behavior showed additional improvements in classification performance compared to an equally trained control group. Significance. These results open the door to novel uses of biofeedback interfaces to facilitate new forms of expertise transfer, training, and collective intelligence. By extracting and conveying tacit expert knowledge—ordinarily difficult to externalize—our interface enables its transmission to novices, trained individuals, or even machine learning systems. We refer to this process as cognitive reinforcement.
</description>
<pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164978</guid>
<dc:date>2026-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Host–Guest Complexation by β-Cyclodextrin Enhances the Solubility of an Esterified Protein</title>
<link>https://hdl.handle.net/1721.1/164975</link>
<description>Host–Guest Complexation by β-Cyclodextrin Enhances the Solubility of an Esterified Protein
Cheah, Keith M; Jun, Joomyung V; Wittrup, K Dane; Raines, Ronald T
The carboxyl groups of a protein can be esterified by reaction with a diazo compound, 2-diazo-2-(p-methylphenyl)-N,N-dimethylacetamide. This esterification enables the entry of the protein into the cytosol of a mammalian cell, where the nascent ester groups are hydrolyzed by endogenous esterases. The low aqueous solubility of the ensuing esterified protein is, however, a major practical challenge. Solubility screening revealed that β-cyclodextrin (β-CD) is an optimal solubilizing agent for esterified green fluorescent protein (est-GFP). Its addition can increase the recovery of est-GFP by 10-fold. α-CD, γ-CD, and cucurbit-7-uril are less effective excipients. 1H NMR titration experiments revealed that β-CD encapsulates the hydrophobic tolyl group of ester conjugates with Ka = 321 M–1. Combining l-arginine and sucrose with β-CD enables the nearly quantitative recovery of est-GFP. Thus, the insolubility of esterified proteins can be overcome with excipients.
</description>
<pubDate>Mon, 29 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164975</guid>
<dc:date>2022-08-29T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the Role of Kinematic and Behavioral Features in Driver-Pedestrian Interaction across Environments: An Inverse Reinforcement Learning Approach</title>
<link>https://hdl.handle.net/1721.1/164974</link>
<description>Quantifying the Role of Kinematic and Behavioral Features in Driver-Pedestrian Interaction across Environments: An Inverse Reinforcement Learning Approach
Noonan, T Zach; Gershon, Pnina; Domeyer, Josh; Mehler, Bruce; Reimer, Bryan
This study examined real-world driver-pedestrian encounters to identify key interaction features and assess how the importance of these features is mediated by protection afforded by the environment. Using inverse reinforcement learning, we estimated the utility functions to evaluate the relative importance of different aspects of the interaction for each road user and how they differ between undesignated (e.g., jaywalking) and designated (e.g., zebra crossings) crossings. Pedestrian pausing behavior and dynamic features like acceleration changes and time gaps were important at designated crossings, whereas undesignated crossings relied on distances and bidirectional gaze, highlighting reliance on non-verbal cues. This work builds on previous studies analyzing the role of environmental features on interaction, communication, and negotiation between drivers and pedestrians. Understanding driver-pedestrian communication and identifying the most important interaction features may enhance the design of effective and coordinated driver-pedestrian interaction strategies, especially in the context of automated driving systems.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164974</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Preoperative Function, Previous SERM Treatment, and Triple-Negative Tumor Status are Independently Associated With 3-Month Postoperative Function After Surgical Decompression of Metastatic Breast Cancer</title>
<link>https://hdl.handle.net/1721.1/164973</link>
<description>Preoperative Function, Previous SERM Treatment, and Triple-Negative Tumor Status are Independently Associated With 3-Month Postoperative Function After Surgical Decompression of Metastatic Breast Cancer
Siraj, Layla; Duvall, Julia B.; Massaad, Elie; Fourman, Mitchell S.; Shin, John H.
Background:
The most common cancer in women worldwide, breast cancer most often metastasizes to the bone. Improved chemo- and radiotherapies and novel molecular therapies have prolonged survival in women with osseous metastatic breast cancer, but spinal metastases often cause cord compression that degrades their functional independence.
Purpose:
In women with breast cancer metastasized to the spine, we sought to (1) identify independent predictors of a functional deficit 3 months after surgical management and (2) assess the utility of existing metrics at highlighting patients at risk of a postoperative functional deficit.
Methods:
We performed a single-institution, retrospective analysis of 92 patients meeting our inclusion criteria between 2004 and 2021. Patients were classified by 3-month postoperative Eastern Cooperative Oncology Group (ECOG) scores into good/independent (ECOG 0 to 2) and poor/dependent (ECOG 3 to 5) functional outcome groups. Univariate and multivariate analyses were performed to identify patient and tumor factors associated with good vs. poor 3-month ECOG scores.
Results:
Preoperative use of selective estrogen receptor modulators (SERMs) was significantly associated with good postoperative functional outcomes. Poor preoperative function, the presence of visceral metastases at the time of surgery, and triple-negative primary or metastatic tumor status were independently associated with poor 3-month postoperative function. Host characteristics, sociodemographic factors, and indicators of surgical complexity, including estimated blood loss, front/back surgery, and corpectomy reconstruction, were not associated with 3-month ECOG score. A multivariate model including these significant univariate associations and normalized for patient demographics identified preoperative SERM use, poor preoperative function (ECOG score), and triple-negative primary or metastatic tumor status as independently associated with functional status 3 months after surgery.
Conclusions:
Our retrospective analysis found that preoperative SERM use was significantly associated with improved postoperative functional outcomes, while poor preoperative function and triple-negative tumor status were significantly associated with poor function 3 months after surgery. These factors may serve as indicators of function and independence after surgery for patients with metastatic breast cancer to the spine.
Level of Evidence:
Level IV: Prognostic Study
</description>
<pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164973</guid>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Influence of Prior Semantic Knowledge in Noisy Channel Interpretation</title>
<link>https://hdl.handle.net/1721.1/164972</link>
<description>The Influence of Prior Semantic Knowledge in Noisy Channel Interpretation
Chen, Sihan; Washington, Lia; Gibson, Edward
How do comprehenders interpret semantically implausible sentences? Previous studies proposed a noisy-channel framework of sentence comprehension, in which communication between a speaker and a comprehender happens over a noisy channel. The comprehender rationally adopts an interpretation of a sentence based on how likely the interpretation is (the semantic prior) and how likely it is that the interpretation was corrupted into the perceived sentence by noise (the likelihood). The theory predicted that comprehenders would be more likely to adopt a literal interpretation of an implausible sentence if their prior for implausible sentences were higher. To test this hypothesis, Gibson et al. manipulated the proportion of implausible test sentences in two sets of experiments in which participants read a number of sentences and answered a comprehension question following each sentence. Although their results supported the hypothesis, the experiment could have been confounded (a) by participants’ adaptation effects (due to different experiment lengths) and (b) by different participants using different strategies for the task (due to the between-subject design). In our study, we manipulated the semantic prior and controlled for these potential confounds. We found that participants exposed to more implausible sentences were indeed more likely to interpret implausible sentences literally. Our results hence offer additional support for the noisy-channel framework.
</description>
<pubDate>Thu, 25 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164972</guid>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>The Politics of Engagement in Platform Governance</title>
<link>https://hdl.handle.net/1721.1/164971</link>
<description>The Politics of Engagement in Platform Governance
Lewis, Becca; Christin, Angèle
In recent years, the concept of user engagement has dominated debate over the governance of online platforms, and critics use the term to assign crass commercial interests to social media companies. We argue that social media engagement is a multifaceted ideal that serves both economic and ideological functions for platforms. We show how Facebook’s early leadership used the concept to reconcile the competing demands of expansion, revenue generation, and community-building. In doing so, they synthesized three distinct ideas: the Silicon Valley belief that network expansion correlated with network strength, the ad industry’s contention that media should promote emotional investment from viewers, and the academic claim that civic participation is the most important democratic virtue. Even as the contradictions that these claims yield have come to the foreground, the multiple logics of engagement have proven difficult to evade, and the concept continues to shape discussions of platform governance.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164971</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Electrification and Partial Automation on Driver Speeding Behavior</title>
<link>https://hdl.handle.net/1721.1/164970</link>
<description>The Impact of Electrification and Partial Automation on Driver Speeding Behavior
Gershon, Pnina; Noonan, T Zach; Lenneman, John
As electric vehicles (EVs) and partial automation systems become increasingly prevalent, their impact on everyday driving behavior remains underexplored. This study utilizes real-world naturalistic data to examine how vehicle type (electric versus internal combustion engine, ICE) and the use of partial automation are associated with speeding behavior. Data were collected from 24 drivers over the course of a month each, comparing Tesla Model 3s with Autopilot (EV) and Cadillac CT6s with Super Cruise (ICE), covering about 38,000 miles of driving. Results indicate that EV drivers tended to speed for shorter durations on arterial roads but exhibited higher speeding magnitudes on residential and controlled access roads after their first week of driving. Notably, driving with partial automation, regardless of powertrain, was associated with significantly longer speeding durations and slightly greater speeding magnitudes compared to manual driving. These findings suggest that both electrification and automation contribute to evolving driver behaviors, changing speeding behavior in specific driving contexts. As drivers adapt to new vehicle technologies, understanding how these systems shape behavior is important. Insights from this study may inform the design of future in-vehicle systems and guide driver education strategies to promote safe driving practices in an evolving transportation landscape.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164970</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cloud Capitalism and the AI Transition</title>
<link>https://hdl.handle.net/1721.1/164969</link>
<description>Cloud Capitalism and the AI Transition
Tan, JS; Thelen, Kathleen
This article explores the origins and implications of a new cloud business model that is powering the advance of AI. We document how this model emerged within a handful of the most dominant IT firms whose reach into all corners of the economy makes them a powerful node or “choke point” in the political economy as a whole. We then elaborate how the features of the cloud business model differ from the traditional platform model out of which it grew, as it evolved from asset-light to asset-heavy, from hierarchical organization to semivertical integration, from domination over to collaboration with partner firms, and from embracing consumer- to enterprise-facing strategies. A final section considers the technological, political, and distributional impacts of the rise of this new business model—showing how the current race to artificial general intelligence (AGI) has reinforced and accelerated its underlying dynamics (above all, intensifying the drive for scale and ever-greater asset intensity), analyzing the new techno-nationalist alliance between industry leaders and the state that the model's development has inspired, and considering the new power-distributional dynamics this model has produced.
</description>
<pubDate>Fri, 26 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164969</guid>
<dc:date>2025-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Hexamethylbenzene Elimination Enables the Generation of Transient, Sterically Unhindered Multiply Bonded Boron Species</title>
<link>https://hdl.handle.net/1721.1/164967</link>
<description>Hexamethylbenzene Elimination Enables the Generation of Transient, Sterically Unhindered Multiply Bonded Boron Species
Zhang, Chonghe; Dabringhaus, Philipp; Tra, Bi Youan E.; Gilliard, Robert J. Jr; Cummins, Christopher C.
We present a method for the generation of boron-containing unsaturated small molecules via hexamethylbenzene elimination. The fragmentation precursors are obtained through bond insertion into phenyl boranorbornadiene (PhB(C6Me6), 1). Compound 1 undergoes 1,1-insertion with 2,6-xylyl isocyanide, affording a boron-doped bicyclo[2.2.2]octa-2,5-diene 2. Heating 2 in toluene results in the formation of a base-stabilized boraketenimine PhB(CNxyl)2 (i.e., borylene diisocyanide) as an intermediate via retro-Diels–Alder reaction. Surprisingly, PhB(CNxyl)2 dimerizes to give a boron-doped 6-membered ring (PhB)2C4(CNxyl)6 (4). The reaction of 1 with trimethylamine N-oxide and phenyl azide yields triphenyl boroxine and a BN4 ring, respectively, implying the involvement of transient oxoborane (PhB≡O) and iminoborane (PhB≡NPh) intermediates, respectively. Furthermore, boranorbornadiene also undergoes 2,3-insertion with mesityl isocyanate (MesNCO), affording a fused 6/5-membered heterocycle 11. This insertion profile is analogous to the insertion of phenyl azide into 1.
</description>
<pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164967</guid>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms and Scale-up Potential of 3D Solar Interfacial-Evaporators</title>
<link>https://hdl.handle.net/1721.1/164966</link>
<description>Mechanisms and Scale-up Potential of 3D Solar Interfacial-Evaporators
Zhang, James H.; Mittapally, Rohith; Oluwade, Abimbola; Chen, Gang
Evaporation fluxes from porous evaporators under sunlight have been reported to exceed the solar-thermal limit, determined by relating the incoming solar energy to the latent and sensible heat of water, for applications in desalination and brine pond drying. Although flat two-dimensional (2D) evaporators exceeding the solar limit imply a non-thermal process, tall three-dimensional (3D) solar evaporators can exceed it by absorbing additional environmental heat into their cold sidewalls. Through modeling, we explain the physics and identify the critical heights at which a fin transitions from 2D to 3D evaporation and exceeds the solar-thermal limit. Our analyses illustrate that environmental heat absorption in 3D evaporators is determined by the ambient relative humidity and the airflow velocity. The model is then coarse-grained into a meter-scale fin-array device to analyze its scalability. We identify that these devices are unlikely to scale favorably in closed environments such as solar stills. Our modeling clearly illustrates the benefits and limitations of 3D evaporating arrays and pinpoints design choices in previous works that hinder overall device performance. This work illustrates the importance of distinguishing 2D from 3D evaporation for mechanisms underlying interfacial evaporation exceeding the solar-thermal limit.
</description>
<pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164966</guid>
<dc:date>2025-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonization Approaches for Ethylene Production: Comparative Techno-Economic and Life-Cycle Analysis</title>
<link>https://hdl.handle.net/1721.1/164965</link>
<description>Decarbonization Approaches for Ethylene Production: Comparative Techno-Economic and Life-Cycle Analysis
Shin, Woojae; Lin, Bosong; Lai, Haoxiang; Ibrahima, Gasim; Zang, Guiyan
Ethylene, a building block of the chemical industry, significantly contributes to global greenhouse gas (GHG) emissions, prompting interest in decarbonization approaches to align with recent carbon neutrality initiatives. This paper presents a comprehensive techno-economic analysis (TEA) and life cycle analysis (LCA) of GHG emissions, comparing conventional ethane-based ethylene plants with three decarbonization approaches. The study was conducted within the context of the U.S. average, with sensitivity analysis to identify key drivers affecting well-to-gate (WTG) GHG emissions and the levelized cost of ethylene (LCOE). The conventional plant exhibited a GHG emission of 869 kgCO2e per tonne-ethylene and an LCOE of $746 per tonne-ethylene. Substituting external natural gas fuels with grid or renewable electricity decreased the emissions to 806 and 717 kgCO2e per tonne-ethylene, respectively. The emissions of the grid-powered or renewable-powered electrically heated cracker that exports co-produced hydrogen to substitute conventional gray hydrogen were 1031 and −163 kgCO2e per tonne-ethylene, respectively. The application of CCS to purge gas showed 703 and 514 kgCO2e per tonne-ethylene emissions, respectively. The electric cracker showed lower emissions than the conventional plant below 380 kgCO2e per MW h upstream electricity intensity, and at 60 kgCO2e per MW h, it achieved carbon neutrality. Regarding LCOE, with a grid electricity source, the no-external-natural-gas, electric-cracker, and CCS-on-purge-gas cases showed $743, $833, and $771 per tonne-ethylene, respectively. When these plants adopt renewable electricity, their LCOEs will be $737, $746, and $757 per tonne-ethylene. Below a $41.1 per MW h electricity price, the electric cracker had the lowest value among all cases. With hydrogen prices of $0.5–3.0 per kg-H2, the electric cracker's LCOE ranged from a $45 cost to a $128 saving per tonne-ethylene compared to the conventional concept.
</description>
<pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164965</guid>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Archerfish: A Retrofitted 3D Printer for High-throughput Combinatorial Experimentation via Continuous Printing</title>
<link>https://hdl.handle.net/1721.1/164964</link>
<description>Archerfish: A Retrofitted 3D Printer for High-throughput Combinatorial Experimentation via Continuous Printing
Siemenn, Alexander E.; Das, Basita; Aissi, Eunice; Sheng, Fang; Elliott, Lleyton; Hudspeth, Blake; Meyers, Marilyn; Serdy, James; Buonassisi, Tonio
The maturation of 3D printing technology has enabled low-cost, rapid prototyping capabilities for mainstreaming accelerated product design. The materials research community has recognized this need, but no universally accepted rapid prototyping technique currently exists for material design. Toward this end, we develop Archerfish, a 3D printer retrofitted to dispense liquid with in situ mixing capabilities for performing high-throughput combinatorial printing (HTCP) of material compositions. Using this HTCP design, we demonstrate continuous printing throughputs of up to 250 unique compositions per minute, 100× faster than similar tools such as Opentrons that utilize stepwise printing with ex situ mixing. We validate the formation of these combinatorial “prototype” material gradients using hyperspectral image analysis and energy-dispersive X-ray spectroscopy. Furthermore, we describe hardware challenges to realizing reproducible, accurate, and precise composition gradients with continuous printing, including those related to precursor dispensing, mixing, and deposition. Despite these limitations, the continuous printing and low-cost design of Archerfish demonstrate promising accelerated materials screening results across a range of materials systems from nanoparticles to perovskites.
</description>
<pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164964</guid>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Borinine-FLP Ring Expansion: Isolation of Eight-Membered B-P Rings Bridged by µ2 Chalcogenide and Chloronium Ions</title>
<link>https://hdl.handle.net/1721.1/164963</link>
<description>Borinine-FLP Ring Expansion: Isolation of Eight-Membered B-P Rings Bridged by µ2 Chalcogenide and Chloronium Ions
Frey, Nathan C.; Sarkar, Samir Kumar; Dickie, Diane A.; Molino, Andrew; Gilliard, Robert J. Jr
Boron–phosphorus (B–P) frustrated Lewis pairs (FLPs) are an important class of compounds for activating various small molecules. Utilizing the ring expansion reactivity of 9-chloro-9-borafluorene, a borinine-based FLP was synthesized. Various five-membered main-group element heterocycles were obtained via the reaction of the FLP with Me3NO, S8, and Se. Subsequent reduction of these species yielded the ring-expanded compounds, each featuring bridging B–E–B (E = O, S, Se) bonds. Similarly, halide abstraction from the FLP with AgNTf2 led to the formation of a cationic ring-expanded compound with a bridging B–Cl–B motif. This motif constitutes one of the first examples of a boron-stabilized chloronium ion, as verified using in-depth bonding analysis methods. Mechanistic pathways for the reduction- and halide abstraction-mediated ring expansion reactions are proposed with the aid of density functional theory. Electronic structure computations were performed to determine the best representation of bonding interactions in each compound, suggesting phosphorus(V)–chalcogen double bonding and chalcogen–boron(III) dative interactions within the heterocycles.
</description>
<pubDate>Sat, 10 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164963</guid>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution structure of Zn3(HOTP)2 (HOTP = hexaoxidotriphenylene), a three-dimensional conductive MOF</title>
<link>https://hdl.handle.net/1721.1/164962</link>
<description>High-resolution structure of Zn3(HOTP)2 (HOTP = hexaoxidotriphenylene), a three-dimensional conductive MOF
Zhang, Kimberly J.; Chen, Tianyang; Oppenheim, Julius J.; Yang, Luming; Palatinus, Lukáš; Müller, Peter; Van Voorhis, Troy; Dincă, Mircea
Although two-dimensional (2D) electrically conducting metal–organic frameworks (cMOFs) have become prominent due to their numerous potential applications, their structures are often implied or assumed from rather crude powder X-ray diffraction data. Indeed, exceedingly few examples exist of atomic-level structural details coming from single crystal diffraction experiments. Most widely studied among cMOFs are materials based on triphenylene ligands, in particular M3(HOTP)2 (M = Cu, Zn) and [M3(HOTP)2][M3(HOTP)]2 (M = Mg, Ni, Co; H6HOTP = 2,3,6,7,10,11-hexahydroxytriphenylene), which are invariably described as 2D van der Waals materials with sheets of ligands connected by square planar or octahedral metal ions. Here, we employ electron diffraction to show that, unlike the Mg, Co, Ni, and Cu analogs, Zn3(HOTP)2 crystallizes into a three-dimensional network that is analogous to the structures of the lanthanide-based HOTP MOFs. Moreover, similar to the lanthanide frameworks, Zn3(HOTP)2 exhibits incommensurate modulation, likely originating from a frustration between the preferred π–π stacking distance and the Zn–O bond lengths, or from a Peierls distortion. This work reinforces the importance of employing single crystal diffraction measurements for the characterization of conductive MOFs, especially when trying to correlate electronic properties to structural details.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164962</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>Intratumoral nanobody–IL-2 fusions that bind the tumor extracellular matrix suppress solid tumor growth in mice</title>
<link>https://hdl.handle.net/1721.1/164961</link>
<description>Intratumoral nanobody–IL-2 fusions that bind the tumor extracellular matrix suppress solid tumor growth in mice
Lutz, Emi A; Jailkhani, Noor; Momin, Noor; Huang, Ying; Sheen, Allison; Kang, Byong H; Wittrup, K Dane; Hynes, Richard O
Confining cytokine exposure to the tumors would greatly enhance cancer immunotherapy safety and efficacy. Immunocytokines, cytokines fused to tumor-targeting antibodies, have been developed with this intention, but without significant clinical success to date. A critical limitation is uptake by receptor-expressing cells in the blood, which decreases the dose at the tumor and engenders toxicity. Small-format immunocytokines, constructed with antibody fragments, are hypothesized to improve tumor specificity due to rapid systemic clearance. However, effective design criteria for small-format immunocytokines need further examination. Here, we engineer small interleukin-2 (IL-2) immunocytokines fused to nanobodies with nanomolar to picomolar affinities for the tumor-specific EIIIB domain of fibronectin (also known as EDB). Upon intravenous delivery into immunocompetent mice, such immunocytokines led to similar tumor growth delay as size-matched untargeted IL-2. Intratumoral (i.t.) delivery imparted improved survival dependent on affinity to EIIIB. I.t. administration offers a promising avenue to deliver small-format immunocytokines, given effective affinity for the tumor microenvironment.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164961</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ablative radiotherapy improves survival but does not cure autochthonous mouse models of prostate and colorectal cancer</title>
<link>https://hdl.handle.net/1721.1/164960</link>
<description>Ablative radiotherapy improves survival but does not cure autochthonous mouse models of prostate and colorectal cancer
Schmidt, Daniel R; Gramatikov, Iva Monique T; Sheen, Allison; Williams, Christopher L; Hurwitz, Martina; Dodge, Laura E; Holupka, Edward; Kiger, WS; Cornwall-Brady, Milton R; Huang, Wei; Mak, Howard H; Cormier, Kathleen S; Condon, Charlene; Dane Wittrup, K; Yilmaz, Ömer H; Stevenson, Mary Ann; Down, Julian D; Floyd, Scott R; Roper, Jatin; Vander Heiden, Matthew G
Background&#13;
Genetically engineered mouse models (GEMMs) of cancer are powerful tools to study mechanisms of disease progression and therapy response, yet little is known about how these models respond to multimodality therapy used in patients. Radiation therapy (RT) is frequently used to treat localized cancers with curative intent, delay progression of oligometastases, and palliate symptoms of metastatic disease.&#13;
&#13;
Methods&#13;
Here we report the development, testing, and validation of a platform to immobilize and target tumors in mice with stereotactic ablative RT (SART). Xenograft and autochthonous tumor models were treated with hypofractionated ablative doses of radiotherapy.&#13;
&#13;
Results&#13;
We demonstrate that hypofractionated regimens used in clinical practice can be effectively delivered in mouse models. SART alters tumor stroma and the immune environment, improves survival in GEMMs of primary prostate and colorectal cancer, and synergizes with androgen deprivation in prostate cancer. Complete pathologic responses were achieved in xenograft models, but not in GEMMs.&#13;
&#13;
Conclusions&#13;
While SART is capable of fully ablating xenografts, it is unable to completely eradicate disease in GEMMs, arguing that resistance to potentially curative therapy can be modeled in GEMMs.&#13;
&#13;
Plain language summary&#13;
Mice can be used to model the types of cancer seen in people to investigate the effects of cancer therapies, such as radiation. Here, we apply radiation therapy treatments that are able to cure cancer in humans to mice that have cancer of the prostate or colorectum. We show that the mice do not experience many side effects and that the tumours reduce in size, but in some cases show progression after treatment. Our study demonstrates that mice can be used to better understand how human cancers respond to radiation treatment, which can lead to the development of improved treatments and treatment schedules.
</description>
<pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164960</guid>
<dc:date>2023-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>Anti–PD-1 and Extended Half-life IL2 Synergize for Treatment of Murine Glioblastoma Independent of Host MHC Class I Expression</title>
<link>https://hdl.handle.net/1721.1/164959</link>
<description>Anti–PD-1 and Extended Half-life IL2 Synergize for Treatment of Murine Glioblastoma Independent of Host MHC Class I Expression
Tritz, Zachariah P; Ayasoufi, Katayoun; Wolf, Delaney M; Owens, Carley A; Malo, Courtney S; Himes, Benjamin T; Fain, Cori E; Goddery, Emma N; Yokanovich, Lila T; Jin, Fang; Hansen, Michael J; Parney, Ian F; Wang, Chensu; Moynihan, Kelly D; Irvine, Darrell J; Wittrup, K Dane; Diaz Marcano, Rosa M; Vile, Richard G; Johnson, Aaron J
Glioblastoma (GBM) is the most common malignant brain tumor in adults, responsible for approximately 225,000 deaths per year. Despite preclinical successes, most interventions have failed to extend patient survival by more than a few months. Treatment with anti–programmed cell death protein 1 (anti–PD-1) immune checkpoint blockade (ICB) monotherapy has been beneficial for malignant tumors such as melanoma and lung cancers but has yet to be effectively employed in GBM. This study aimed to determine whether supplementing anti–PD-1 ICB with engineered extended half-life IL2, a potent lymphoproliferative cytokine, could improve outcomes. This combination therapy, subsequently referred to as enhanced checkpoint blockade (ECB), delivered intraperitoneally, reliably cures approximately 50% of C57BL/6 mice bearing orthotopic GL261 gliomas and extends median survival of the treated cohort. In the CT2A model, characterized as being resistant to ICB, ECB caused a decrease in CT2A tumor volume in half of measured animals similar to what was observed in GL261-bearing mice, promoting a trending survival increase. ECB generates robust immunologic responses, features of which include secondary lymphoid organ enlargement and increased activation status of both CD4 and CD8 T cells. This immunity is durable, with long-term ECB survivors able to resist GL261 rechallenge. Through employment of depletion strategies, ECB's efficacy was shown to be independent of host MHC class I–restricted antigen presentation but reliant on CD4 T cells. These results demonstrate that ECB is efficacious against the GL261 glioma model through an MHC class I–independent mechanism and support further investigation into IL2-supplemented ICB therapies for tumors of the central nervous system.
</description>
<pubDate>Fri, 02 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164959</guid>
<dc:date>2023-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>Collagen-Anchored Interleukin-2 and Interleukin-12 Safely Reprogram the Tumor Microenvironment in Canine Soft-Tissue Sarcomas</title>
<link>https://hdl.handle.net/1721.1/164958</link>
<description>Collagen-Anchored Interleukin-2 and Interleukin-12 Safely Reprogram the Tumor Microenvironment in Canine Soft-Tissue Sarcomas
Stinson, Jordan A; Sheen, Allison; Momin, Noor; Hampel, Jordan; Bernstein, Rebecca; Kamerer, Rebecca; Fadl-Alla, Bahaa; Samuelson, Jonathan; Fink, Elizabeth; Fan, Timothy M; Wittrup, K Dane
Purpose:&#13;
Cytokine therapies such as IL2 and IL12 suffer from impractically small therapeutic windows driven by their on-target, off-tumor activity, limiting their clinical potential despite potent antitumor effects. We previously engineered cytokines that bind and anchor to tumor collagen following intratumoral injection, and sought to test their safety and biomarker activity in spontaneous canine soft-tissue sarcomas (STS).&#13;
&#13;
Experimental Design:&#13;
Collagen-binding cytokines were canine-ized to minimize immunogenicity and were used in a rapid dose-escalation study in healthy beagles to identify a maximum tolerated dose. Ten client-owned pet dogs with STS were then enrolled into trial, receiving cytokines at different intervals prior to surgical tumor excision. Tumor tissue was analyzed through IHC and NanoString RNA profiling for dynamic changes within treated tumors. Archived, untreated STS samples were analyzed in parallel as controls.&#13;
&#13;
Results:&#13;
Intratumorally administered collagen-binding IL2 and IL12 were well tolerated by STS-bearing dogs, with only Grade 1/2 adverse events observed (mild fever, thrombocytopenia, neutropenia). IHC revealed enhanced T-cell infiltrates, corroborated by an enhancement in gene expression associated with cytotoxic immune function. We found concordant increases in expression of counter-regulatory genes that we hypothesize would contribute to a transient antitumor effect, and confirmed in mouse models that combination therapy to inhibit this counter-regulation can improve responses to cytokine therapy.&#13;
&#13;
Conclusions:&#13;
These results support the safety and activity of intratumorally delivered, collagen-anchoring cytokines for inflammatory polarization of the canine STS tumor microenvironment. We are further evaluating the efficacy of this approach in additional canine cancers, including oral malignant melanoma.&#13;
&#13;
Translational Relevance&#13;
Successful translation of novel cancer therapies could be accelerated through the inclusion of tumor models that accurately recapitulate natural evolution and malignant transformation processes operative in human tumor development. Spontaneous cancer in pet dogs provides an underutilized opportunity to assess the safety and activity of investigational cancer therapies in tumors that arise following years of immunoediting. Particularly for the evaluation of immunotherapies, canine tumors enable the assessment of clinical potential in the context of an experienced, and often senescent, immune background. Beyond efficacy, such evaluation provides meaningful insight into tumor resistance mechanisms that could influence eventual human clinical success. Herein, we characterize immune activities generated by intratumoral injections of engineered collagen-binding cytokines IL2 and IL12 into naturally occurring canine soft-tissue sarcomas, and demonstrate through comparative assessment in mouse tumors the differential learnings from each model and their combined role in guiding rational design of treatment combinations with greater expected efficacy.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164958</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Both intratumoral regulatory T cell depletion and CTLA-4 antagonism are required for maximum efficacy of anti-CTLA-4 antibodies</title>
<link>https://hdl.handle.net/1721.1/164957</link>
<description>Both intratumoral regulatory T cell depletion and CTLA-4 antagonism are required for maximum efficacy of anti-CTLA-4 antibodies
Lax, Brianna M; Palmeri, Joseph R; Lutz, Emi A; Sheen, Allison; Stinson, Jordan A; Duhamel, Lauren; Santollani, Luciano; Kennedy, Alan; Rothschilds, Adrienne M; Spranger, Stefani; Sansom, David M; Wittrup, K Dane
Anti-CTLA-4 antibodies have successfully elicited durable tumor regression in the clinic; however, long-term benefit is limited to a subset of patients for select cancer indications. The incomplete understanding of their mechanism of action has hindered efforts at improvement, with conflicting hypotheses proposing either antagonism of the CTLA-4:B7 axis or Fc effector-mediated regulatory T cell (Treg) depletion governing efficacy. Here, we report the engineering of a nonantagonistic CTLA-4 binding domain (b1s1e2) that depletes intratumoral Tregs as an Fc fusion. Comparison of b1s1e2-Fc to 9d9, an antagonistic anti-CTLA-4 antibody, allowed for interrogation of the separate contributions of CTLA-4 antagonism and Treg depletion to efficacy. Despite equivalent levels of intratumoral Treg depletion, 9d9 achieved more long-term cures than b1s1e2-Fc in MC38 tumors, demonstrating that CTLA-4 antagonism provided additional survival benefit. Consistent with prior reports that CTLA-4 antagonism enhances priming, treatment with 9d9, but not b1s1e2-Fc, increased the percentage of activated T cells in the tumor-draining lymph node (tdLN). Treg depletion with either construct was restricted to the tumor due to insufficient surface CTLA-4 expression on Tregs in other compartments. Through intratumoral administration of diphtheria toxin in Foxp3-DTR mice, we show that depletion of both intratumoral and nodal Tregs provided even greater survival benefit than 9d9, consistent with Treg-driven restraint of priming in the tdLN. Our data demonstrate that anti-CTLA-4 therapies require both CTLA-4 antagonism and intratumoral Treg depletion for maximum efficacy—but that potential future therapies also capable of depleting nodal Tregs could show efficacy in the absence of CTLA-4 antagonism.
</description>
<pubDate>Mon, 24 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164957</guid>
<dc:date>2023-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming lung cancer immunotherapy resistance by combining nontoxic variants of IL-12 and IL-2</title>
<link>https://hdl.handle.net/1721.1/164956</link>
<description>Overcoming lung cancer immunotherapy resistance by combining nontoxic variants of IL-12 and IL-2
Horton, Brendan L; D’Souza, Alicia D; Zagorulya, Maria; McCreery, Chloe V; Abhiraman, Gita C; Picton, Lora; Sheen, Allison; Agarwal, Yash; Momin, Noor; Wittrup, K Dane; White, Forest M; Garcia, K Christopher; Spranger, Stefani
Engineered cytokine-based approaches for immunotherapy of cancer are poised to enter the clinic, with IL-12 being at the forefront. However, little is known about potential mechanisms of resistance to cytokine therapies. We found that orthotopic murine lung tumors were resistant to systemically delivered IL-12 fused to murine serum albumin (MSA, IL12-MSA) because of low IL-12 receptor (IL-12R) expression on tumor-reactive CD8+ T cells. IL2-MSA increased binding of IL12-MSA by tumor-reactive CD8+ T cells, and combined administration of IL12-MSA and IL2-MSA led to enhanced tumor-reactive CD8+ T cell effector differentiation, decreased numbers of tumor-infiltrating CD4+ regulatory T cells, and increased survival of lung tumor-bearing mice. Predictably, the combination of IL-2 and IL-12 at therapeutic doses led to significant dose-limiting toxicity. Administering IL-12 and IL-2 analogs with preferential binding to cells expressing Il12rb1 and CD25, respectively, led to a significant extension of survival in mice with lung tumors while abrogating dose-limiting toxicity. These findings suggest that IL-12 and IL-2 represent a rational approach to combination cytokine therapy whose dose-limiting toxicity can be overcome with engineered cytokine variants.
</description>
<pubDate>Tue, 05 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164956</guid>
<dc:date>2023-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Intratumoral aluminum hydroxide–anchored IL-12 drives potent antitumor activity by remodeling the tumor microenvironment</title>
<link>https://hdl.handle.net/1721.1/164955</link>
<description>Intratumoral aluminum hydroxide–anchored IL-12 drives potent antitumor activity by remodeling the tumor microenvironment
Battula, Sailaja; Papastoitsis, Gregory; Kaufman, Howard L; Wittrup, K Dane; Schmidt, Michael M
IL-12 is a potent cytokine that can promote innate and adaptive anticancer immunity, but its clinical development has been limited by toxicity when delivered systemically. Intratumoral (i.t.) administration can expand the therapeutic window of IL-12 and other cytokines but is in turn limited by rapid drug clearance from the tumor, which reduces efficacy, necessitates frequent administration, and increases systemic accumulation. To address these limitations, we developed an anchored IL-12 designated ANK-101, composed of an engineered IL-12 variant that forms a stable complex with the FDA-approved vaccine adjuvant aluminum hydroxide (Alhydrogel). Following i.t. administration of murine ANK-101 (mANK-101) in early intervention syngeneic mouse tumors, the complex formed a depot that was locally retained for weeks as measured by IVIS or SPECT/CT imaging, while unanchored protein injected i.t. was cleared within hours. One or 2 i.t. injections of mANK-101 induced single-agent antitumor activity across a diverse range of syngeneic tumors, including models resistant to checkpoint blockade at doses where unanchored IL-12 had no efficacy. Local treatment with mANK-101 further induced regressions of noninjected lesions, especially when combined with systemic checkpoint blockade. Antitumor activity was associated with remodeling of the tumor microenvironment, including prolonged IFN-γ and chemokine expression, recruitment and activation of T and NK cells, M1 myeloid cell skewing, and increased antigen processing and presentation. Subcutaneous administration of ANK-101 in cynomolgus macaques was well tolerated. Together, these data demonstrate that ANK-101 has an enhanced efficacy and safety profile and warrants future clinical development.
</description>
<pubDate>Fri, 08 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164955</guid>
<dc:date>2023-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>CD8+ T cell priming that is required for curative intratumorally anchored anti-4-1BB immunotherapy is constrained by Tregs</title>
<link>https://hdl.handle.net/1721.1/164954</link>
<description>CD8+ T cell priming that is required for curative intratumorally anchored anti-4-1BB immunotherapy is constrained by Tregs
Palmeri, Joseph R; Lax, Brianna M; Peters, Joshua M; Duhamel, Lauren; Stinson, Jordan A; Santollani, Luciano; Lutz, Emi A; Pinney, William; Bryson, Bryan D; Dane Wittrup, K
Although co-stimulation of T cells with agonist antibodies targeting 4-1BB (CD137) improves antitumor immune responses in preclinical studies, clinical success has been limited by on-target, off-tumor activity. Here, we report the development of a tumor-anchored ɑ4-1BB agonist (ɑ4-1BB-LAIR), which consists of an ɑ4-1BB antibody fused to the collagen-binding protein LAIR. While combination treatment with an antitumor antibody (TA99) shows only modest efficacy, simultaneous depletion of CD4+ T cells boosts cure rates to over 90% of mice. Mechanistically, this synergy depends on ɑCD4 eliminating tumor draining lymph node regulatory T cells, resulting in priming and activation of CD8+ T cells which then infiltrate the tumor microenvironment. The cytotoxic program of these newly primed CD8+ T cells is then supported by the combined effect of TA99 and ɑ4-1BB-LAIR. The combination of TA99 and ɑ4-1BB-LAIR with a clinically approved ɑCTLA-4 antibody known for enhancing T cell priming results in equivalent cure rates, which validates the mechanistic principle, while the addition of ɑCTLA-4 also generates robust immunological memory against secondary tumor rechallenge. Thus, our study establishes the proof of principle for a clinically translatable cancer immunotherapy.
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164954</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Targeting of the CD161 inhibitory receptor enhances T-cell–mediated immunity against hematological malignancies</title>
<link>https://hdl.handle.net/1721.1/164952</link>
<description>Targeting of the CD161 inhibitory receptor enhances T-cell–mediated immunity against hematological malignancies
Alvarez Calderon, Francesca; Kang, Byong H; Kyrysyuk, Oleksandr; Zheng, Shiwei; Wang, Hao; Mathewson, Nathan D; Luoma, Adrienne M; Ning, Xiaohan; Pyrdol, Jason; Cao, Xuan; Suvà, Mario L; Yuan, Guo-Cheng; Wittrup, K Dane; Wucherpfennig, Kai W
The CD161 inhibitory receptor is highly upregulated by tumor-infiltrating T cells in multiple human solid tumor types, and its ligand, CLEC2D, is expressed by both tumor cells and infiltrating myeloid cells. Here, we assessed the role of the CD161 receptor in hematological malignancies. Systematic analysis of CLEC2D expression using the Cancer Cell Line Encyclopedia revealed that CLEC2D messenger RNA was most abundant in hematological malignancies, including B-cell and T-cell lymphomas as well as lymphocytic and myelogenous leukemias. CLEC2D protein was detected by flow cytometry on a panel of cell lines representing a diverse set of hematological malignancies. We, therefore, used yeast display to generate a panel of high-affinity, fully human CD161 monoclonal antibodies (mAbs) that blocked CLEC2D binding. These mAbs were specific for CD161 and had a similar affinity for human and nonhuman primate CD161, a property relevant for clinical translation. A high-affinity CD161 mAb enhanced key aspects of T-cell function, including cytotoxicity, cytokine production, and proliferation, against B-cell lines originating from patients with acute lymphoblastic leukemia, diffuse large B-cell lymphoma, and Burkitt lymphoma. In humanized mouse models, this CD161 mAb enhanced T-cell–mediated immunity, resulting in a significant survival benefit. Single cell RNA-seq data demonstrated that CD161 mAb treatment enhanced expression of cytotoxicity genes by CD4 T cells as well as a tissue-residency program by CD4 and CD8 T cells that is associated with favorable survival outcomes in multiple human cancer types. These fully human mAbs, thus, represent potential immunotherapy agents for hematological malignancies.
</description>
<pubDate>Thu, 21 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164952</guid>
<dc:date>2024-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>Upcycling spent medium-Ni cathodes via novel liquified salts sourcing</title>
<link>https://hdl.handle.net/1721.1/164951</link>
<description>Upcycling spent medium-Ni cathodes via novel liquified salts sourcing
Yoon, Moonsu; Park, Jin-Sung; Chen, Weiyin; Huang, Yimeng; Dai, Tao; Lee, Yumin; Shin, Jungmin; Lee, Seungmi; Kim, Yongil; Lee, Dongsoo; Shin, Daiha; Cho, Jaephil; Dong, Yanhao; Li, Ju
The rapid growth in lithium-ion battery technology underscores the urgent need for sustainable recycling to address the environmental and economic challenges of battery waste. This study introduces a liquified-salts-assisted upcycling approach to transform spent medium-Ni cathodes into high-performance single-crystalline Ni-rich cathodes. Utilizing the LiOH–LiNO3–Ni(NO3)2·6H2O eutectic, this method leverages planetary centrifugal mixing to create a liquid-like environment for accelerated elemental diffusion and microstructural refinement. The in situ liquefaction of these salts ensures seamless precursor integration, achieving compositional uniformity and minimizing impurity formation. Compared to conventional solid-state methods, our method significantly suppresses rock-salt phase formation, and improves electrochemical performance with superior cycling stability and rate capability. The environmental and economic advantages of our approach highlight its potential to reduce greenhouse gas emissions and energy consumption. This scalable, energy-efficient strategy provides a transformative solution for battery waste management, paving the way for the sustainable production of next-generation cathode materials.
</description>
<pubDate>Wed, 02 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164951</guid>
<dc:date>2025-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>CLN-617 Retains IL2 and IL12 in Injected Tumors to Drive Robust and Systemic Immune-Mediated Antitumor Activity</title>
<link>https://hdl.handle.net/1721.1/164950</link>
<description>CLN-617 Retains IL2 and IL12 in Injected Tumors to Drive Robust and Systemic Immune-Mediated Antitumor Activity
Mehta, Naveen K; Rakhra, Kavya; Meetze, Kristan A; Li, Bochong; Momin, Noor; Chang, Jason YH; Wittrup, K Dane; Baeuerle, Patrick A; Michaelson, Jennifer S
Despite clinical evidence of antitumor activity, the development of cytokine therapies has been hampered by a narrow therapeutic window and limited response rates. Two cytokines of high interest for clinical development are interleukin 2 (IL2) and interleukin 12 (IL12), which potently synergize to promote the activation and proliferation of T cells and NK cells. However, the only approved human IL2 therapy, Proleukin, is rarely used in the clinic due to systemic toxicities, and no IL12 product has been approved to date due to severe dose-limiting toxicities. Here, we describe CLN-617, a first-in-class therapeutic for intratumoral (IT) injection that co-delivers IL2 and IL12 on a single molecule in a safe and effective manner. CLN-617 is a single-chain fusion protein comprised of IL2, leukocyte-associated immunoglobulin-like receptor 2 (LAIR2), human serum albumin (HSA), and IL12. LAIR2 and HSA function to retain CLN-617 in the treated tumor by binding collagen and increasing molecular weight, respectively. We found that IT administration of a murine surrogate of CLN-617, mCLN-617, eradicated established treated and untreated tumors in syngeneic models, significantly improved response to anti-PD1 checkpoint therapy, and generated a robust abscopal response dependent on cellular immunity and antigen cross-presentation. CLN-617 is being evaluated in a clinical trial in patients with advanced solid tumors (NCT06035744).
</description>
<pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164950</guid>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Porous Organic Materials-Based Atomically Dispersed Metal Electrocatalysts</title>
<link>https://hdl.handle.net/1721.1/164949</link>
<description>Porous Organic Materials-Based Atomically Dispersed Metal Electrocatalysts
Zhang, Hao; Wang, Suwen; Lv, Enmin; Qi, Menghui; He, Chengchao; Dong, Xinglong; Qiu, Jieshan; Wang, Yong; Wen, Zhenhai
The transition to renewable energy sources and the need for efficient energy conversion technologies have led to the development of various types of catalysts, among which atomically dispersed metal catalysts (ADMCs) supported by porous organic materials (POMs) have attracted attention for their high catalytic efficiency and stability. This review focuses on the development and application of ADMCs supported by POMs, such as metal–organic frameworks (MOFs), covalent organic frameworks (COFs), and hydrogen-bonded organic frameworks (HOFs), which offer enhanced catalytic performance due to their high atomic utilization, stability, and selectivity. This paper systematically explores various strategies for synthesizing ADMCs, including the use of organic linkers, metal nodes, and pore spaces within POMs to stabilize metal atoms and prevent aggregation. Key applications highlighted include energy conversion and storage technologies, such as fuel cells, water splitting, CO2 reduction, and nitrogen reduction, where ADMCs demonstrate the potential to replace noble metals. Despite this progress, challenges remain in achieving high metal loading, long-term stability, and cost-effective large-scale production. This study underscores the importance of advanced characterization techniques and computational models to deepen the understanding of ADMCs’ catalytic mechanisms and guide future material design, paving the way for their broader application in sustainable energy technologies.
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164949</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying Mining Requirement and Waste for Energy Sustainability</title>
<link>https://hdl.handle.net/1721.1/164948</link>
<description>Quantifying Mining Requirement and Waste for Energy Sustainability
Ermakova, Dinara; Sen, Drishti; Wainwright, Haruko; Bae, Jin Whan; Chene, Lisha; Vujic, Jasmina
This study presents a life-cycle assessment of different energy sources (coal, natural gas, solar, wind, nuclear, and hydro), focused particularly on mining activities and waste per given electricity capacity and generation. It also includes carbon dioxide emissions generated during the transportation of raw materials to build and operate electricity-generating systems and their environmental impacts in the US from 2023 to 2050. We identify the raw material and metal requirements for typical U.S.-based systems of each energy type and synthesize datasets on typical ore fraction and material recycling factors, while taking into account the capacity factor of the power plants. We then compute the total mass and volume of material requirements and waste for the front-end (i.e., mining, material needed for construction), operation (i.e., fuel, maintenance), and back-end (i.e., decommissioning) activities. The key findings are that (1) the energy transition from fossil fuel to low-carbon energy sources would reduce mining waste as well as the shipping carbon footprint; (2) the difference between capacity and actual electricity generation is significant for the life-cycle assessment due to the low capacity factors of solar and wind energy; (3) several key metals with low abundance or high requirements dominate mining waste, which highlights the need for recycling and establishing a circular economy; (4) mining of critical minerals becomes important during the clean energy transition; and (5) nuclear energy generates the least waste and contributes the least to shipping emissions among the low-carbon sources due to its high energy density and capacity factor and the small mass of materials it requires.
Although waste mass may not necessarily equal environmental impact due to differing waste isolation technologies, we aim to highlight the importance of considering mining and decommissioning waste, which is often ignored but important for accounting for environmental impacts and addressing energy justice issues.
</description>
<pubDate>Sun, 13 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164948</guid>
<dc:date>2025-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>The Scholarly Knowledge Ecosystem: Challenges and Opportunities for the Field of Information</title>
<link>https://hdl.handle.net/1721.1/164947</link>
<description>The Scholarly Knowledge Ecosystem: Challenges and Opportunities for the Field of Information
Altman, Micah; Cohen, Philip N
The scholarly knowledge ecosystem presents an outstanding exemplar of the challenges of understanding, improving, and governing information ecosystems at scale. This article draws upon significant reports on aspects of the ecosystem to characterize the most important research challenges and promising potential approaches. The focus of this review article is the fundamental scientific research challenges related to developing a better understanding of the scholarly knowledge ecosystem. Across a range of disciplines, we identify reports that are conceived broadly, published recently, and written collectively. We extract the critical research questions, summarize these using quantitative text analysis, and use this quantitative analysis to inform a qualitative synthesis. Three broad themes emerge from this analysis: the need for multi-sectoral cooperation and coordination, for mixed-methods analysis at multiple levels, and for interdisciplinary collaboration. Further, we draw attention to an emerging consensus that scientific research in this area should be guided by a set of core human values.
</description>
<pubDate>Mon, 31 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164947</guid>
<dc:date>2022-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of L2/Ln Pragmatic Competence: Its Core and Route Map</title>
<link>https://hdl.handle.net/1721.1/164946</link>
<description>Investigation of L2/Ln Pragmatic Competence: Its Core and Route Map
Mao, Tiaoyuan
How to use language properly and to acquire the capacity for language use has been a focus of linguists and philosophers for centuries. Pragmatic competence, which underlies language use, has therefore aroused enormous interest among language acquisition practitioners. This study reveals the core properties of various models or theories of pragmatic competence, such as the communicative componential models, the form-function mapping proposal of the functionalists, the tripartite cognitive model, and the current integrated model of pragmatic competence. The common core includes (but is not limited to) the integration of thought and communication, one uniform pragmatic mechanism, dynamic form-function mapping, and complementarity between grammatical and pragmatic competences. With these findings as a point of departure, a brief outline for further investigation of pragmatic competence is finally proposed, including pathological and neurobiological examination of pragmatic competence.
</description>
<pubDate>Tue, 17 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164946</guid>
<dc:date>2021-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Structural Biophysical Features for Antigen-Binding Fragment Crystallization via Machine Learning</title>
<link>https://hdl.handle.net/1721.1/164945</link>
<description>Investigating Structural Biophysical Features for Antigen-Binding Fragment Crystallization via Machine Learning
Chattaraj, Krishna Gopal; Ferreira, Joana; Myerson, Allan S.; Trout, Bernhardt L.
Antibody-based therapeutics continue to be an important pharmaceutical development modality. Crystallization of antibodies is important for structural characterization, but it also has potential for use as a separation method and as a dosage form. Nevertheless, bringing about controlled crystallization of an antibody remains a challenging task due to its large size, high degree of segmental flexibility, and the intricacy of the interactions involved (e.g., protein–protein interactions, protein–solvent interactions, etc.). Methods to predict important contact sites could help to develop such crystallization methods. However, limited data and understanding have hitherto not allowed the development of such robust methods. To remedy that gap, this study employs machine learning combined with in silico modelling of crystal structures, using available experimental structures, to identify the crucial physicochemical features necessary for successful antibody crystallization. The developed method distinguishes crystal-site residues from non-crystal-site residues with good accuracy. A set of 510 descriptors is utilized to characterize each residue, which is treated as a distinct data point. Moreover, new algorithms have been developed to design novel descriptors that improve the model's predictive capabilities. Fragment antigen-binding (Fab) regions are investigated due to the scarcity of full-length monoclonal antibody (mAb) crystal structures. The current findings show that the extreme gradient boosting (XGBoost) algorithm effectively identifies crystal-site residues, as evidenced by an AUPRC value more than 3-fold higher than that of the baseline model. The top-ranked descriptors indicate that crystal-site residues are primarily characterized by solvent-exposed residues with high spatial aggregation propensity (SAP), signifying hydrophobic patches, and their immediate surface-exposed neighbors.
Moreover, these high SAP residues are often surrounded by other solvent-exposed residues that are either polar, charged, or both. In contrast, residues not involved in crystal interfaces generally lack these essential features, though some might be excluded due to specific crystal lattice arrangements. Additionally, reducing the feature set from 510 to the top 15% in the XGBoost model yields similar performance while significantly simplifying the model.
</description>
<pubDate>Fri, 28 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164945</guid>
<dc:date>2025-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Role of Supramolecular Clustering in Multivalent Assembly</title>
<link>https://hdl.handle.net/1721.1/164944</link>
<description>Modeling the Role of Supramolecular Clustering in Multivalent Assembly
Sbalbi, Nicholas; Petrov, Artem; Sass, Jacob; Ye, Matthew; Alexander-Katz, Alfredo; Macfarlane, Robert J.
In self-assembled systems, a combination of multiple weak supramolecular interactions is often utilized to enable strong yet reversible binding. When modeling the behavior of these multivalent interfaces, it is commonly assumed that binding pairs are independent, i.e., the probability of a pair being bound is unaffected by the bound state of neighboring pairs. Inspired by recent experimental work, we report that for a variety of systems this assumption may not hold, leading to the formation of clusters at the binding interface. Through a series of analytical and numerical models of end-functionalized brushes, we reveal the role of cluster size in binding thermodynamics, detail how entropic contributions from polymer chains provide tunable control of cluster size, and provide predictions for cluster size as a function of system architecture. Investigation of these models yields surprising results: within the melting window, the enthalpy of binding of multivalent interfaces is predicted to depend only on cluster size and not on the overall valency of the multivalent system. Moreover, clustering is predicted to be significant even in systems with only weak dipole and dispersion interactions between neighboring groups. Combined, this work brings to light the potential impacts of clustering on multivalent self-assembly, providing theoretical justification for previous experimental observations and paving the way for future work in this area.
</description>
<pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164944</guid>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Elucidating the effect of Fe substitution on structural and redox stability of Na2Mn3O7</title>
<link>https://hdl.handle.net/1721.1/164943</link>
<description>Elucidating the effect of Fe substitution on structural and redox stability of Na2Mn3O7
Smith, Hugh B.; Lee, Gi-Hyeok; Kumar, Bachu Sravan; Penn, Aubrey N.; Venturi, Victor; Gao, Yifan; Davis, Ryan C.; Stone, Kevin Hunter; Hunt, Adrian; Waluyo, Iradwikanari; Stavitski, Eli; Yang, Wanli; Abate, Iwnetim I.
Sodium-ion batteries have the potential to meet the growing demand for energy storage due to their low costs stemming from natural resource abundances, but their cathode energy densities must be improved to be comparable to those of lithium-ion batteries. One strategy is accessing high voltage capacity through high-valent redox reactions. Such reactions usually cause instability in cathode materials, but Na2Mn3O7 (NMO) has demonstrated excellent performance and reversibility in the high-valent regime due to its unique lattice structure with ordered Mn vacancies. This work expands the universality of the ordered vacancy as a design principle and increases the material candidates with such exceptional electrochemical behavior. Our approach involves synergizing cationic ordered vacancies with tunable metal–ligand hybridization through partial metal substitution. In particular, we successfully incorporated Fe3+ for Mn4+ in NMO to make Na2.25Mn2.75Fe0.25O7 and achieved improved high-valent redox behavior. Fe substitution leads to larger specific capacities (171 vs. 159 mA h g−1 first cycle), enhanced cycle stability (97 vs. 60 mA h g−1 after 50 cycles), and superior rate performance. This study lays the foundation for developing new cathode materials with stable high-valent redox through substitution of redox-active transition metals by employing cationic ordered vacancies and partial transition metal substitution as design principles in tandem.
</description>
<pubDate>Tue, 11 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164943</guid>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>Progress in Computational Methods and Mechanistic Insights on the Growth of Carbon Nanotubes</title>
<link>https://hdl.handle.net/1721.1/164942</link>
<description>Progress in Computational Methods and Mechanistic Insights on the Growth of Carbon Nanotubes
Wang, Linzheng; Tricard, Nicolas; Chen, Zituo; Deng, Sili
Carbon nanotubes (CNTs), as a promising nanomaterial with broad applications across various fields, are continuously attracting significant research attention. Despite substantial progress in understanding their growth mechanisms, synthesis methods, and post-processing techniques, two major goals remain challenging: achieving property-targeted growth and efficient mass production. Recent advancements in computational methods, driven by increased computational resources, the development of platforms, and the refinement of theoretical models, have significantly deepened our understanding of the mechanisms underlying CNT growth. This review aims to comprehensively examine the latest computational techniques that shed light on various aspects of CNT synthesis. The first part of this review focuses on progress in computational methods. Beginning with atomistic simulation approaches, we introduce the fundamentals and advancements in density functional theory (DFT), molecular dynamics (MD) simulations, and kinetic Monte Carlo (kMC) simulations. We discuss the applicability and limitations of each method in studying mechanisms of CNT growth. Then, the focus shifts to multiscale modeling approaches, where we demonstrate the coupling of atomic-scale simulations with reactor-scale multiphase flow models. Given that CNT growth inherently spans multiple temporal and spatial scales, the development and application of multiscale modeling techniques are poised to become a central focus of future computational research in this field. Furthermore, this review emphasizes the growing role played by machine learning in CNT growth research. Compared with traditional physics-based simulation methods, data-driven machine learning approaches have rapidly emerged in recent years, revolutionizing research paradigms from molecular simulation to experimental design.
In the second part of this review, we highlight the latest advancements in CNT growth mechanisms and synthesis methods achieved through computational techniques. These include novel findings across fundamental growth stages, i.e., from nucleation to elongation and ultimately termination. We also examine the dynamic behaviors of catalyst nanoparticles and chirality-controlled growth processes, emphasizing how these insights contribute to advancing the field. Finally, in the concluding section, we propose future directions for advancements of computational approaches toward deeper understanding of CNT growth mechanisms and better support of CNT manufacturing.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164942</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Superthermal Solar Interfacial Evaporation is not due to Reduced Latent Heat of Water</title>
<link>https://hdl.handle.net/1721.1/164941</link>
<description>Superthermal Solar Interfacial Evaporation is not due to Reduced Latent Heat of Water
Zhang, James H.; Mittapally, Rohith; Lv, Guangxin; Chen, Gang
To explain reported solar interfacial-evaporation rates from porous materials beyond an apparent 100% efficiency using the thermal evaporation mechanism, many publications hypothesize that intermediate water inside porous materials has a reduced latent heat. Key supporting evidence is that water-only surfaces have lower natural evaporation rates than porous evaporators, with the ratio of the two rates taken as the latent heat reduction. Through simulations and experiments, we study natural evaporation of water and show that reported differences in evaporation rates between porous materials and water are likely due to experimental error from recessed evaporating surfaces. A recession of the water surface by a few millimeters relative to the container lip can drop evaporation rates by over 50% due to a stagnant air layer, suggesting that the comparative experiments are prone to error. Furthermore, in the reduced latent heat picture, interfacial cooling must occur at the porous sample–water interface due to the enthalpy difference between bulk water and intermediate water. Our transport modeling shows that reduced latent heat cannot explain superthermal evaporation and that new mechanistic directions need to be pursued.
</description>
<pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164941</guid>
<dc:date>2025-01-13T00:00:00Z</dc:date>
</item>
<item>
<title>Generative design and molecular mechanics characterization of silk proteins based on unfolding behavior</title>
<link>https://hdl.handle.net/1721.1/164940</link>
<description>Generative design and molecular mechanics characterization of silk proteins based on unfolding behavior
Lu, Wei; Buehler, Markus J.
Spider silk exhibits exceptional mechanical properties, biocompatibility, and biodegradability, making it a promising material for bioengineered applications. However, the complexity and diversity of silk proteins, coupled with limited experimental data, have hindered the rational design of silk-based biomaterials. Furthermore, the mechanobiology of these proteins and their impact on silk fiber properties remain underexplored. In this study, we introduce a series of novel silk protein sequences and characterize their nonlinear unfolding behavior and mechanical properties through molecular dynamics (MD) simulations. Focusing on major ampullate spidroin (MaSp) silk proteins, we curate a dataset that integrates experimentally acquired sequences with synthetic sequences generated by SilkomeGPT, a generative model for silk-inspired proteins. Structural predictions are performed using OmegaFold, from which high-fidelity regions are extracted and analyzed. Their unfolding responses are assessed via implicit all-atom MD simulations, enabling characterization of their mechanical behavior. This computationally efficient framework facilitates the rational design of spider silk proteins by linking atomistic and sequence features to larger-scale properties. The developed dataset systematically captures structural uncertainties, while simulations provide atomic-level insights into how protein mechanics contribute to fiber properties, advancing the mechanobiological understanding of spider silk and supporting diverse applications in biomaterials design.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164940</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Tumor-Localized Interleukin-2 and Interleukin-12 Combine with Radiation Therapy to Safely Potentiate Regression of Advanced Malignant Melanoma in Pet Dogs</title>
<link>https://hdl.handle.net/1721.1/164939</link>
<description>Tumor-Localized Interleukin-2 and Interleukin-12 Combine with Radiation Therapy to Safely Potentiate Regression of Advanced Malignant Melanoma in Pet Dogs
Stinson, Jordan A; Barbosa, Matheus Moreno P; Sheen, Allison; Momin, Noor; Fink, Elizabeth; Hampel, Jordan; Selting, Kim A; Kamerer, Rebecca L; Bailey, Keith L; Wittrup, Karl D; Fan, Timothy M
Purpose:&#13;
Cytokines IL2 and IL12 exhibit potent anticancer activity but suffer a narrow therapeutic window due to off-tumor immune cell activation. Engineering cytokines with the ability to bind and associate with tumor collagen after intratumoral injection potentiated response without toxicity in mice and was previously safe in pet dogs with sarcoma. Here, we sought to test the efficacy of this approach in dogs with advanced melanoma.&#13;
&#13;
Patients and Methods:&#13;
This study examined 15 client-owned dogs with histologically or cytologically confirmed malignant melanoma that received a single 9-Gy fraction of radiotherapy, followed by six cycles of combined collagen-anchored IL2 and IL12 therapy every 2 weeks. Cytokine dosing followed a 3 + 3 dose escalation design, with the initial cytokine dose chosen from prior evaluation in canine sarcomas. No exclusion criteria for tumor stage or metastatic burden, age, weight, or neuter status were applied for this trial.&#13;
&#13;
Results:&#13;
Median survival regardless of the tumor stage or dose level was 256 days, and 10/13 (76.9%) dogs that completed treatment had CT-measured tumor regression at the treated lesion. In dogs with metastatic disease, 8/13 (61.5%) had partial responses across their combined lesions, which is evidence of locoregional response. Profiling by NanoString of treatment-resistant dogs revealed that B2m loss was predictive of poor response to this therapy.&#13;
&#13;
Conclusions:&#13;
Collectively, these results confirm the ability of locally administered tumor-anchored cytokines to potentiate responses at regional disease sites when combined with radiation. This evidence supports the clinical translation of this approach and highlights the utility of comparative investigation in canine cancers.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164939</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Bivalent target-binding bioPROTACs induce potent degradation of oncogenic SHP2</title>
<link>https://hdl.handle.net/1721.1/164938</link>
<description>Bivalent target-binding bioPROTACs induce potent degradation of oncogenic SHP2
Hoffman, Megan; Krum, David; Wittrup, K Dane
Targeted protein degradation is an emergent and rapidly evolving therapeutic strategy. In particular, biologics-based targeted degradation modalities (bioPROTACs) are relatively underexplored compared to small molecules. Here, we investigate how target affinity, cellular localization, and valency of bioPROTACs impact the efficacy of targeted degradation of the oncogenic phosphatase Src homology 2 domain-containing protein tyrosine phosphatase-2 (SHP2). We identify bivalent recruitment of SHP2 by bioPROTACs as a broadly applicable strategy to improve potency. Moreover, we demonstrate that SHP2-targeted bioPROTACs can effectively counteract gain-of-function SHP2 mutants present in cancer, which are otherwise challenging to target selectively with small-molecule constructs. Overall, this study demonstrates the utility of bioPROTACs for challenging targets and further explicates design principles for therapeutic bioPROTACs.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164938</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Local delivery of cell surface-targeted immunocytokines programs systemic antitumor immunity</title>
<link>https://hdl.handle.net/1721.1/164937</link>
<description>Local delivery of cell surface-targeted immunocytokines programs systemic antitumor immunity
Santollani, Luciano; Maiorino, Laura; Zhang, Yiming J; Palmeri, Joseph R; Stinson, Jordan A; Duhamel, Lauren R; Qureshi, Kashif; Suggs, Jack R; Porth, Owen T; Pinney, William; Msari, Riyam Al; Walsh, Agnes A; Wittrup, K Dane; Irvine, Darrell J
Systemically administered cytokines are potent immunotherapeutics but can cause severe dose-limiting toxicities. To overcome this challenge, cytokines have been engineered for intratumoral retention after local delivery. However, despite inducing regression of treated lesions, tumor-localized cytokines often elicit only modest responses at distal untreated tumors. In the present study, we report a localized cytokine therapy that safely elicits systemic antitumor immunity by targeting the ubiquitous leukocyte receptor CD45. CD45-targeted immunocytokines have lower internalization rates relative to wild-type counterparts, leading to sustained downstream cis and trans signaling between lymphocytes. A single intratumoral dose of αCD45-interleukin (IL)-12 followed by a single dose of αCD45-IL-15 eradicated treated tumors and untreated distal lesions in multiple syngeneic mouse tumor models without toxicity. Mechanistically, CD45-targeted cytokines reprogrammed tumor-specific CD8+ T cells in the tumor-draining lymph nodes to have an antiviral transcriptional signature. CD45 anchoring represents a broad platform for protein retention by host immune cells for use in immunotherapy.
</description>
<pubDate>Wed, 07 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164937</guid>
<dc:date>2024-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>Tumor Integrin-Targeted Glucose Oxidase Enzyme Promotes ROS-Mediated Cell Death that Combines with Interferon Alpha Therapy for Tumor Control</title>
<link>https://hdl.handle.net/1721.1/164936</link>
<description>Tumor Integrin-Targeted Glucose Oxidase Enzyme Promotes ROS-Mediated Cell Death that Combines with Interferon Alpha Therapy for Tumor Control
Stinson, Jordan A; Sheen, Allison; Lax, Brianna M; Yang, Grace N; Duhamel, Lauren; Santollani, Luciano; Fink, Elizabeth; Palmeri, Joseph R; Wittrup, Karl Dane
Although heightened intratumoral levels of reactive oxygen species (ROS) are typically associated with a suppressive tumor microenvironment, under certain conditions ROS contribute to tumor elimination. Treatment approaches, including some chemotherapy and radiation protocols, increase cancer cell ROS levels that influence their mechanism of cell death and subsequent recognition by the immune system. Furthermore, activated myeloid cells rapidly generate ROS upon encounter with pathogens or infected cells to eliminate disease, and recently, this effector function has been noted in cancer contexts as well. Collectively, ROS-induced cancer cell death may help initiate adaptive antitumor immune responses that could synergize with current approved immunotherapies, for improved control of solid tumors. In this work, we explore the use of glucose oxidase, an enzyme which produces hydrogen peroxide, a type of ROS, to therapeutically mimic the endogenous oxidative burst from myeloid cells to promote antigen generation within the tumor microenvironment. We engineer the enzyme to target pan-tumor-expressed integrins both as a tumor-agnostic therapeutic approach and as a strategy to prolong local enzyme activity following intratumoral administration. We found the targeted enzyme potently induced cancer cell death and enhanced cross-presentation by dendritic cells in vitro and further combined with interferon alpha for long-term tumor control in murine MC38 tumors in vivo. Optimizing the single-dose administration of this enzyme overcomes limitations with immunogenicity noted for other prooxidant enzyme approaches. Overall, our results suggest ROS-induced cell death can be harnessed for tumor control and highlight the potential use of designed enzyme therapies alongside immunotherapy against cancer.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164936</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Yeast as a tool for exploring disulfide-rich peptides</title>
<link>https://hdl.handle.net/1721.1/164935</link>
<description>Yeast as a tool for exploring disulfide-rich peptides
Yap, Kuok; Porth, Owen T; Xie, Jing; Wang, Conan K; Durek, Thomas; Wittrup, K Dane; Craik, David J
Cyclic disulfide-rich peptides have become increasingly popular in drug development because their structures enhance molecular stability and allow for mutagenesis to introduce non-native functions. This review focuses on yeast-based platform technologies and their utility in advancing cyclic disulfide-rich peptides as drug modalities and for large-scale biomanufacturing. These technologies include yeast surface display which facilitates the screening of large libraries to develop peptide binders with strong affinity and selectivity for protein targets, while maintaining the innate high stability of the peptide scaffold via protease-based selection pressure. We also describe a recently developed platform that leverages yeast’s ability to secrete correctly folded disulfide-rich peptides while simultaneously displaying peptide or protein tags on their surfaces. In combination with microfluidics technology, the platform creates single-cell yeast-in-droplets reactors, enabling the screening of large libraries based on functional output rather than solely on binding affinity. After identifying cyclic peptide candidates through library-based discovery, these candidates can be produced using a versatile yeast-based bioproduction platform. Traditionally, cyclic disulfide-rich peptides are produced through solid-phase synthesis, a method that generates significant amounts of toxic waste. In contrast, yeast-based bioproduction offers an environmentally sustainable alternative. It has the capability to produce structurally distinct peptides with minimal adjustments and is easily scalable using microbial fermenters, making it an ideal choice for large-scale production.
</description>
<pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164935</guid>
<dc:date>2025-12-18T00:00:00Z</dc:date>
</item>
<item>
<title>Aligning supply chain design for boosting resilience</title>
<link>https://hdl.handle.net/1721.1/164934</link>
<description>Aligning supply chain design for boosting resilience
Sáenz, María Jesús; Revilla, Elena; Acero, Beatriz
Many researchers have analyzed the effect of disruptive events, such as natural disasters and economic and market forces, on global supply chains. However, there is a lack of consensus on delineating a universal collection of supply chain risk management practices that will help companies operate in a global market with large-scale disruptions. In this article, we present an analysis, in conjunction with a worldwide online survey, based on successful global brands and their supply chains. We propose a framework that deploys the dynamics of building supply chain resilience, first linking the design of the supply chain portfolio (local versus global scope, as well as strategic responsiveness versus cost reduction) with supply chain vulnerabilities (external versus internal). We describe the transition between different supply chain structures as a way of coping with disruptions and thus proactively developing resilience. In this article, we introduce both a supply chain risk management approach and the reactive-by-deployment mode, as illustrated by successful global company examples.
</description>
<pubDate>Tue, 01 May 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164934</guid>
<dc:date>2018-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Directed evolution-based discovery of ligands for in vivo restimulation of chimeric antigen receptor T cells</title>
<link>https://hdl.handle.net/1721.1/164932</link>
<description>Directed evolution-based discovery of ligands for in vivo restimulation of chimeric antigen receptor T cells
Grzywa, Tomasz M; Neeser, Alexandra; Ramasubramanian, Ranjani; Romanov, Anna; Tannir, Ryan; Mehta, Naveen K; Cossette, Benjamin; Morgan, Duncan M; Goncalves, Beatriz; Sukaj, Ina; Bergaggio, Elisa; Kadauke, Stephan; Myers, Regina M; Paruzzo, Luca; Ghilardi, Guido; Cozzone, Austin; Schuster, Stephen J; Frey, Noelle; Zhang, Libin; Yousefpour, Parisa; Abraham, Wuhbet; Suh, Heikyung; Ruella, Marco; Grupp, Stephan A; Chiarle, Roberto; Wittrup, K Dane; Ma, Leyuan; Irvine, Darrell J
Chimeric antigen receptor (CAR) T cell therapy targeting CD19 elicits remarkable clinical efficacy in B cell malignancies, but many patients relapse owing to failed expansion and/or progressive loss of CAR-T cells. We recently reported a strategy to potently restimulate CAR-T cells in vivo, enhancing their functionality by administration of a vaccine-like stimulus composed of surrogate peptide ligands for a CAR linked to a lymph node-targeting amphiphilic PEG-lipid (amph-vax). Here we demonstrate a general strategy to discover and optimize peptide mimotopes enabling amph-vax generation for any CAR. We use yeast surface display to identify peptide binders to FMC63 (the scFv used in clinical CD19 CARs), which are then affinity matured by directed evolution. CAR-T vaccines using these optimized mimotopes triggered marked expansion and memory development of CD19 CAR-T cells in both syngeneic and humanized mouse models of B-acute lymphoblastic leukaemia/lymphoma, and enhanced control of disease progression compared with CD19 CAR-T-only-treated mice. This approach enables amph-vax boosting to be applied to any clinically relevant CAR-T cell product.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164932</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning prediction of antibody aggregation and viscosity for high concentration formulation development of protein therapeutics</title>
<link>https://hdl.handle.net/1721.1/164931</link>
<description>Machine learning prediction of antibody aggregation and viscosity for high concentration formulation development of protein therapeutics
Lai, Pin-Kuang; Gallegos, Austin; Mody, Neil; Sathish, Hasige A; Trout, Bernhardt L
Machine learning has been recently used to predict therapeutic antibody aggregation rates and viscosity at high concentrations (150 mg/ml). These works focused on commercially available antibodies, which may have been optimized for stability. In this study, we measured accelerated aggregation rates at 45°C and viscosity at 150 mg/ml for 20 preclinical and clinical-stage antibodies. Features obtained from molecular dynamics simulations of the full-length antibody and sequences were used for machine learning model construction. We found a k-nearest neighbors regression model with two features, spatial positive charge map on the CDRH2 and solvent-accessible surface area of hydrophobic residues on the variable fragment, gives the best performance for predicting antibody aggregation rates (r = 0.89). For the viscosity classification model, the model with the highest accuracy is a logistic regression model with two features, spatial negative charge map on the heavy chain variable region and spatial negative charge map on the light chain variable region. The accuracy and the area under precision recall curve of the classification model from validation tests are 0.86 and 0.70, respectively. In addition, we combined data from another 27 commercial mAbs to develop a viscosity predictive model. The best model is a logistic regression model with two features, number of hydrophobic residues on the light chain variable region and net charges on the light chain variable region. The accuracy and the area under precision recall curve of the classification model are 0.85 and 0.6, respectively. The aggregation rates and viscosity models can be used to predict antibody stability to facilitate pharmaceutical development.
</description>
<pubDate>Tue, 25 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164931</guid>
<dc:date>2022-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced O-glycosylation site prediction using explainable machine learning technique with spatial local environment</title>
<link>https://hdl.handle.net/1721.1/164930</link>
<description>Enhanced O-glycosylation site prediction using explainable machine learning technique with spatial local environment
Hong, Seokyoung; Chattaraj, Krishna Gopal; Guo, Jing; Trout, Bernhardt L; Braatz, Richard D
Motivation: The accurate prediction of O-GlcNAcylation sites is crucial for understanding disease mechanisms and developing effective treatments. Previous machine learning (ML) models primarily relied on primary or secondary protein structural and related properties, which have limitations in capturing the spatial interactions of neighboring amino acids. This study introduces local environmental features as a novel approach that incorporates three-dimensional spatial information, significantly improving model performance by considering the spatial context around the target site. Additionally, we utilize sparse recurrent neural networks to effectively capture the sequential nature of the proteins and to identify key factors influencing O-GlcNAcylation as an explainable ML model.
Results: Our findings demonstrate the effectiveness of our proposed features, with the model achieving an F1 score of 28.3%, as well as its feature selection capability, with the model using only the top 20% of features achieving the highest F1 score of 32.02%, a 1.4-fold improvement over existing PTM models. Statistical analysis of the top 20 features confirmed their consistency with the literature. This method not only boosts prediction accuracy but also paves the way for further research in understanding and targeting O-GlcNAcylation.
Availability and implementation: The entire code, data, and features used in this study are available in the GitHub repository: https://github.com/pseokyoung/o-glcnac-
</description>
<pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164930</guid>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging microtopography to pattern multi-oriented muscle actuators</title>
<link>https://hdl.handle.net/1721.1/164929</link>
<description>Leveraging microtopography to pattern multi-oriented muscle actuators
Rossy, Tamara; Schwendeman, Laura; Kohli, Sonika; Bawa, Maheera; Umashankar, Pavankumar; Habba, Roi; Tchaicheeyan, Oren; Lesman, Ayelet; Raman, Ritu
Engineering skeletal muscle tissue with precisely defined alignment is of significant importance for applications ranging from drug screening to biohybrid robotics. Aligning 2D contractile muscle monolayers, which are compatible with high-content imaging and can be deployed in planar soft robots, typically requires micropatterned cues. However, current protocols for integrating microscale topographical features in extracellular matrix hydrogels require expensive microfabrication equipment and multi-step procedures involving error-prone manual handling steps. To address this challenge, we present STAMP (simple templating of actuators via micro-topographical patterning), an easily accessible and cost-effective one-step method to pattern microtopography of various sizes and configurations on the surface of hydrogels using reusable 3D printed stamps. We demonstrate that STAMP enables precise control of the alignment of mouse and human skeletal muscle fibers without negatively impacting their maturation or function. To showcase the versatility of our technique, we designed a planar soft robot inspired by the iris, which leverages spatially segregated regions of concentric and radial muscle fibers to control pupil dilation. Optogenetic skeletal muscle fibers grown on a STAMPed iris substrate formed a multi-oriented actuator, and selective light stimulation of the radial and concentric fibers was used to control the function of the iris, including pupil constriction. Computational modeling of the biohybrid robot as an active bilayer matched experimental outcomes, showcasing the robustness of our STAMP method for designing, fabricating, and testing planar biohybrid robots capable of complex multi-DOF motion.
</description>
<pubDate>Fri, 14 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164929</guid>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Spray Retention Using Cloaked Droplets to Reduce Pesticide Pollution</title>
<link>https://hdl.handle.net/1721.1/164928</link>
<description>Enhancing Spray Retention Using Cloaked Droplets to Reduce Pesticide Pollution
Jayaprakash, Vishnu; Rufer, Simon; Panata, Sreedath; Varanasi, Kripa K.
Enhancing agrochemical spray retention on plant surfaces would have tremendous benefits to global health and the environment. The bouncing of sprayed pesticide droplets from hydrophobic leaves is a major source of water and soil pollution, and the resultant overuse of pesticides is a human health hazard and a financial burden for farmers. Here we report on the development of sustainable agricultural sprays consisting of cloaked droplets that significantly enhance droplet retention on plant surfaces. By leveraging wetting dynamics, we create cloaked droplets that consist of an ultra-thin food and environmentally safe oil layer (&lt;1% by volume) that encapsulates water droplets. We develop a fundamental understanding of the dynamics of cloaked droplet impact and retention on superhydrophobic surfaces. Using high-speed imaging, we capture how the oil cloak transforms into a wetting ridge that pins the droplets and suppresses their rebound. We span a wide range of impact conditions, oils, oil viscosities, and oil volume fractions to demonstrate the robustness of the approach. By considering a balance of kinetic energy, the work of adhesion, and viscous dissipation in this four-phase system, we develop a physical model that allows us to establish a regime map for rebound suppression. Finally, these findings are implemented into a prototype sprayer which leads to a ∼5-fold reduction in spray waste on crop leaves. We believe that our spray approach can greatly reduce agrochemical pollution as well as pesticide and surfactant usage.
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164928</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Total Synthesis and 13C NMR Revision of Nagelamide C</title>
<link>https://hdl.handle.net/1721.1/164927</link>
<description>Total Synthesis and 13C NMR Revision of Nagelamide C
Tong, Guanghu; Nguyen, Long V.; Jamison, Timothy F.
Nagelamide C (1), a dimeric pyrrole–imidazole alkaloid, exhibits antimicrobial and antibacterial activities. We demonstrate herein the first total synthesis of nagelamide C. This concise work was enabled by a series of significant transformations featuring: an imidazole benzylic Wittig olefination, a site selective bromination, and a regioselective trans-hydrostannylation/Stille coupling to construct a unique trisubstituted olefin. In addition, we show the original 13C NMR data of nagelamide C to be in error and revise the data.
</description>
<pubDate>Wed, 04 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164927</guid>
<dc:date>2025-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetry-Constrained Generation of Diverse Low-Bandgap Molecules with Monte Carlo Tree Search</title>
<link>https://hdl.handle.net/1721.1/164926</link>
<description>Symmetry-Constrained Generation of Diverse Low-Bandgap Molecules with Monte Carlo Tree Search
Subramanian, Akshay; Damewood, James; Nam, Juno; Greenman, Kevin P.; Singhal, Avni P.; Gómez-Bombarelli, Rafael
Organic optoelectronic materials are a promising avenue for next-generation electronic devices due to their solution processability, mechanical flexibility, and tunable electronic properties. In particular, near-infrared (NIR) sensitive molecules have unique applications in night-vision equipment and biomedical imaging. Molecular engineering has played a crucial role in developing non-fullerene acceptors (NFAs) such as the Y-series molecules, which feature a rigid fused-ring electron donor core flanked by electron-deficient end groups, leading to strong intramolecular charge-transfer and extended absorption into the NIR region. However, systematically designing molecules with targeted optoelectronic properties while ensuring synthetic accessibility remains a challenge. To address this, we leverage structural priors from domain-focused, patent-mined datasets of organic electronic molecules using a symmetry-aware fragment decomposition algorithm and a fragment-constrained Monte Carlo Tree Search (MCTS) generator. Our approach generates candidates that retain symmetry constraints from the patent dataset, while also exhibiting red-shifted absorption, as validated by TD-DFT calculations.
</description>
<pubDate>Mon, 12 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164926</guid>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular analysis and design using generative artificial intelligence via multi-agent modeling</title>
<link>https://hdl.handle.net/1721.1/164925</link>
<description>Molecular analysis and design using generative artificial intelligence via multi-agent modeling
Stewart, Isabella; Buehler, Markus J.
We report the use of a multiagent generative artificial intelligence framework, the X-LoRA-Gemma large language model (LLM), to analyze, design and test molecular design. The X-LoRA-Gemma model, inspired by biological principles and featuring 7 billion parameters, dynamically reconfigures its structure through a dual-pass inference strategy to enhance its problem-solving abilities across diverse scientific domains. The model is used to first identify molecular engineering targets through a systematic human–AI and AI–AI self-driving multi-agent approach to elucidate key targets for molecular optimization to improve interactions between molecules. Next, a multi-agent generative design process is used that includes rational steps, reasoning and autonomous knowledge extraction. Target properties of the molecule are identified either using a principal component analysis (PCA) of key molecular properties or sampling from the distribution of known molecular properties. The model is then used to generate a large set of candidate molecules, which are analyzed via their molecular structure, charge distribution, and other features. We validate that as predicted, increased dipole moment and polarizability is indeed achieved in the designed molecules. We anticipate an increasing integration of these techniques into the molecular engineering workflow, ultimately enabling the development of innovative solutions to address a wide range of societal challenges. We conclude with a critical discussion of challenges and opportunities of the use of multi-agent generative AI for molecular engineering, analysis and design.
</description>
<pubDate>Fri, 24 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164925</guid>
<dc:date>2025-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Automated fast-flow synthesis of the immune checkpoint receptors PD-1 and PD-L1</title>
<link>https://hdl.handle.net/1721.1/164924</link>
<description>Automated fast-flow synthesis of the immune checkpoint receptors PD-1 and PD-L1
Fittolani, Giulio; Callahan, Alex J.; Loas, Andrei; Pentelute, Bradley L.
Programmed cell death protein 1 (PD-1) and programmed cell death ligand 1 (PD-L1) are key targets for cancer therapy. Here, we use automated fast-flow peptide synthesis (AFPS) to rapidly produce these challenging β-sheet-rich proteins in their active forms following oxidative refolding protocols. The methods presented here provide rapid access to synthetic, air-stable mutants of PD-1 and PD-L1 in which L-methionine residues are substituted with L-norleucine, potentially enabling investigation of post-translational modifications and mirror-image analogs for drug discovery.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164924</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>GeoXCP: uncertainty quantification of spatial explanations in explainable AI</title>
<link>https://hdl.handle.net/1721.1/164912</link>
<description>GeoXCP: uncertainty quantification of spatial explanations in explainable AI
Lou, Xiayin; Luo, Peng; Li, Ziqi; Gao, Song; Meng, Liqiu
Understanding and explaining complex geographic phenomena—ranging from climate change to socioeconomic disparities—is a central focus in both geography and the broader scientific community. Various methods have been developed to elucidate relationships between variables, from coefficient estimates in linear regression models to the increasingly dominant use of feature attribution scores in Explainable AI (XAI) techniques. However, explanations generated by XAI methods often carry uncertainty, stemming from the model itself and the data used to train the model. Despite the critical importance of accounting for such uncertainty, this issue remains largely overlooked in the geospatial domain. In this study, we developed an uncertainty quantification framework for XAI explanations based on conformal prediction, termed Geospatial eXplanation Conformal Prediction (GeoXCP). By incorporating spatial dependence into the modeling process, GeoXCP produced spatially adaptive explanations with calibrated uncertainty estimates. We validated the effectiveness of GeoXCP through extensive simulation experiments and real-world datasets. The results demonstrated that GeoXCP provided reliable explanations while effectively quantifying uncertainty across diverse geospatial scenarios. Our approach represented a significant advancement in explainable geospatial machine learning, enabling decision-makers to better assess the trustworthiness of model-driven insights. The proposed framework was implemented in a Python package, named GeoXCP.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164912</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Open-source device for high sensitivity magnetic particle spectroscopy, relaxometry, and hysteresis loop tracing</title>
<link>https://hdl.handle.net/1721.1/164911</link>
<description>Open-source device for high sensitivity magnetic particle spectroscopy, relaxometry, and hysteresis loop tracing
Mattingly, E.; Barksdale, A. C.; Śliwiak, M.; Chacon-Caldera, J.; Mason, E. E.; Wald, L. L.
Magnetic nanoparticles (MNPs) are used extensively across numerous disciplines, with applications including Magnetic Particle Imaging (MPI), targeted hyperthermia, deep brain stimulation, immunoassays, and thermometry. The assessment of MNPs, especially those being designed for MPI, is performed with magnetic particle spectrometers, relaxometers, loop tracers, or similar devices. Despite the many applications and the need for particle assessment, there are few consolidated resources for designing or building such an MNP assessment system. Here, we describe the design and performance of an open-source device capable of spectroscopy, relaxometry, and loop tracing. We show example measurements from the device and quantify the detection sensitivity by measuring a dilution series of Synomag-D 70 nm (from 0.5 mg Fe/ml to 7 ng Fe/ml) with a 10 mT drive field at 23.8 kHz. The device measures 260 pg Fe with SNR = 1 and 1.3 ng at SNR = 5 in spectroscopy mode in under one second of measurement time. The system has a dynamic range of 60 μg to 260 pg Fe without changing the hardware configuration. As an example application, we characterize Synomag-D’s relaxation time constant for drive fields 2–18 mT and compare the magnetization responses of two commonly used MNPs.
</description>
<pubDate>Wed, 26 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164911</guid>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Precise Fermi level engineering in a topological Weyl semimetal via fast ion implantation</title>
<link>https://hdl.handle.net/1721.1/164910</link>
<description>Precise Fermi level engineering in a topological Weyl semimetal via fast ion implantation
Mandal, Manasi; Chotrattanapituk, Abhijatmedhi; Woller, Kevin; Wu, Lijun; Xu, Haowei; Hung, Nguyen Tuan; Mao, Nannan; Okabe, Ryotaro; Boonkird, Artittaya; Nguyen, Thanh; Drucker, Nathan C; Chen, Xiaoqian M; Momiki, Takashi; Li, Ju; Kong, Jing; Zhu, Yimei; Li, Mingda
The precise controllability of the Fermi level is a critical aspect of quantum materials. For topological Weyl semimetals, there is a pressing need to fine-tune the Fermi level to the Weyl nodes and unlock exotic electronic and optoelectronic effects associated with the divergent Berry curvature. However, in contrast to two-dimensional materials, where the Fermi level can be controlled through various techniques, the situation for bulk crystals beyond laborious chemical doping poses significant challenges. Here, we report the milli-electron-volt (meV) level ultra-fine-tuning of the Fermi level of bulk topological Weyl semimetal tantalum phosphide using accelerator-based high-energy hydrogen implantation and theory-driven planning. By calculating the desired carrier density and controlling the accelerator profiles, the Fermi level can be experimentally fine-tuned from 5 meV below, to 3.8 meV below, to 3.2 meV above the Weyl nodes. High-resolution transmission electron microscopy reveals the crystalline structure is largely maintained under irradiation, while electrical transport indicates that Weyl nodes are preserved and carrier mobility is also largely retained. Our work demonstrates the viability of this generic approach to tune the Fermi level in semimetal systems and could serve to achieve property fine-tuning for other bulk quantum materials with ultrahigh precision.
</description>
<pubDate>Tue, 25 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164910</guid>
<dc:date>2024-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>A facility for cryogenic ion irradiation and in situ characterization of rare-earth barium copper oxide superconducting tapes</title>
<link>https://hdl.handle.net/1721.1/164909</link>
<description>A facility for cryogenic ion irradiation and in situ characterization of rare-earth barium copper oxide superconducting tapes
Devitre, AR; Fischer, DX; Woller, KB; Clark, BC; Short, MP; Whyte, DG; Hartwig, ZS
Superconducting magnets based on Rare Earth Barium Copper Oxides (REBCO) offer transformative capabilities in the fields of fusion energy, high energy physics, and space exploration. A challenge shared by these applications is the limited lifetime of REBCO due to radiation damage sustained during operation. Here we present a new ion-beam facility that enables simultaneous cryogenic irradiation and in situ characterization of commercial REBCO tapes. The ion source provides spatially uniform fluxes up to 10¹⁸ protons/m²s with kinetic energies up to 3.4 MeV, in addition to helium and higher-Z species. Using this facility, we can induce uniform damage profiles in the first 10–20 µm of REBCO tapes with less than 0.25 appm of hydrogen implanted in REBCO after a dose of 10²⁰ protons/m². The tape can be held between 20 and 300 K with an accuracy of ±0.1 K and is connected to a four-point probe measuring the critical current, Ic, and critical temperature, Tc, before, during, and after irradiation with transport current ranging from 100 nA to 100 A, and a typical voltage noise less than 0.1 μV. These capabilities are presently used to study the effect of irradiation temperature on REBCO performance change during and after proton bombardment, to assess the possibility of Ic and Tc recovery after irradiation through thermal annealing, and to explore the instantaneous and recoverable suppression of Ic and Tc observed during irradiation.
</description>
<pubDate>Wed, 26 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164909</guid>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>High temperature stability of regrown and alloyed Ohmic contacts to AlGaN/GaN heterostructure up to 500 °C</title>
<link>https://hdl.handle.net/1721.1/164908</link>
<description>High temperature stability of regrown and alloyed Ohmic contacts to AlGaN/GaN heterostructure up to 500 °C
Niroula, John; Xie, Qingyun; Rajput, Nitul S; Darmawi-Iskandar, Patrick K; Rahman, Sheikh Ifatur; Luo, Shisong; Palash, Rafid Hassan; Sikder, Bejoy; Yuan, Mengyang; Yadav, Pradyot; Micale, Gillian K; Chowdhury, Nadim; Zhao, Yuji; Rajan, Siddharth; Palacios, Tomás
This Letter reports the stability of regrown and alloyed Ohmic contacts to AlGaN/GaN-on-Si high electron mobility transistors (HEMTs) for high temperature applications up to 500 °C. Transfer length method (TLM) measurements from 25 to 500 °C in air show that the regrown contacts appear to be stable up to 500 °C during short term (approximately 1 h) testing, while alloyed contacts appear to decrease in contact resistance from 300 to 500 °C, though increases in the error bounds due to increased sheet resistance make it difficult to conclude definitively. Additionally, longer term testing shows both technologies remain stable at least up to 48 h at 500 °C, after which the large increase in sheet resistance makes the measurement uncertainty too large to conclude definitively. Advanced microscopy images indicate both the regrown and alloyed contact regions remain structurally intact after prolonged high temperature exposure with no visible degradation in crystallinity or metal composition.
</description>
<pubDate>Wed, 15 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164908</guid>
<dc:date>2024-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Optical-pump–terahertz-probe spectroscopy in high magnetic fields with kHz single-shot detection</title>
<link>https://hdl.handle.net/1721.1/164907</link>
<description>Optical-pump–terahertz-probe spectroscopy in high magnetic fields with kHz single-shot detection
Dastrup, Blake S; Miedaner, Peter R; Zhang, Zhuquan; Nelson, Keith A
We demonstrate optical pump–THz probe (OPTP) spectroscopy with a variable external magnetic field (0–9 T), in which the time-dependent THz signal is measured by echelon-based single-shot detection at a repetition rate of 1 kHz. The method reduces data acquisition times by more than an order of magnitude compared to conventional electro-optic sampling using a scanning delay stage. The approach illustrates the wide applicability of the single-shot measurement approach to non-equilibrium systems that are studied through OPTP spectroscopy, especially in cases where parameters such as magnetic field strength (B) or other experimental parameters are varied. We demonstrate the capabilities of our measurement by performing cyclotron resonance experiments in bulk silicon, where we observe B-field-dependent carrier relaxation and distinct relaxation rates for different carrier types. We use a pair of economical linear array detectors to measure 500 time points on each shot, offering an equivalent performance to camera-based detection with possibilities for higher repetition rates.
</description>
<pubDate>Tue, 12 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164907</guid>
<dc:date>2024-03-12T00:00:00Z</dc:date>
</item>
<item>
<title>Validation of the OpenMC Code for Fusion Applications: The FNG-Streaming Benchmark Case</title>
<link>https://hdl.handle.net/1721.1/164906</link>
<description>Validation of the OpenMC Code for Fusion Applications: The FNG-Streaming Benchmark Case
Segantin, Stefano; Ebiwonjumi, Bamidele; Peterson, Ethan
In this work, we benchmark OpenMC against the FNG-ITER streaming experiment. FNG-ITER streaming, a high-quality experiment carried out at the ENEA laboratories in Frascati, Italy, was initially included in SINBAD (Shielding Integral Benchmark Archive and Database). More recently, the benchmark was included in the Compilation of Nuclear Data Experiments for Radiation Characterization as well. It consists of a neutron shielding experiment with a rather complex geometry that constitutes an appropriate validation study for the use of weight windows within OpenMC. Measurements include flux detection via four different types of activation foils divided into three batches and a set of thermoluminescent detectors for nuclear heating. The OpenMC results are in very good agreement with those of MCNP and the experimental measurements, with the majority of the discrepancies within the combined statistical error and experimental uncertainty (less than 10% computed-to-measured discrepancy).
</description>
<pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164906</guid>
<dc:date>2025-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>Use of Bayesian decision analysis to maximize value in patient-centered randomized clinical trials in Parkinson’s disease</title>
<link>https://hdl.handle.net/1721.1/164905</link>
<description>Use of Bayesian decision analysis to maximize value in patient-centered randomized clinical trials in Parkinson’s disease
Chaudhuri, Shomesh E; Ben Chaouch, Zied; Hauber, Brett; Mange, Brennan; Zhou, Mo; Christopher, Stephanie; Bardot, Dawn; Sheehan, Margaret; Donnelly, Anne; McLaughlin, Lauren; Caldwell, Brittany; Benz, Heather L; Ho, Martin; Saha, Anindita; Gwinn, Katrina; Sheldon, Murray; Lo, Andrew W
A fixed one-sided significance level of 5% is commonly used to interpret the statistical significance of randomized clinical trial (RCT) outcomes. While it is necessary to reduce the false positive rate, the threshold used could be chosen quantitatively and transparently to specifically reflect patient preferences regarding benefit–risk tradeoffs as well as other considerations. How can patient preferences be explicitly incorporated into RCTs in Parkinson’s disease (PD), and what is the impact on statistical thresholds for device approval? In this analysis, we apply Bayesian decision analysis (BDA) to PD patient preference scores elicited from survey data. BDA allows us to choose a sample size (&#119899;) and significance level (&#120572;) that maximize the overall expected value to patients of a balanced two-arm fixed-sample RCT, where the expected value is computed under both null and alternative hypotheses. For PD patients who had previously received deep brain stimulation (DBS) treatment, the BDA-optimal significance levels fell between 4.0% and 10.0%, similar to or greater than the traditional value of 5%. Conversely, for patients who had never received DBS, the optimal significance level ranged from 0.2% to 4.4%. In both populations, the optimal significance level increased with the severity of the patients’ cognitive and motor function symptoms. By explicitly incorporating patient preferences into clinical trial designs and the regulatory decision-making process, BDA provides a quantitative and transparent approach to combining clinical and statistical significance. For PD patients who have never received DBS treatment, a 5% significance threshold may not be conservative enough to reflect their risk-aversion level. However, this study shows that patients who previously received DBS treatment have a higher tolerance for therapeutic risks in exchange for improved efficacy, which is reflected in a higher statistical threshold.
</description>
<pubDate>Wed, 03 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164905</guid>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>Choice denied: impact of income and credit-based tenant screening on the Housing Choice Voucher program</title>
<link>https://hdl.handle.net/1721.1/164904</link>
<description>Choice denied: impact of income and credit-based tenant screening on the Housing Choice Voucher program
So, Wonyoung; Gade, Anisha; Hangen, Forrest
The Housing Choice Voucher program supports over 2.5 million households by subsidizing rent payments within the private housing market. However, challenges arise due to exclusionary practices, undermining the program’s goal of ‘choice.’ Tenant screening practices have been critical in exacerbating these challenges, yet their impact remains understudied. Drawing on tenant screening criteria documents from property management websites and the Survey of Consumer Finances, this study finds that while voucher holders generally meet rent-to-income thresholds due to the subsidies, which keep their rent burden low relative to their income, they still face barriers related to credit scores, bankruptcy history, and debt. These criteria, which apply to both voucher and non-voucher renters, may exclude approximately one in ten voucher holders, despite the guaranteed portion of rent covered by public assistance. These findings show an urgent need for policy interventions to address the potential exclusionary impacts of tenant screening services.
</description>
<pubDate>Wed, 30 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164904</guid>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Three-Dimensional Full-Core BEAVRS Using OpenMOC with Transport Equivalence</title>
<link>https://hdl.handle.net/1721.1/164895</link>
<description>Three-Dimensional Full-Core BEAVRS Using OpenMOC with Transport Equivalence
Giudicelli, G; Forget, B; Smith, K
Using an optimized implementation of the three-dimensional (3D) method of characteristics for neutron transport, along with a novel equivalence method for transport calculations that was designed to correct self-shielding errors from neglecting the angular dependence of resonant group absorption, a 3D full-core light water reactor hybrid stochastic-deterministic eigenvalue calculation was achieved. This paper presents the optimizations developed and compares the transport solutions obtained. For the statepoint, run times near 10 000 CPU hours are achieved—improving on previous works by an order of magnitude—with near 1% error on pin fission to 238U capture ratios and a few dozen pcm on the eigenvalue.
</description>
<pubDate>Fri, 04 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164895</guid>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Considering a US-Supported Self-Defense Option for Taiwan</title>
<link>https://hdl.handle.net/1721.1/164894</link>
<description>Considering a US-Supported Self-Defense Option for Taiwan
Glaser, Charles L.
There is wide agreement that Taiwan is the most dangerous issue dividing the United States and China. China believes Taiwan is part of its homeland, views unification with Taiwan as a core interest, and is determined to gain full control of the island. China continues to prefer peaceful unification, but explicitly retains the option of using military forces to achieve unification and seeks to use the threat of military force to strengthen its negotiating hand. Current US policy includes an ambiguous commitment to defend Taiwan if attacked or severely coerced by China—it leaves open whether and how the United States would respond. In addition, the United States provides Taiwan with weapons to improve its ability to defend itself. The United States is pressing Taiwan to deploy smaller mobile weapons that would increase the survivability and lethality of its forces; these forces would support a “porcupine strategy” that makes Taiwan harder to invade and conquer and would, at a minimum, provide time for US forces to arrive to aid Taiwan’s defense.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164894</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Needs, Wants . . . and Excuses: What Executives Can Learn from Zig Ziglar About Working with Universities</title>
<link>https://hdl.handle.net/1721.1/164893</link>
<description>Needs, Wants . . . and Excuses: What Executives Can Learn from Zig Ziglar About Working with Universities
Wright, Randall S.
Zig Ziglar was a famous sales trainer, motivational speaker, and author on salesmanship. When he died on November 28, 2012, Kevin Kruse (2024)—best-selling author of Emotional Intelligence: 52 Strategies, coach to Fortune 500 CEOs, Marine Corps generals, and Silicon Valley entrepreneurs—wrote this in Forbes: “Zig Ziglar died today at age 86. A World War II veteran, Zig Ziglar became the top salesperson in several organizations before striking out on his own as a motivational speaker and trainer. With a Southern charm and lessons grounded in Christianity, Ziglar wrote over two dozen books and amassed a following of millions who were encouraged by his lessons for success.”
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164893</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Computational studies of electric field effects in CO2 methanation on Ni metal surfaces</title>
<link>https://hdl.handle.net/1721.1/164892</link>
<description>Computational studies of electric field effects in CO2 methanation on Ni metal surfaces
Wakamatsu, Katsuhiro; Yasuda, Takaaki; Aratani, Masato; Ogura, Teppei
Non-Faradaic electrochemical modification of catalytic activity (NEMCA) with an electric field (EF) has attracted attention as one of the methods to improve catalyst performance. However, this activation mechanism is still not clear. In this study, we focused on the NEMCA mechanism in CO2 methanation on a Ni metal catalyst with a solid oxide electrolysis cell (SOEC) and calculated two possible effects of the NEMCA mechanism: direct EF application and oxygen atom co-adsorption, using density functional theory calculations and detailed kinetic simulations. By comparing these effects in terms of the kinetic energy changes in the rate-determining steps, we revealed that the spillover effect of lattice oxygen toward the catalyst surface is dominant in the NEMCA mechanism. We also found that overall CO2 methanation is promoted in SOEC mode with oxygen atom co-adsorption at both flat and step sites of Ni.
</description>
<pubDate>Thu, 14 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164892</guid>
<dc:date>2024-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Equilibrium configurations of line arrays with respect to the deviatoric mean drift forces</title>
<link>https://hdl.handle.net/1721.1/164891</link>
<description>Equilibrium configurations of line arrays with respect to the deviatoric mean drift forces
Tokić, Grgur; Yue, Dick KP
Monochromatic waves incident on an array of structures give rise to nonlinear, time-constant mean drift forces (MDFs). These forces depend on the array's spatial configuration; their magnitude and direction are, in general, different for every structure in the array. If the spatial configuration of an array is not fixed, as is the case in arrays of individually anchor-moored structures, the time-constant differences in MDF on individual bodies can lead to a change in spatial configuration, which could, in turn, significantly affect both the first-order, time-harmonic response of the array and the downwave component of the MDF. Here, we explore the dependency of these deviatoric forces on array configurations and on the frequency of the incident monochromatic waves. We consider configurations of line arrays (consisting of 2–5 vertical circular cylinders) that are described by one or two parameters, and we focus on the along-array component of the deviatoric forces. Using multiple scattering computational simulations, we identify the array configurations in which the deviatoric drift forces are zero, and we discuss the stability of these equilibrium configurations with respect to class-preserving configuration perturbations. Both stable and unstable equilibria exist, but the relative number of unstable equilibria grows as the number of degrees of freedom of the configuration perturbations increases. Interestingly, the stable configurations experience a generally lower downwave mean drift force on the entire array than the unstable ones. Overall, the variations in the deviatoric and downwave MDFs between equilibria are significant (on the order of the isolated-body MDF).
</description>
<pubDate>Thu, 02 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164891</guid>
<dc:date>2024-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>1000-MW CSP with 100-gigawatt-hour crushed-rock heat storage to replace dispatchable fossil-fuel electricity</title>
<link>https://hdl.handle.net/1721.1/164890</link>
<description>1000-MW CSP with 100-gigawatt-hour crushed-rock heat storage to replace dispatchable fossil-fuel electricity
Forsberg, Charles
We are developing 100-GWh heat-storage systems for use with 1000-MW Concentrated Solar Power (CSP) and nuclear reactor systems, with capital cost goals of several dollars per kWh of heat storage—a factor of 50 below lithium-ion batteries per unit of electricity. The capabilities of a 100-GWh heat storage system are similar to those of the Tennessee Valley Authority Raccoon Mountain pumped hydro facility, which can provide 1652 MW(e) for 22 hours to address daily to weekly storage. The low capital cost of the Crushed Rock Ultra-large Stored Heat (CRUSH) system is only possible in large-capacity systems; thus, the CSP system's average 24/7 heat input may exceed 1000 MW to match the heat storage capacity. Hot oil or nitrate salt is pumped from multiple solar farms or towers to the central CRUSH system and its associated power block. The peak power block output may be 2 to 4 times the average output, with large economies of scale relative to the smaller power blocks of existing CSP systems. The cost savings from the large storage and power block exceed the cost of insulated hot-oil or hot-nitrate-salt pipelines over 10+ kilometers. The heat is stored in crushed rock in piles 20 m high and up to 250 m by 250 m within an insulated floor and building structure. The sides of the rock pile are sloped, allowing the rock to expand and contract with temperature without generating mechanical forces against walls. Heat is transferred from CSP to the crushed rock and then to the power cycle using (1) heat transfer oils for lower-temperature power systems up to 400°C or (2) nitrate salts for higher-temperature power systems up to 600°C. In charging mode, hot heat transfer fluid is sprayed over the crushed rock and drains through it to collection pans at the bottom to be reheated. Sections of rock are heated sequentially. 
In discharge mode, cold heat transfer fluid is sprayed over the crushed rock and drains through it to the collection pans below, delivering hot fluid to the power cycle. Heat storage costs are minimized by three features. Crushed rock is the lowest-cost storage material. The large building size minimizes the surface-to-volume ratio and thus building, insulation, and foundation costs. The inventory, and thus the cost, of oil and nitrate salt is minimized by using these fluids to transfer heat from the CSP collectors to storage and then to the power block—but not for heat storage.
</description>
<pubDate>Fri, 06 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164890</guid>
<dc:date>2023-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>The way forward: The path to monolithic additive manufacture of lower hybrid current drive launchers</title>
<link>https://hdl.handle.net/1721.1/164889</link>
<description>The way forward: The path to monolithic additive manufacture of lower hybrid current drive launchers
Seltzman, AH; Wukitch, SJ
Additive Manufacturing (AM) is a key enabling technology for the rapid production of complex radio-frequency (RF) structures used in lower hybrid current drive (LHCD) launchers. Glenn Research Copper 84 (GRCop-84), a copper alloy precipitation hardened by Cr2Nb (8 at. % Cr, 4 at. % Nb), is suitable for AM with Laser Powder Bed Fusion (L-PBF), achieving 99.5% density, Ra = 3-4 µm surface roughness, a yield strength of 470 MPa, and an ultimate tensile strength (UTS) of 710 MPa in the as-printed condition. AM of a high field side (HFS) LHCD launcher from GRCop-84 alloy demonstrated several critical advancements in AM of RF launchers. Waveguides with a pentagonal cross-section were designed to support the top internal waveguide surface with 45-degree chamfers from the sidewall, eliminating collapse of the ceiling while maintaining RF properties nearly identical to those of a rectangular cross-section. Hot Isostatic Pressing (HIPing) consolidated residual voids within the material, increasing density from 99.5% to 100%. Chemical-Mechanical Polishing (CMP) reduced residual surface roughness from the L-PBF process to Ra = 0.1 µm / Rq = 0.4 µm to lower RF losses. Advancements in L-PBF for the AM of copper alloys have increased the maximum build volume from 250 x 250 x 300 mm on the Concept Laser M2 printer to 400 x 400 x 400 mm on the EOS M400 printer. This increased build volume now enables monolithic AM of complete LHCD launchers with integrated cooling channels, eliminating the time-consuming laser welding assembly of launcher segments previously required by the smaller build volume.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164889</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Particle-in-cell simulations of parasitic electrostatic wave excitation in the ion cyclotron range of frequencies and high harmonic fast wave regimes</title>
<link>https://hdl.handle.net/1721.1/164888</link>
<description>Particle-in-cell simulations of parasitic electrostatic wave excitation in the ion cyclotron range of frequencies and high harmonic fast wave regimes
Diab, Raymond; Baek, Seung-Gyou; Bonoli, Paul; Jenkins, Thomas G; Ono, Masayuki; Smithe, David
Using the open-source code SMILEI [J. Derouillat et al., Comput. Phys. Commun. 222, 351-373 (2018)], we perform one-dimensional full-f particle-in-cell (PIC) simulations of parasitic electrostatic wave excitation in the Ion Cyclotron Range of Frequencies (ICRF) and High Harmonic Fast Wave (HHFW) regimes in an inhomogeneous plasma. We first study direct coupling from the fast wave to electrostatic waves at the lower hybrid (LH) resonance (S=0). In the ICRF regime, we show that the fast wave can couple to the Ion Bernstein Wave (IBW), which propagates beyond the LH resonance layer. On the other hand, in the HHFW regime, no direct coupling to the IBW is observed, but electrostatic waves, likely to be Hot Ion Plasma Waves (HIPW or HPW), are seen on the low-density side of the LH resonance layer. The coupling efficiency to electrostatic waves is seen to increase with ion temperature. Parametric decay instabilities (PDIs) are then investigated in both regimes. In the ICRF regime, both resonant and non-resonant decay channels are observed and compared with theory. In the HHFW regime, we observe multiple sidebands separated by the ion cyclotron frequency, as measured experimentally on NSTX [J. R. Wilson et al., AIP Conf. Proc. 787, 66 (2005)]. The nature of these waves is discussed. Perpendicular ion heating is also found in the region where PDIs occur, consistent with experimental observations.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164888</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Towards fast, accurate predictions of RF simulations via data-driven modeling: Forward and lateral models</title>
<link>https://hdl.handle.net/1721.1/164887</link>
<description>Towards fast, accurate predictions of RF simulations via data-driven modeling: Forward and lateral models
Wallace, GM; Bai, Z; Bertelli, N; Bethel, EW; Perciano, T; Shiraiwa, S; Wright, JC
Three machine learning techniques (multilayer perceptron, random forest, and Gaussian process) provide fast surrogate models for lower hybrid current drive (LHCD) simulations. A single GENRAY/CQL3D simulation without radial diffusion of fast electrons requires several minutes of wall-clock time to complete, which is acceptable for many purposes, but too slow for integrated modeling and real-time control applications. More accurate simulations with fast electron diffusion are even slower, requiring multiple hours of run time with parallel processing. The machine learning models use a database of 16,000+ GENRAY/CQL3D simulations for training, validation, and testing. Latin hypercube sampling methods implemented in πScope ensure that the database covers the range of 9 input parameters (ne0, Te0, Ip, Bt, R0, n∥, Zeff, Vloop, PLHCD) with sufficient density in all regions of parameter space. The surrogate models reduce the computation time from minutes or hours to milliseconds with high accuracy across the input parameter space. Data-driven surrogate models also allow for solving inverse and “lateral” problems. A surrogate model for the inverse problem maps from a desired current drive or power deposition profile to a set of input parameters that would result in such a profile, while a surrogate model for the lateral problem maps from a measured experimental quantity, such as hard x-ray emission, to a current drive or power deposition profile. The πScope database creation workflow is flexible and applicable to other RF simulation codes such as TORIC.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164887</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Decrypting the mechanisms of wicking and evaporation heat transfer on micro-pillars during the pool boiling of water using high-resolution infrared thermometry</title>
<link>https://hdl.handle.net/1721.1/164886</link>
<description>Decrypting the mechanisms of wicking and evaporation heat transfer on micro-pillars during the pool boiling of water using high-resolution infrared thermometry
Wang, Chi; Rahman, Md Mahamudur; Bucci, Matteo
Surfaces with micrometer-scale pillars have shown great potential in delaying the boiling crisis and enhancing the critical heat flux (CHF). However, the physical mechanisms enabling this enhancement remain unclear. This knowledge gap is due to a lack of diagnostics that can elucidate how micro-pillars affect thermal transport phenomena on the engineered surface. In this study, for the first time, we measure time-dependent temperature and heat flux distributions on a boiling surface with engineered micro-pillars using infrared thermometry. Using these data, we reveal the presence of an intra-pillar liquid layer, created by the nucleation of bubbles and partially refilled by capillary effects. However, contrary to conventional wisdom, the energy removed by the evaporation of this liquid cannot explain the observed CHF enhancement. Rather, predicting its dryout is the key to delaying the boiling crisis. We achieve this goal using simple analytic models and demonstrate that this process is driven by conduction effects in the boiling substrate and, importantly, in the intra-pillar liquid layer itself. These effects also control the wicking flow rate and its penetration length. The boiling crisis occurs when, by coalescing, the intra-pillar liquid layer becomes too large for the wicking flow to reach its innermost region. Our study reveals and quantifies previously unidentified physical aspects that are key to the performance optimization of boiling surfaces for cooling applications.
</description>
<pubDate>Wed, 08 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164886</guid>
<dc:date>2023-03-08T00:00:00Z</dc:date>
</item>
<item>
<title>Regulating Wait-Driven Requests in Queues</title>
<link>https://hdl.handle.net/1721.1/164881</link>
<description>Regulating Wait-Driven Requests in Queues
Freund, Daniel; Hausman, David; Weng, Wentao
The study of rational queueing has a long and distinguished history focused on individuals' preference to avoid waiting. Surprisingly, there are settings in which some potential arrivals (which we also refer to as requests) derive utility from waiting and disutility from service. Our primary example is the U.S. affirmative asylum process. In this context, applicants obtain a work permit while waiting for an asylum interview; hence, if the (expected) wait is long enough, then even an applicant who knows that their application will be denied and lead to deportation proceedings may find it in their interest to apply and thus benefit from legally working during the wait. Similar dynamics could occur in other settings, such as content moderation in social networks.&#13;
The common thread of these examples is the potentially self-exciting queue: when wait times are long, many arrivals are incentivized to join, and wait times become even longer. However, the system designer usually wants to avoid a large backlog. Indeed, the US Citizenship and Immigration Services (USCIS) mostly schedules asylum interviews in a Last-In-First-Out (LIFO) manner with the explicit goal of dissuading applicants with non-meritorious cases from trying to exploit the long backlog. Despite this interesting scheduling choice in practice, and the potential prevalence of similar settings in other applications, the existing literature on rational queueing lacks frameworks to study the impact of wait-driven requests.&#13;
Motivated by this gap in the literature, we formalize a dynamical system where in each round, a given scheduling policy and a realized request rate determine the wait time distribution in a fluid queueing system. Observing the expected benefit from waiting in one round, requests update their decisions, setting the request rate for the next round. Assuming a concave benefit function from waiting, alongside general conditions, we prove that, for minimizing the backlog, LIFO is most effective while First-In-First-Out (FIFO) is least effective among all work-conserving policies. Moreover, we show that the dynamical system exhibits metastability: for either FIFO or LIFO, the system converges to either a zero-wait or a congested equilibrium.&#13;
Although some asylum practitioners support the use of LIFO, critics often admonish the real-world use of LIFO for its failure to maintain FIFO's order fairness: earlier requests should get earlier service. Our results demonstrate this trade-off between LIFO and FIFO. But we also show limitations of hybrid policies, which probabilistically follow either LIFO or FIFO, in navigating the trade-off between LIFO's efficiency and FIFO's fairness. Our work formalizes the concept of order fairness in queueing systems with abandonment and demonstrates that hybrid policies can be Pareto-dominated by LIFO: they may have both longer backlog and worse order fairness. Finally, we use real-world data on the scheduling of affirmative asylum applications to evaluate the change in fairness over the past 20 years under different policies.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164881</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>DeepSeek Inside: Origins, Technology, and Impact</title>
<link>https://hdl.handle.net/1721.1/164880</link>
<description>DeepSeek Inside: Origins, Technology, and Impact
Cusumano, Michael
The release of DeepSeek V3 and R1 in January 2025 caused steep declines in the stock prices of companies that provide generative artificial intelligence (GenAI) infrastructure technology and datacenter services. These two large language models (LLMs) came from a little-known Chinese startup with approximately 200 employees compared to at least 3,500 employees for industry-leader OpenAI. DeepSeek seemed to have developed this powerful technology much more cheaply than previously thought possible. If true, DeepSeek had the potential to disrupt the economics of the entire GenAI ecosystem and the dominance of U.S. companies ranging from OpenAI to Nvidia.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164880</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Density-Dependent Graph Orientation and Coloring in Scalable MPC</title>
<link>https://hdl.handle.net/1721.1/164879</link>
<description>Density-Dependent Graph Orientation and Coloring in Scalable MPC
Ghaffari, Mohsen; Grunau, Christoph
This paper presents massively parallel computation (MPC) algorithms in the strongly sublinear memory regime (aka, scalable MPC) for orienting and coloring graphs as a function of their subgraph density. Our algorithms run in poly(log log n) rounds and compute an orientation of the edges with maximum outdegree O(α log log n) as well as a coloring of the vertices with O(α log log n) colors. Here, α denotes the density of the densest subgraph. Our algorithm's round complexity is notable because it breaks the [EQUATION] barrier, which applied to the previously best known density-dependent orientation algorithm [Ghaffari, Lattanzi, and Mitrovic ICML'19] and is common to many other scalable MPC algorithms.
PODC ’25, Huatulco, Mexico
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164879</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Privacy-Preserving Mechanisms for Coordinating Airspace Usage in Advanced Air Mobility</title>
<link>https://hdl.handle.net/1721.1/164878</link>
<description>Privacy-Preserving Mechanisms for Coordinating Airspace Usage in Advanced Air Mobility
Maheshwari, Chinmay; Mendoza, Maria; Tuck, Victoria; Su, Pan-Yang; Qin, Victor; Seshia, Sanjit; Balakrishnan, Hamsa; Sastry, Shankar
Advanced Air Mobility (AAM) operations are expected to transform air transportation while challenging current air traffic management practices. By introducing a novel market-based mechanism, we address the problem of on-demand allocation of capacity-constrained airspace to AAM vehicles with heterogeneous and private valuations. We model airspace and air infrastructure as a collection of contiguous regions (or sectors) with constraints on the number of vehicles that simultaneously enter, stay, or exit each region. Vehicles request access to airspace with trajectories spanning multiple regions at different times. We use the graph structure of our airspace model to formulate the allocation problem as a path allocation problem on a time-extended graph. To ensure that the cost information of AAM vehicles remains private, we introduce a novel mechanism that allocates each vehicle a budget of "air-credits" (an artificial currency) and anonymously charges prices for traversing the edges of the time-extended graph. We seek to compute a competitive equilibrium that ensures that: (i) capacity constraints are satisfied, (ii) a strictly positive resource price implies that the sector capacity is fully utilized, and (iii) the allocation is integral and optimal for each AAM vehicle given current prices, without requiring access to individual vehicle utilities. However, a competitive equilibrium with integral allocations may not always exist. We provide sufficient conditions for the existence and computation of a fractional competitive equilibrium, where allocations can be fractional. Building on these theoretical insights, we propose a distributed, iterative, two-step algorithm that: 1) computes a fractional competitive equilibrium, and 2) derives an integral allocation from this equilibrium. We validate the effectiveness of our approach in allocating trajectories for the emerging urban air mobility service of drone delivery.
</description>
<pubDate>Mon, 30 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164878</guid>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>Meschers: Geometry Processing of Impossible Objects</title>
<link>https://hdl.handle.net/1721.1/164877</link>
<description>Meschers: Geometry Processing of Impossible Objects
Dodik, Ana; Yu, Isabella; Chandra, Kartik; Ragan-Kelley, Jonathan; Tenenbaum, Joshua; Sitzmann, Vincent; Solomon, Justin
Impossible objects, geometric constructions that humans can perceive but that cannot exist in real life, have been a topic of intrigue in visual arts, perception, and graphics, yet no satisfying computer representation of such objects exists. Previous work embeds impossible objects in 3D, cutting them or twisting/bending them in the depth axis. Cutting an impossible object changes its local geometry at the cut, which can hamper downstream graphics applications, such as smoothing, while bending makes it difficult to relight the object. Both of these can invalidate geometry operations, such as distance computation. As an alternative, we introduce Meschers, meshes capable of representing impossible constructions akin to those found in M.C. Escher's woodcuts.  Our representation has a theoretical foundation in discrete exterior calculus and supports the use-cases above, as we demonstrate in a number of example applications. Moreover, because we can do discrete geometry processing on our representation, we can inverse-render impossible objects. We also compare our representation to cut and bend representations of impossible objects.
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164877</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints</title>
<link>https://hdl.handle.net/1721.1/164876</link>
<description>Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints
Lechowicz, Adam; Christianson, Nicolas; Sun, Bo; Bashir, Noman; Hajiesmaili, Mohammad; Wierman, Adam; Shenoy, Prashant
We introduce and study spatiotemporal online allocation with deadline constraints (SOAD), a new online problem motivated by emerging challenges in sustainability and energy. In SOAD, an online player completes a workload by allocating and scheduling it on the points of a metric space (X, d) while subject to a deadline T. At each time step, a service cost function is revealed that represents the cost of servicing the workload at each point, and the player must irrevocably decide the current allocation of work to points. Whenever the player moves this allocation, they incur a movement cost defined by the distance metric d(•, •) that captures, e.g., an overhead cost. SOAD formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for SOAD along with a matching lower bound establishing its optimality. Our main algorithm, ST-CLIP, is a learning-augmented algorithm that takes advantage of predictions (e.g., forecasts of relevant costs) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms in a simulated case study of carbon-aware spatiotemporal workload management, an application in sustainable computing that schedules a delay-tolerant batch compute job on a distributed network of data centers. In these experiments, we show that ST-CLIP substantially improves on heuristic baseline methods.
SIGMETRICS Abstracts ’25, Stony Brook, NY, USA
</description>
<pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164876</guid>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Faraday Cage Estimation of Normals for Point Clouds and Ribbon Sketches</title>
<link>https://hdl.handle.net/1721.1/164875</link>
<description>Faraday Cage Estimation of Normals for Point Clouds and Ribbon Sketches
Scrivener, Daniel; Cui, Daniel; Coldren, Ellis; Abulnaga, Mazdak; Bessmeltsev, Mikhail; Chien, Edward
We propose a novel method (FaCE) for normal estimation of unoriented point clouds and VR ribbon sketches that leverages a modeling of the Faraday cage effect. Input points, or a sampling of the ribbons, form a conductive cage and shield the interior from external fields. The gradient of the maximum field strength over external field scenarios is used to estimate a normal at each input point or ribbon. The electrostatic effect is modeled with a simple Poisson system, accommodating intuitive user-driven sculpting via the specification of point charges and Faraday cage points. On inputs sampled from clean, watertight meshes, our method achieves comparable normal quality to existing methods tailored for this scenario. On inputs containing interior structures and artifacts, our method produces superior surfacing output when combined with Poisson Surface Reconstruction. In the case of ribbon sketches, our method accommodates sparser ribbon input while maintaining an accurate geometry, allowing for greater flexibility in the artistic process. We demonstrate superior performance to an existing approach for surfacing ribbon sketches in this sparse setting.
</description>
<pubDate>Fri, 25 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164875</guid>
<dc:date>2025-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Networking Systems for Video Anomaly Detection: A Tutorial and Survey</title>
<link>https://hdl.handle.net/1721.1/164873</link>
<description>Networking Systems for Video Anomaly Detection: A Tutorial and Survey
Liu, Jing; Liu, Yang; Lin, Jieyu; Li, Jielin; Cao, Liang; Sun, Peng; Hu, Bo; Song, Liang; Boukerche, Azzedine; Leung, Victor
The increasing utilization of surveillance cameras in smart cities, coupled with the surge of online video applications, has heightened concerns regarding public security and privacy protection, propelling automated Video Anomaly Detection (VAD) into a fundamental research task within the Artificial Intelligence (AI) community. With advances in deep learning and edge computing, VAD has made significant progress in synergy with emerging applications in smart cities and the video internet, moving beyond the conventional research scope of algorithm engineering to deployable Networking Systems for VAD (NSVAD), a practical hotspot at the intersection of the AI, IoVT, and computing fields. In this article, we delineate the foundational assumptions, learning frameworks, and applicable scenarios of various deep learning-driven VAD routes, offering an exhaustive tutorial for novices in NSVAD. In addition, this article elucidates core concepts by reviewing recent advances and typical solutions and aggregating available research resources accessible at https://github.com/fdjingliu/NSVAD. Lastly, this article projects future development trends and discusses how the integration of AI and computing technologies can address existing research challenges and promote open opportunities, serving as an insightful guide for prospective researchers and engineers.
</description>
<pubDate>Wed, 07 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164873</guid>
<dc:date>2025-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis and Performance Evaluation of Blockchain Consensus Mechanisms for Network Sharing</title>
<link>https://hdl.handle.net/1721.1/164872</link>
<description>Analysis and Performance Evaluation of Blockchain Consensus Mechanisms for Network Sharing
Zeydan, Engin; Mangues-Bafalluy, Josep; Arslan, Suayb; Turk, Yekta; Antevski, Kiril
The growing demand for mobile data services has made it necessary to find efficient and cost-effective ways to share networks. Blockchain technology is a promising solution to the challenges of network sharing, such as interoperability, trust, and accountability. This paper presents a comprehensive classification and categorization of blockchain-based network sharing scenarios, highlighting their advantages and limitations. Seven network sharing scenarios are identified, ranging from centralized network sharing to fully decentralized spectrum sharing. The suitability of selected blockchain consensus algorithms (namely Proof-of-Work (PoW) with Ethereum, Proof-of-Authority (PoA) with Ethereum, Practical Byzantine Fault Tolerance (PBFT) with Tendermint, and Proof-of-Stake (PoS) with Cosmos) is assessed for selected scenarios through extensive evaluations. This paper also identifies gaps and opportunities in blockchain-based network sharing solutions and presents future research directions.
</description>
<pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164872</guid>
<dc:date>2026-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>MapTune: Versatile ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning</title>
<link>https://hdl.handle.net/1721.1/164871</link>
<description>MapTune: Versatile ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning
Liu, Mingju; Robinson, Daniel; Li, Yingjie; Maximilian Kuehn, Johannes; Liang, Rongjian; Ren, Haoxing; Yu, Cunxi
Technology mapping involves mapping logical circuits to a library of cells. Traditionally, the full technology library is used, leading to a large search space and potential overhead. Motivated by randomly sampled technology mapping case studies, we propose MapTune, a framework that addresses this challenge by utilizing reinforcement learning to make design-specific choices during cell selection. By learning from the environment, MapTune refines the cell selection process, resulting in a reduced search space and potentially improved mapping quality. The effectiveness of MapTune is evaluated on a wide range of benchmarks, different technology libraries, and various technology mappers. The experimental results demonstrate that MapTune achieves higher mapping accuracy and reduces delay/area across diverse circuit designs, technology libraries, and mappers. The paper also discusses Pareto-optimal exploration and confirms the perpetual delay-area trade-off. Conducted on the ISCAS 85/89, ITC/ISCAS 99, VTR 8.0, and EPFL benchmark suites, the post-technology-mapping and post-sizing quality-of-results (QoR) are significantly improved, with an average Area-Delay Product (ADP) improvement of 16.56% across all exploration settings in MapTune. The improvements remain consistent across four technologies (7 nm, 45 nm, 130 nm, and 180 nm) with various mappers from both state-of-the-art open-source and commercial synthesis tools.
</description>
<pubDate>Fri, 11 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164871</guid>
<dc:date>2025-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>A Two-Stage Approach to Improve Poverty Mapping Spatial Resolution</title>
<link>https://hdl.handle.net/1721.1/164870</link>
<description>A Two-Stage Approach to Improve Poverty Mapping Spatial Resolution
Salas, Joaquín; Zea-Ortiz, Marivel; Vera, Pablo; Wood, Danielle
Global extreme poverty has fallen dramatically over the past two centuries, yet hundreds of millions remain impoverished, underscoring the need for scalable monitoring tools. In Mexico, poverty metrics are available only sporadically in time and space (e.g., every 5 years at the municipal level), making it difficult for decision-makers to access reliable, up-to-date, and sufficiently detailed information and highlighting the need for higher-resolution, timely methods. To address this problem, we propose a two-stage approach that combines socioeconomic and Earth observation-based data. Initially, a machine learning model maps census variables to official poverty indicators belonging to a multidimensional model, yielding fine-scale poverty estimates. A census-based model trained with eXtreme Gradient Boosting (XGBoost) achieved a coefficient of determination (R²) of approximately 0.842, indicating strong agreement with official poverty figures and providing high-resolution proxies. Afterward, we use features based on remote observations to predict these poverty estimates at a 469 m grid scale. In this case, advanced foundation models outperformed other machine learning (ML) approaches, achieving an R² of 0.683. While foundation models enable more accurate, fine-scale poverty mapping and could accelerate poverty assessments, their use comes at a heavy price in terms of carbon emissions.
</description>
<pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164870</guid>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Effectiveness of a Participatory Voice Intervention on Psychological Well-Being Among Warehouse Workers: Results From the Fulfillment Center Intervention Study, United States, 2021‒2023</title>
<link>https://hdl.handle.net/1721.1/164869</link>
<description>Effectiveness of a Participatory Voice Intervention on Psychological Well-Being Among Warehouse Workers: Results From the Fulfillment Center Intervention Study, United States, 2021‒2023
Siebach, Kirsten F.; Diaz-Linhart, Yaminette; Kubzansky, Laura D.; Berkman, Lisa F.; Wang, Molin; Ge, Lin; Kowalski, Alexander M.; Rahmandad, Hazhir; Kelly, Erin L.
Objectives. To examine whether a novel workplace intervention designed to increase worker voice can reduce psychological distress and improve emotional vitality at 6- and 12-month follow-up.
Methods. We conducted a cluster-randomized trial in 16 fulfillment centers throughout the United States between 2021 and 2023. Data were collected at three time points; 2813 workers participated in at least one survey. Treated fulfillment centers established a new, participatory committee called the Health and Well-Being Committee (HaWC). We compared differences in psychological distress and emotional vitality and explored differential treatment effects by gender.
Results. At baseline, 51% of workers reported moderate or severe psychological distress. Intervention sites had lower average psychological distress at the 6-month follow-up than control sites, with no significant differences at the 12-month follow-up. Gender moderation analyses suggest the HaWC was particularly effective in reducing psychological distress among men at the 6-month follow-up.
Conclusions. Our findings suggest that opportunities for workers to share concerns with a committee of their peers tasked with identifying solutions can support mental health. Our study contributes important experimental evidence on workplace interventions that improve the well-being of low-wage U.S. populations.
</description>
<pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164869</guid>
<dc:date>2026-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum information science and underground facilities</title>
<link>https://hdl.handle.net/1721.1/164817</link>
<description>Quantum information science and underground facilities
Formaggio, Joseph A
As both nuclear physics and particle physics involve the quantum interactions of many sub-atomic particles, there has always existed a strong interplay between these fields and the study of quantum physics and quantum information systems (QIS). This interplay has accelerated in recent years, particularly with the emergence of new, highly sensitive technologies, nascent access to quantum computing environments at the O(10)-O(100)-bit scale, and the use of coherence and entanglement to enhance sensitivity to novel and exotic phenomena. One unusual area of interplay between the two disciplines that has recently emerged is the role of background radiation and background mitigation on highly sensitive systems such as qubits.
</description>
<pubDate>Tue, 05 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164817</guid>
<dc:date>2023-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Reduced-order model to predict thermal conductivity of dimensionally confined materials</title>
<link>https://hdl.handle.net/1721.1/164816</link>
<description>Reduced-order model to predict thermal conductivity of dimensionally confined materials
Hosseini, S Aria; Greaney, Alex; Romano, Giuseppe
Predicting nanoscale thermal transport in dielectrics requires models, such as the Boltzmann transport equation (BTE), that account for phonon boundary scattering in structures with complex geometries. Although the BTE has been validated against several key experiments, its computational expense limits its applicability. Here, we demonstrate the use of an analytic reduced-order model for predicting the thermal conductivity in dimensionally confined materials, i.e., monolithic and porous thin films, and rectangular and cylindrical nanowires. The approach uses the recently developed “Ballistic Correction Model,” which accounts for materials' full distribution of phonon mean-free-paths. The model is validated against BTE simulations for a selection of base materials, obtaining excellent agreement. By furnishing a precise yet easy-to-use prediction of thermal transport in nanostructures, our work strives to accelerate the identification of materials for energy-conversion and thermal-management applications.
</description>
<pubDate>Tue, 27 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164816</guid>
<dc:date>2023-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Parametric decay instabilities driven by high power helicon waves in DIII-D</title>
<link>https://hdl.handle.net/1721.1/164815</link>
<description>Parametric decay instabilities driven by high power helicon waves in DIII-D
Porkolab, M; Pinsker, RI; DeGrandchamp, GH; Baek, SG; Van Compernolle, B; Denk, S; Petty, CC; Tang, SX; Thome, KE
High power helicon waves (whistler or very high harmonic fast lower hybrid waves) at a frequency of 476 MHz are being tested for efficient off-axis current drive on DIII-D with the goal of demonstrating profile control in AT plasmas [1-4]. In agreement with earlier theoretical predictions, strong Parametric Decay Instability (PDI) has been observed at injected RF power levels in the range of 0.05-0.5 MW with corresponding electric fields of 10-30 kV/m [5,6]. The dominant driver of the PDI is the E×B and the polarization drift velocity, which can drive ion cyclotron quasi-modes and lower hybrid (or IBW) sideband waves unstable [5,6]. Initial experimental results have been obtained with powers up to 0.3 MW, showing evidence of strong PDI measured with high-frequency one-turn magnetic probes located at both the outboard and the inboard wall at frequencies set by the usual selection rules [7,8]. Here we review the appropriate analytic formulation to predict such instabilities and present numerical evaluations of frequencies and growth rates relevant to DIII-D plasma parameters. We also assess the convective thresholds for the PDIs and compare them with experimental observations.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164815</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental footprints of a water-rich depletion layer in the Herschel–Bulkley pipe flow of solidifying polyelectrolytes</title>
<link>https://hdl.handle.net/1721.1/164814</link>
<description>Experimental footprints of a water-rich depletion layer in the Herschel–Bulkley pipe flow of solidifying polyelectrolytes
Nazari, B.; Moghimi, E.; Bousfield, D. W.
A fundamental understanding of the transition from fluid-like to gel-like behavior is critical for a range of applications including personal care, pharmaceuticals, food products, batteries, painting, biomaterials, and concrete. The pipe flow behavior of a Herschel–Bulkley fluid is examined by a combination of rheology, ultrasound imaging velocimetry, and pressure measurements together with modeling. The system is a solution of 0.50 wt. % polyelectrolytes of sulfated polysaccharides in water that solidifies on cooling. Fluids with different ionic strengths were pumped at various rates from a reservoir at 80 °C into a pipe submerged in a bath maintained at 20 °C. The fluid velocity, pressure drop ΔP, and temperature were monitored. The same quantities were extracted by solving continuity, energy, and momentum equations. Moreover, the modeling results demonstrate that the local pressure gradient along the pipe, dP/dx|_x, is related to the local yield stress near the pipe wall, τ_y,wall|_x, which explains the variations of dP/dx|_x along the pipe. Experimental results show much lower values for ΔP compared to those from modeling. This discrepancy is exacerbated at higher ionic strengths and smaller flow rates, where the fluid shows a higher degree of solidification. The tabulated experimental ΔP data against the solidification onset length L_onset (where the fluid is cool enough to solidify), along with the ultrasound imaging velocimetry, attribute these discrepancies between experiments and models to a depletion layer of ∼1 μm, reflecting the lubrication effects caused by the water layer at the wall.
</description>
<pubDate>Fri, 27 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164814</guid>
<dc:date>2023-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Radiation pressure of radio frequency waves on turbulence in edge plasmas</title>
<link>https://hdl.handle.net/1721.1/164813</link>
<description>Radiation pressure of radio frequency waves on turbulence in edge plasmas
Ram, Abhay K; Hizanidis, Kyriakos
The scattering of radio frequency (RF) waves – lower hybrid and helicon waves – by a single cylindrical filament, embedded in a background plasma, is studied using a full-wave analytical theory. While a filament can affect the propagation of RF waves, the radiation force exerted by the waves can influence the filament. The force on a filament is determined using the Maxwell stress tensor. The radiation force can either pull the filament towards the RF source or push it away. The radiation force, in the two frequency ranges, is large enough to impact the motion of a filament and could be measured experimentally. Consequently, it may be possible to modify the edge turbulence using RF waves.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164813</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Replies to Moran, Gallois, and Bar-On and Johnson</title>
<link>https://hdl.handle.net/1721.1/164812</link>
<description>Replies to Moran, Gallois, and Bar-On and Johnson
Byrne, Alex
I am very grateful to Dorit Bar-On, Drew Johnson, André Gallois, and Dick Moran for their thoughtful commentaries. Bar-On, Gallois, and Moran are discussed extensively in Transparency and Self-Knowledge (hereafter T&amp;SK), and their work has been an important source of inspiration for my own. In order to make my contribution to this symposium reasonably compact, I have not attempted to reply to every single point. (One especially notable omission is the alternative account of self-knowledge sketched by Bar-On and Johnson.) Instead, I have concentrated on the main objections.
</description>
<pubDate>Sun, 04 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164812</guid>
<dc:date>2026-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>The end of MAD? Technological innovation and the future of nuclear retaliatory capabilities</title>
<link>https://hdl.handle.net/1721.1/164811</link>
<description>The end of MAD? Technological innovation and the future of nuclear retaliatory capabilities
Glaser, Charles L.
This article motivates the special issue, explaining the new debate over whether emerging technologies – including small satellites, machine learning, cyber weapons, and quantum technologies – will enable major powers to undermine each other’s nuclear retaliatory capabilities. The first article analyzes the key relevant emerging technologies. Subsequent articles explore how emerging technologies will influence the vulnerability of mobile missiles, ballistic missile submarines, and nuclear command-and-control, and the effectiveness of missile defenses against intercontinental-range missiles. The final article explores China’s views on the requirements of nuclear deterrence. Overall, the articles suggest that U.S. prospects for achieving a damage-limitation capability are poor and declining.
</description>
<pubDate>Thu, 30 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164811</guid>
<dc:date>2025-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Prototyping longevity services: Tech-driven or human-assisted service?</title>
<link>https://hdl.handle.net/1721.1/164810</link>
<description>Prototyping longevity services: Tech-driven or human-assisted service?
Lee, Sheng-Hung; Coughlin, Joseph F; Yang, Maria
The study investigates the design of longevity services through an experimental comparison of tech-driven and human-assisted service encounters, focusing on six key features: learnability, efficiency, safety, trustworthiness, confidence, and satisfaction. The controlled experiment, which involved 12 gender-balanced participants from Boston, USA, employed four qualitative methods, including surveys, the Think-aloud technique, semi-structured interviews, and transcript analysis supported by computer-assisted qualitative data analysis software (CAQDAS) and its AI-empowered coding function. The study concluded with two insights: 1. Tech-driven services can improve safety, trust, confidence, and satisfaction; and 2. both service encounters are context-sensitive, shaped by participants’ demographics, personality, culture, and environmental factors. Although the small sample size limits the study’s generalizability, the participants’ stories and perceptions offered valuable insights into their implicit needs and subtle behaviors in learning, experiencing, and addressing sensitive, private, and vulnerable topics related to longevity planning.
</description>
<pubDate>Sun, 04 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164810</guid>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Iraq Petroleum Company’s Infrastructure of “Desert Control” during the British Mandate in the Middle East</title>
<link>https://hdl.handle.net/1721.1/164809</link>
<description>The Iraq Petroleum Company’s Infrastructure of “Desert Control” during the British Mandate in the Middle East
Freeman, Margaret
This article discusses the infrastructure of the Iraq Petroleum Company (IPC) in the interwar British Mandatory Middle East as belonging to a larger British imperial project for “desert control” through architecture. Britain’s so-called “desert control” was, more accurately, a programme for control over the pastoralist Bedouin tribespeople who were the primary inhabitants of the Mandatory territories’ desert zones. This article identifies the two pillars of Britain’s “desert control” strategy: the use of Bedouin police forces, and the architectural annexation and restriction of water resources from Bedouin tribes. It argues that Mandate Britain’s “desert control” programme was replicated and adapted by the IPC for its own needs to protect its commercial infrastructural investment, the Iraq–Mediterranean Pipeline, in the British Mandatory territories. It compares two building typologies, the Mandate’s “desert outposts” and the IPC pipeline’s pumping stations, as sites where the Bedouin were alternately welcomed into and excluded from imperial and commercial projects in the interest of controlling them.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164809</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surrogate modelling of surface roughness for asphalt pavements using artificial neural networks: a mechanistic-empirical approach</title>
<link>https://hdl.handle.net/1721.1/164808</link>
<description>Surrogate modelling of surface roughness for asphalt pavements using artificial neural networks: a mechanistic-empirical approach
Li, Haoran; AzariJafari, Hessam; Kirchain, Randolph; Santos, João; Khazanovich, Lev
Pavement surface smoothness (or roughness) is crucial for traffic safety, driving comfort, and fuel efficiency. Because the International Roughness Index (IRI) is a widely applied roughness indicator, accurate forecasting of the IRI and its deterioration is essential for the design, maintenance, and management of asphalt pavements. Previous studies have used field measurement data or AASHTOWare Pavement ME Design simulations to develop machine learning (ML) models that streamline IRI modelling. However, these models frequently lack the accuracy and robustness of the measurement data or high-fidelity computational simulations they are intended to surrogate. To address this issue, we employed a new adaptive sampling technique to generate an informative yet efficient pavement damage database from Pavement ME simulations. Utilising Artificial Neural Networks (ANNs), we engineered two types of surrogate ML models: (a) Model I, an ANN-based IRI predictive model, and (b) Model II, a hybrid model combining ANN-based predictions of rutting, fatigue damage, and thermal cracking with closed-form relationships between these indicators and IRI. Our findings show that Model II outperforms Model I in IRI modelling accuracy both globally and locally. Moreover, Model II matches the IRI simulations of Pavement ME while providing enhanced efficiency and adaptability to a broader spectrum of design considerations.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164808</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Honoring practices of community-based educators: lessons learned from the collaborative design of a creative mobile app</title>
<link>https://hdl.handle.net/1721.1/164807</link>
<description>Honoring practices of community-based educators: lessons learned from the collaborative design of a creative mobile app
Rusk, Natalie; Jain, Rupal; Martin, Caitlin K.; Roque, Ricarose; Freitas, João Adriano; Molaodi, Linford
This paper shares reflections and stories from a collaborative design process between the Lifelong Kindergarten group at the MIT Media Lab and a global network of community-based educators to develop a creative coding app called OctoStudio, which supports children and families to create and share interactive projects on mobile devices. The app design is grounded in practices that community-based educators who are primarily from the Global South have developed around strengths, needs, and interests of children and their communities, as well as constraints and affordances of local infrastructure. We use the lens of minimal computing – which focuses on community context and constraints in decisions about technology – to describe our collaborative work on OctoStudio. We describe trade-offs involved in the design decisions, and highlight insights from the process of collaboration to develop tools and practices that are more responsive and meaningful to communities who are often excluded from design decisions that impact them.
</description>
<pubDate>Fri, 29 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164807</guid>
<dc:date>2024-11-29T00:00:00Z</dc:date>
</item>
<item>
<title>Expected Constant Round Byzantine Broadcast under Dishonest Majority</title>
<link>https://hdl.handle.net/1721.1/164806</link>
<description>Expected Constant Round Byzantine Broadcast under Dishonest Majority
Wan, Jun; Xiao, Hanshen; Shi, Elaine; Devadas, Srinivas
Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it has long been known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes n. However, whether we can match the expected constant round complexity in the corrupt majority setting --- or more precisely, when f &gt; n/2, where f denotes the number of corrupt nodes --- remained unknown. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected O((n/(n-f))^2) rounds. Our results hold under a weakly adaptive adversary who cannot perform "after-the-fact removal" of messages already sent by a node before it becomes corrupt. We also assume trusted setup and the Decision Linear (DLIN) assumption in bilinear groups.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164806</guid>
</item>
<item>
<title>Minimum Plane Bichromatic Spanning Trees</title>
<link>https://hdl.handle.net/1721.1/164805</link>
<description>Minimum Plane Bichromatic Spanning Trees
Akitaya, Hugo; Biniaz, Ahmad; Demaine, Erik; Kleist, Linda; Stock, Frederick; Tóth, Csaba D.
For a set of red and blue points in the plane, a minimum bichromatic spanning tree (MinBST) is a shortest spanning tree of the points such that every edge has a red and a blue endpoint. A MinBST can be computed in O(n log n) time, where n is the number of points. In contrast to the standard Euclidean MST, which is always plane (noncrossing), a MinBST may have edges that cross each other. However, we prove that a MinBST is quasi-plane, that is, it does not contain three pairwise crossing edges, and we determine the maximum number of crossings. Moreover, we study the problem of finding a minimum plane bichromatic spanning tree (MinPBST), which is a shortest bichromatic spanning tree with pairwise noncrossing edges. This problem is known to be NP-hard. The previous best approximation algorithm, due to Borgelt et al. (2009), has a ratio of O(sqrt(n)). It is also known that the optimum solution can be computed in polynomial time in some special cases, for instance, when the points are in convex position, collinear, semi-collinear, or when one color class has constant size. We present an O(log n)-factor approximation algorithm for the general case.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164805</guid>
</item>
<item>
<title>Nested Dissection Meets IPMs: Planar Min-Cost Flow in Nearly-Linear Time</title>
<link>https://hdl.handle.net/1721.1/164804</link>
<description>Nested Dissection Meets IPMs: Planar Min-Cost Flow in Nearly-Linear Time
Dong, Sally; Gao, Yu; Goranci, Gramoz; Lee, Yin Tat; Sachdeva, Sushant; Peng, Richard; Ye, Guanghao
We present a nearly-linear time algorithm for finding a minimum-cost flow in planar graphs with polynomially bounded integer costs and capacities. The previous fastest algorithm for this problem is based on interior point methods (IPMs) and works for general sparse graphs in O(n^1.5 polylog n) time [Daitch-Spielman, STOC]. Intuitively, Ω(n^1.5) is a natural runtime barrier for IPM-based methods, since they require √n iterations, each routing a possibly-dense electrical flow. To break this barrier, we develop a new implicit representation for flows based on generalized nested dissection [Lipton-Rose-Tarjan, SINUM'79] and approximate Schur complements [Kyng-Sachdeva, FOCS]. This implicit representation permits us to design a data structure to route an electrical flow with sparse demands in roughly √n update time, resulting in a total runtime of O(n polylog n). Our results immediately extend to all families of separable graphs.
</description>
<pubDate>Sat, 26 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164804</guid>
<dc:date>2025-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Electric Vehicle Security and Privacy through Decentralized Identity Management</title>
<link>https://hdl.handle.net/1721.1/164803</link>
<description>Enhancing Electric Vehicle Security and Privacy through Decentralized Identity Management
Aydeger, Abdullah; Zeydan, Engin; Mangues-Bafalluy, Josep; Arslan, Suayb; Turk, Yekta
In the next decade, electric vehicles (EVs) are expected to contribute to reducing climate change and transforming road mobility significantly. However, the security and privacy of EV charging systems present considerable challenges that need to be addressed. This paper introduces a novel approach by integrating blockchain-based self-sovereign identity (SSI) to enhance the security and privacy of EV charging systems. By leveraging the decentralized and immutable nature of blockchain, the proposed SSI framework can ensure secure and private data exchanges between EVs, charging stations, and backend systems. This three-way integration addresses the vulnerabilities identified in existing EV charging methods, such as conductive, inductive, and battery swapping, and complies with cybersecurity regulations like UNECE R155. This paper provides a comprehensive analysis, practical case study, and evaluation of the security and privacy enhancements achieved through the proposed SSI framework, offering valuable insights for industry professionals and researchers. We have conducted extensive end-to-end testing to evaluate the performance of our blockchain-based SSI framework in the EV charging ecosystem, focusing on identity verification, credential management, and service orchestration. The results show that the system enables fast wallet creation, efficient metadata retrieval, and low-latency service deployment, ensuring seamless identity management and service orchestration.
</description>
<pubDate>Fri, 12 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164803</guid>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Local Distributed Rounding: Generalized to MIS, Matching, Set Cover, and Beyond</title>
<link>https://hdl.handle.net/1721.1/164802</link>
<description>Local Distributed Rounding: Generalized to MIS, Matching, Set Cover, and Beyond
Faour, Salwa; Ghaffari, Mohsen; Grunau, Christoph; Kuhn, Fabian; Rozhoň, Václav
We develop a general deterministic distributed method for locally rounding fractional solutions of graph problems for which the analysis can be broken down into analyzing pairs of vertices. Roughly speaking, the method can transform fractional/probabilistic label assignments of the vertices into integral/deterministic label assignments for the vertices, while approximately preserving a potential function that is a linear combination of functions, each of which depends on at most two vertices (subject to some conditions usually satisfied in pairwise analyses). The method unifies and significantly generalizes prior work on deterministic local rounding techniques [Ghaffari, Kuhn FOCS'21; Harris FOCS'19; Fischer, Ghaffari, Kuhn FOCS'17; Fischer DISC] to obtain polylogarithmic-time deterministic distributed solutions for combinatorial graph problems. Our general rounding result enables us to locally and efficiently derandomize a range of distributed algorithms for local graph problems, including maximal independent set (MIS), maximum-weight independent set approximation, and minimum-cost set cover approximation.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164802</guid>
</item>
<item>
<title>Feasibility Study on Heat Pipes for Radio Frequency Antennas</title>
<link>https://hdl.handle.net/1721.1/164801</link>
<description>Feasibility Study on Heat Pipes for Radio Frequency Antennas
Jung, Minuk; Watterson, Amy; Wallace, Gregory M
The applicability of a heat pipe is investigated for the cooling of radio frequency antennas in fusion reactors operating at high temperatures. A heat pipe is a passive cooling device that transfers a large amount of heat through the liquid-vapor phase change and pumps the working fluid by the surface tension of the wick structure without moving parts. As the heat pipe is expected to operate near 1000 K, refractory metals or ceramics should be used for wall materials, and liquid metals are primarily considered as the working fluid. However, liquid metals are electrically conductive, and the strong magnetic field perpendicular to the flow direction imposes significant magnetohydrodynamic (MHD) flow resistance in addition to viscous friction, which impairs heat transfer performance. Since a strong magnetic field is inevitable in magnetic confinement fusion reactors, materials with low electrical conductivity should be applied to wall coatings to reduce the MHD effect. Heat flux limitations at a magnetic field of 10 T and a condenser coolant temperature of 773 K are estimated using COMSOL Multiphysics, which can capture the fully developed MHD wick flow, laminar/turbulent vapor flow, and heat transfer simultaneously. For simplicity, the generic heat pipe geometry of a straight horizontal cylinder with a length of 2 ft (0.6096 m) is employed. Optimal geometrical parameters are evaluated to meet radial evaporator/condenser heat fluxes greater than 0.1 MW/m², even under a strong MHD effect.
</description>
<pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164801</guid>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Regional incidence and persistence of high-growth firms: testing ideas from the entrepreneurial ecosystems literature</title>
<link>https://hdl.handle.net/1721.1/164800</link>
<description>Regional incidence and persistence of high-growth firms: testing ideas from the entrepreneurial ecosystems literature
Coad, Alex; Domnick, Clemens; Santoleri, Pietro; Srhoj, Stjepan
Policymakers and scholars often assume that a higher incidence of high-growth firms (HGFs) is synonymous with vibrant regional economic dynamics, and that HGF shares are persistent over time as entrepreneurial ecosystems (EEs) have slowly changing features. In this paper we test these hypotheses, which are deeply rooted in the EE literature. Results do not provide strong support for the hypothesis that more developed regions feature higher HGF shares. We do find evidence consistent with HGF shares displaying persistency over time. However, we show that more developed regions do not have higher persistence in their HGF shares, and that the strength in persistence does not increase across the HGFs distribution, which does not support path-dependency as the main mechanism behind the observed persistence. Overall, we call for a more nuanced interpretation of both regional HGF shares and the EEs literature.
</description>
<pubDate>Wed, 08 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164800</guid>
<dc:date>2025-01-08T00:00:00Z</dc:date>
</item>
<item>
<title>Knives out: response to critics</title>
<link>https://hdl.handle.net/1721.1/164799</link>
<description>Knives out: response to critics
Khoo, Justin
Writing a book can feel like a solitary endeavor. You labor for (in my case) years, sometimes talking about parts of the project with others, but mostly toiling alone to work out the consequences of commitments you made months and years prior. I'm grateful for the opportunity to engage with three brilliant interlocutors about these ideas, which for so long seemed to matter to no one besides myself (and maybe my publisher).
</description>
<pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164799</guid>
<dc:date>2025-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Ontological Alignment: Coordinating Student Ideas with the Representational System of a Computational Modeling Unit for Science Learning</title>
<link>https://hdl.handle.net/1721.1/164798</link>
<description>Toward Ontological Alignment: Coordinating Student Ideas with the Representational System of a Computational Modeling Unit for Science Learning
Wagh, Aditi; Rosenbaum, Leah F.; Fuhrmann, Tamar; Eloy, Adelmo; Blikstein, Paulo; Wilkerson, Michelle
Computational modeling tools present unique opportunities and challenges for student learning. Each tool has a representational system that impacts the kinds of explorations students engage in. Inquiry aligned with a tool’s representational system can support more productive engagement toward target learning goals. However, little research has examined how teachers can make visible the ways students’ ideas about a phenomenon can be expressed and explored within a tool’s representational system. In this paper, we elaborate on the construct of ontological alignment—that is, identifying and leveraging points of resonance between students’ existing ideas and the representational system of a tool. Using interaction analysis, we identify alignment practices adopted by a science teacher and her students in a computational agent-based modeling unit. Specifically, we describe three practices: (1) Elevating student ideas relevant to the tool’s representational system; (2) Exploring and testing links between students’ conceptual and computational models; and (3) Drawing on evidence resonant with the tool’s representational system to differentiate between theories. Finally, we discuss the pedagogical value of ontological alignment as a way to leverage students’ ideas in alignment with a tool’s representational system and suggest the presented practices as exemplary ways to support students’ computational modeling for science learning.
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164798</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Stand Up and Split. Desiring Desertion in Jean Giono and Emmanuelle Lambert</title>
<link>https://hdl.handle.net/1721.1/164797</link>
<description>Stand Up and Split. Desiring Desertion in Jean Giono and Emmanuelle Lambert
Perreau, Bruno
To face the powers that be, contemporary French writer Virginie Despentes proposes a straightforward solution: “stand up and split!” But where to go and with whom? How do we stop the proliferation of contested norms if we clear the decks? In a context of ecological crisis, desiring desertion is not rare even if we have only one world to inhabit. This article analyzes the desire to desert from two texts: Le Déserteur et autres récits (1966 [1973]) by Jean Giono and La Désertion (2018a) by Emmanuelle Lambert. It demonstrates that desertion does not make a clean sweep of the past but rather accepts the desert at the heart of existence. That is, both presence and disappearance.
</description>
<pubDate>Sat, 19 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164797</guid>
<dc:date>2024-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>Fuel Behavior Implications of Reactor Design Choices in Pressurized Water SMRs</title>
<link>https://hdl.handle.net/1721.1/164796</link>
<description>Fuel Behavior Implications of Reactor Design Choices in Pressurized Water SMRs
Halimi, Assil; Shirvan, Koroush
Small pressurized water reactors can feature boron-free operation, natural circulation mode, reduced-height assemblies, and/or long refueling cycles. This paper attempts to explore core design optimization for each of these design evolutions. In consequence, five core design layouts are developed incorporating boron-free operation with continuous control rod insertion, natural circulation with low burnup/low power density design, natural circulation with high burnup/low power density design, forced circulation with standard core power density design, and forced circulation with high power density design. These cores’ performance is compared to a standard four-loop pressurized water reactor. The design process aims to improve the fuel cycle cost under safety constraints through core design optimization using the CASMO4E/SIMULATE3 reactor physics codes and the FRAPCON4.1 fuel performance assessment tool. Core modeling assumes standard 17×17 PWR fuel assemblies loaded with low enriched uranium up to 5 wt% or low enriched uranium plus (i.e. below 10 wt% enrichment) pellets with gadolinium oxide as the burnable poison. Satisfactory core and fuel performances are obtained for all the designed cores under steady state and considered overpower transients. For low power density operation, long cycle lengths are achieved reaching 2.5-year and 5-year cycles, and peak rod-average burnup is pushed to 83 MWd/kgU. Other cycle lengths are maintained at 18 months. Boron-free operation exhibits the ability to achieve longer cycle lengths at the cost of higher peaking factors leading to high local power and fuel temperatures, which prevents sizable power uprates and is deemed uneconomical. Fuel assembly height reduction allows coolant velocity retrofit, which enables higher core power density without violating the structural integrity of the fuel assembly. As a result, a core power density of 123 kW/L is reached where total cladding hoop strain becomes the limiting parameter.
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164796</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Shades of authoritarian digital sovereignty: divergences in Russian and Chinese data localisation regimes</title>
<link>https://hdl.handle.net/1721.1/164795</link>
<description>Shades of authoritarian digital sovereignty: divergences in Russian and Chinese data localisation regimes
Khasanova, Liliya; Tai, Katharin
The concept of sovereignty is now referred to in cyberspace-related policy by a range of governments, both authoritarian and democratic. At the same time, the most prominent proponents of state – or sovereignty-centric models of internet governance are Russia and China, whose positions are often characterised as a shared ‘Sino-Russian’ model. This paper subjects this idea of a shared Sino-Russian approach to empirical scrutiny by conducting a comparative analysis of rules, regulations and policies on data localisation in both countries. By delimiting the research question to regulations on data localisation and cross-border data transfers in both countries, we identify an important set of similarities and differences between the Russian and Chinese approaches. They share some features associated with authoritarian regimes, such as uncertainty around the selective enforcement of broadly formulated rules and a centralised assessment of outbound data transfers. However, we also find significant differences in the level of institutional centralisation, degrees of responsiveness within the policymaking process, and economic logics driving data localisation and cross-border transfer regulations. Based on these findings, we argue that despite a perception that Russia and China adhere to a similar model of authoritarian digital sovereignty, there are significant disparities in their data localisation regimes.
</description>
<pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164795</guid>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Race, profit, and algorithms: Neighborhood-level analysis of iBuyers’ profit margin</title>
<link>https://hdl.handle.net/1721.1/164794</link>
<description>Race, profit, and algorithms: Neighborhood-level analysis of iBuyers’ profit margin
So, Wonyoung
iBuyers are firms that use automated valuation models (AVMs), streamline home buying processes, and provide all-cash offers to purchase homes. Although the previous literature has explored the roles and limitations of iBuyers in the housing market, empirical research on the racial implications of these algorithmic home buying processes remains understudied. Using a spatial lag model, this study shows the spatial clustering of iBuyer profit margins, that iBuyers gain more profits when they resell to individuals than institutions, and that some iBuyers have a statistically significant correlation between their profit margins and the proportion of marginalized racial groups within a census tract, while controlling for individual housing characteristics, neighborhood housing quality and demand, and neighborhood amenities and socioeconomic factors. These findings suggest that the more adeptly iBuyers can forecast housing values, the greater the potential to maximize profits from homeowners in communities of color. Consequently, this research contributes to the understanding of how technological mechanisms operate within a purportedly race-neutral framework and advocates for the development and deployment of algorithmic systems guided by the principles of antisubordination, rather than relying solely on notions of “fairness” and anticlassification.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164794</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Powering Through the Turn: Finding Time for Concept Exploration Before Industry Stagnation</title>
<link>https://hdl.handle.net/1721.1/164793</link>
<description>Powering Through the Turn: Finding Time for Concept Exploration Before Industry Stagnation
Noble, Connery; Cameron, Bruce G
This study examines how the tension between exploration and exploitation affects early-stage development within the engineering teams of large corporations. Using survey data collected from over 900 system engineers and managers, it was observed that exploration decreased as an organization’s market growth declined, but dire market projections prompted a refocus on exploration. In addition, engineers routinely desire more concept exploration time than they perceive that they have available. The authors argue that engineering teams should more intentionally consider their innovation strategy, and that companies with stagnant market growth should invest in concept exploration before they get to a period of market decline.
</description>
<pubDate>Sat, 15 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164793</guid>
<dc:date>2025-03-15T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular dynamics simulations and structural bioinformatics of bacterial integral alpha-helical membrane enzymes and their AlphaFold2-predicted water-soluble QTY analogues</title>
<link>https://hdl.handle.net/1721.1/164791</link>
<description>Molecular dynamics simulations and structural bioinformatics of bacterial integral alpha-helical membrane enzymes and their AlphaFold2-predicted water-soluble QTY analogues
Sajeev-Sheeja, Akash; Karagöl, Alper; Karagöl, Taner; Zhang, Shuguang
The study of integral membrane proteins has long been challenging because of their poor solubility in aqueous environments. We previously used the QTY code to enhance hydrophilicity in alpha-helices, beta-barrels, and monoclonal antibodies by systematically pairwise replacing the hydrophobic amino acids L (leucine) with Q (glutamine), V (valine)/I (isoleucine) with T (threonine), and F (phenylalanine) with Y (tyrosine). The superposed AlphaFold2-predicted structures of alpha-helical transmembrane enzyme variants with &gt;41% amino acid substitutions displayed remarkable similarity to native structures (RMSD 0.3–0.7 Å). We conducted molecular dynamics (MD) simulations, which revealed that, even in the absence of a lipid bilayer, the QTY-modified enzymes retained stable dynamics comparable to their membrane-bound forms. Root mean square fluctuation (RMSF) values remained below 2 Å across the transmembrane and core regions, and residue-wise root mean square deviation (RMSD) values were minimal (&lt;3 Å), indicating that the structural integrity of the protein core was largely preserved. These results suggest that the QTY variants, designed for soluble environments, effectively mimic the stability and conformational rigidity of natural membrane-bound enzymes. Our findings show that the QTY code is a simple method for designing water-soluble membrane protein enzymes in different biological scenarios, and it may encourage further experiments to validate our structural bioinformatics research.
</description>
<pubDate>Mon, 29 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164791</guid>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>City of ‘social saints’: the role of place in driving impact entrepreneurship in Turin, Italy</title>
<link>https://hdl.handle.net/1721.1/164790</link>
<description>City of ‘social saints’: the role of place in driving impact entrepreneurship in Turin, Italy
Burke, Mary Kathleen; Sydow, Alisa; Torchia, Daniel; Corazza, Laura
This paper theorizes impact entrepreneurship (IE) in relation to place by examining dynamics at the individual, community, and organizational levels. While existing IE literature emphasizes entrepreneurship aimed at addressing grand challenges, it often adopts an aggregate view that overlooks how locally embedded entrepreneurs access and mobilize social and economic resources. We introduce a novel, multidimensional framework to show how sense of place, community embeddedness and IE interrelate to shape approaches to current social/environmental challenges. Adopting a qualitative approach, this paper investigates how different actors in Turin, Italy, contribute to IE through building on a legacy of social sector institutions. We find that individuals identifying with a place-based vocation of social impact find communities with a shared volition to work together and across organizations. We contribute to understanding how individuals’ senses of place can be leveraged into wider community efforts to support IE in the region. The paper advances the IE concept to account for the individual perspectives influencing local organizing practices and visions for IE rooted in place.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164790</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Plane Delivery: Towards a Physical Grammar for Large-Scale Digital Fabrication</title>
<link>https://hdl.handle.net/1721.1/164789</link>
<description>Plane Delivery: Towards a Physical Grammar for Large-Scale Digital Fabrication
Sass, Lawrence
There will come a day when computers and robots will participate regularly in designing, fabricating, and delivering homes as customized kits of parts (Sass 2008). They will not replace builders. Instead, one possible future is where computers and robots operate as intelligent assistants, discovering, reasoning, and inferring the best solutions using large language models (LLMs). This language will be vector-based on points, lines, and planes of the type Stiny described (Stiny 2006). A standard design and builder language is a first step towards automation. The proposed system is a Lego-style approach to physical house production, used to manage costs, enhance design variety, improve design quality, and, most importantly, facilitate building.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164789</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-statecraft and industrial strategy: semiconductor development in Arizona</title>
<link>https://hdl.handle.net/1721.1/164788</link>
<description>Techno-statecraft and industrial strategy: semiconductor development in Arizona
Kollar, Justin
The resurgence of U.S. industrial strategy discourse is not a centralised return of the state but a territorially fragmented form of techno-statecraft. This article analyzes Arizona's semiconductor expansion as a case in which subnational actors – agencies, utilities, universities, and developers – mobilise infrastructure, land-use policy, and regulatory coordination to attract global capital. Rather than a coherent national plan, Arizona's strategy reflects speculative governance oriented toward risk absorption and territorial readiness. The article situates this conjuncture within longer histories of militarised growth and infrastructural overbuild, contributing to debates on state capitalism, industrial strategy, and the spatial politics of techno-industrial transformation.
</description>
<pubDate>Tue, 27 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164788</guid>
<dc:date>2025-05-27T00:00:00Z</dc:date>
</item>
<item>
<title>Ideology, Equity, and Structure: Comments on Tzu-wei Hung’s ‘Equity and Marxist Buddhism’</title>
<link>https://hdl.handle.net/1721.1/164787</link>
<description>Ideology, Equity, and Structure: Comments on Tzu-wei Hung’s ‘Equity and Marxist Buddhism’
Haslanger, Sally
In his essay, ‘Equity and Marxist Buddhism’, Tzu-wei Hung argues that Marxist Buddhism brings a commitment to social justice together with a distinctive form of virtue theory. In my commentary, I raise several questions from a Marxian perspective: (1) Might it be argued that Marxist Buddhism is (in the critical sense) ideological (similar to religion) because the spiritual goal of ‘transcendence’ distracts us from the need to fight for emancipation? (2) Can justice as equity be achieved by promoting individual altruism? (3) Aren’t both mainstream accounts of justice and Marxist Buddhism aspirational and so need to rely on non-ideal theory to achieve justice?
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164787</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A data-driven and context-aware approach for demand forecasting in the beverage industry</title>
<link>https://hdl.handle.net/1721.1/164786</link>
<description>A data-driven and context-aware approach for demand forecasting in the beverage industry
Ma, Benedict Jun; Jackson, Ilya; Huang, Maggie; Villegas, Sebastian; Macias-Aguayo, Jaime
Accurate demand forecasting is essential for logistics and supply chain management as it enables efficient inventory planning, reduces operational costs, and ensures high service levels across the network. However, in practice, diverse demand patterns of items make this task challenging, and a one-size-fits-all forecasting approach is inadequate. This paper proposes a data-driven and context-aware forecasting framework and tests it by using both endogenous data from a large private-label beverage manufacturer and exogenous features (such as holidays and temperature). Our method begins by classifying SKUs based on demand volume, volatility, and intermittency, and then refining the derived clusters by taking volume distribution into account. In total, we obtain four distinct clusters: (i) stable and high volume, (ii) stable with low volume, (iii) erratic and intermittent, and (iv) lumpy. To explore the appropriate forecasting models for different demand patterns, we employ statistical models (exponential smoothing, ARIMA, and Croston), machine learning models (XGBoost), deep learning models (TiDE and N-BEATS), and even qualitative approaches such as collaborative planning, forecasting, and replenishment (CPFR). Our experimental results suggest which forecasting models are recommended for each demand pattern, and insightful implications are provided for managers.
</description>
<pubDate>Fri, 10 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164786</guid>
<dc:date>2025-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks</title>
<link>https://hdl.handle.net/1721.1/164785</link>
<description>Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks
Dai, Yan; Huang, Longbo
Stochastic Network Optimization (SNO) concerns scheduling in stochastic queueing systems and has been widely studied in network theory. Classical SNO algorithms require network conditions to be stationary w.r.t. time, which fails to capture the non-stationary components in increasingly many real-world scenarios. Moreover, most existing algorithms in network optimization assume perfect knowledge of network conditions before decision-making, which again rules out applications where network conditions are unpredictable.&#13;
Motivated by these issues, this paper considers Adversarial Network Optimization (ANO) under bandit feedback. Specifically, we consider the task of i) maximizing some unknown and time-varying utility function associated with the scheduler's actions, where ii) the underlying network topology is a non-stationary multi-hop network whose conditions change arbitrarily with time, and iii) only bandit feedback (the effect of actually deployed actions) is revealed after decision-making. We propose the UMO2 algorithm, which does not require any pre-decision knowledge or counterfactual feedback, ensures network stability, and also matches the utility maximization performance of any "mildly varying" reference policy up to a polynomially decaying gap. To our knowledge, no previous algorithm can handle multi-hop networks or achieve utility maximization guarantees in ANO problems with bandit feedback, whereas ours is able to do both.&#13;
Technically, our method builds upon a novel integration of online learning techniques into the Lyapunov drift-plus-penalty method. Specifically, we propose meticulous analytical techniques to jointly balance online learning and Lyapunov arguments, which we use to handle the complex inter-dependency among queues in multi-hop networks. To tackle the learning obstacles due to potentially unbounded queue sizes and negative queue differences, we design a new online linear optimization algorithm that automatically adapts to the unknown (potentially negative) loss magnitudes. Finally, we also propose a bandit convex optimization algorithm with novel queue-dependent learning rate scheduling that suits drastically varying queue lengths in utility maximization. Our new insights and techniques in online learning can also be of independent interest.
SIGMETRICS Abstracts ’25, Stony Brook, NY, USA
</description>
<pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164785</guid>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents</title>
<link>https://hdl.handle.net/1721.1/164784</link>
<description>IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents
Mohanty, Shrestha; Arabzadeh, Negar; Tupini, Andrea; Sun, Yuxuan; Skrynnik, Alexey; Zholus, Artem; Côté, Marc-Alexandre; Kiseleva, Julia
Seamless interaction between AI agents and humans using natural language remains a key goal in AI research. This paper addresses the challenges of developing interactive agents capable of understanding and executing grounded natural language instructions through the IGLU competition. Despite advancements, challenges such as a scarcity of appropriate datasets and the need for effective evaluation platforms persist. We introduce a scalable data collection tool for gathering interactive grounded language instructions within a Minecraft-like environment, resulting in a multi-modal dataset with around 9,000 utterances and over 1,000 clarification questions. Additionally, we present a human-in-the-loop interactive evaluation platform for qualitative analysis and comparison of agent performance through multi-turn communication with human annotators. We offer these assets, referred to as IDAT (IGLU Dataset And Toolkit), to the community, with the aim of advancing the development of intelligent, interactive AI agents and providing essential resources for further research.
SIGIR ’25, Padua, Italy
</description>
<pubDate>Sun, 13 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164784</guid>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>A Theory to Estimate, Bound, and Manage Systemic Cyber-Risk</title>
<link>https://hdl.handle.net/1721.1/164783</link>
<description>A Theory to Estimate, Bound, and Manage Systemic Cyber-Risk
Pal, Ranjan; Duan, Konnie; Sequeira, Rohan
The market to manage critical infrastructure cyber-risks using cyber insurance (CI) has been growing steadily (but not fast enough), as the market remains skeptical about the extent of the economic and societal impact of systemic risk across networked supply chains in interdependent IT-driven enterprises. To demystify this skepticism, we first study in this paper the role of (a) the statistical nature of multiple enterprise cyber-risks contributing to aggregate supply chain risk, and (b) the graph structure of the underlying enterprise supply chain network, in the statistical spread of aggregate cyber-risk. We provide statistical tail bounds on the aggregate cyber-risk that a risk-managing firm, such as a cyber insurer, is exposed to in a supply chain. Subsequently, we study the problem of aggregate cyber-risk management by cyber re-insurance firms via portfolio design to optimally diversify aggregate/systemic cyber-risk sourced from multiple CIs insuring enterprises on a supply chain. We propose the first mathematical framework for re-insurers to test the operational sustainability of systemic cyber-risk diversification portfolios with respect to the standard Value-at-Risk (VaR) metric for general aggregate cyber-risk distributions. We also propose a statistical copula methodology to make systemic cyber-risk portfolio diversification sustainable for re-insurers in scenarios where the sustainability test fails. We validate our theory via Monte Carlo simulations.
SIGSIM-PADS ’25, Santa Fe, NM, USA
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164783</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Incentive Allocation for City-Scale Deep Decarbonization</title>
<link>https://hdl.handle.net/1721.1/164781</link>
<description>Dynamic Incentive Allocation for City-Scale Deep Decarbonization
Sitaraman, Anupama; Lechowicz, Adam; Bashir, Noman; Liu, Xutong; Hajiesmaili, Mohammad; Shenoy, Prashant
Greenhouse gas emissions from the residential sector represent a large fraction of global emissions and must be significantly curtailed to achieve ambitious climate goals. To stimulate the adoption of decarbonization technologies such as rooftop PV and heat pumps, governments and utilities have designed incentives to encourage adoption. However, studies have shown that many of these incentives are inefficient, since a substantial fraction of spending does not actually promote adoption. Further, these incentives are not equitably distributed across socioeconomic groups. In this paper, we present a novel data-driven approach that adopts a holistic, emissions-based, and city-scale perspective on decarbonization.&#13;
We propose an optimization model that dynamically allocates a total incentive budget to households to directly maximize the resultant carbon emissions reduction; this is in contrast to prior work, which focuses on metrics such as the number of new installations. We leverage techniques from the multi-armed bandit problem to estimate human factors, such as a household's willingness to adopt new technologies given a certain incentive. We apply our proposed dynamic incentive framework to a city in the Northeast U.S., using real household energy data, grid carbon intensity data, and future price scenarios. We compare our learning-based technique to two baselines: a "status-quo" baseline using incentives offered by a state and utility, and a simple heuristic baseline. Our learning-based technique significantly outperforms both, achieving up to 37.88% higher carbon reductions than the status-quo baseline and up to 28.76% higher carbon reductions than the heuristic baseline. Additionally, our incentive allocation approach achieves significant carbon reductions across a broad set of environments, with varying electricity and gas prices and varying grid carbon intensities. Finally, we show that our framework can accommodate equity-aware constraints to preserve an equitable allocation of incentives across socioeconomic groups while achieving 83.34% of the carbon reductions of the optimal solution on average.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164781</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>The Economics of Large Language Models: Token Allocation, Fine-Tuning, and Optimal Pricing</title>
<link>https://hdl.handle.net/1721.1/164780</link>
<description>The Economics of Large Language Models: Token Allocation, Fine-Tuning, and Optimal Pricing
Bergemann, Dirk; Bonatti, Alessandro; Smolin, Alex
We develop an economic framework to analyze the optimal pricing and product design of Large Language Models (LLMs). Our framework captures several key features of LLMs: variable operational costs of processing input and output tokens; the ability to customize models through fine-tuning; and high-dimensional user heterogeneity in terms of task requirements and error sensitivity. In our model, a monopolistic seller offers multiple versions of LLMs through a menu of products. The optimal pricing structure depends on whether token allocation across tasks is contractible and whether users face scale constraints.&#13;
When it is possible to contract on the entire assignment of tokens to tasks, the seller's problem ("Token Allocations") is an infinite-dimensional screening problem, which is well known to be difficult. We are nonetheless able to make progress in two important classes of environments: a binary environment, and two-dimensional value-scale heterogeneity, in which case users with similar aggregate value-scale characteristics choose similar levels of fine-tuning and token consumption. When only the total number of tokens is contractible ("Token Packages"), we leverage the tractability of a constant elasticity of substitution framework to drastically simplify the problem: the buyer's type (a function mapping each task to a value of precision) is an index. This index for the value of precision allows the seller to solve a one-dimensional screening problem. The optimal mechanism can be implemented through menus of two-part tariffs, with higher markups for more intensive users. Our results rationalize observed industry practices such as tiered pricing based on model customization and usage levels.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164780</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Alternates, Assemble! Selecting Optimal Alternates for Citizens’ Assemblies</title>
<link>https://hdl.handle.net/1721.1/164779</link>
<description>Alternates, Assemble! Selecting Optimal Alternates for Citizens’ Assemblies
Assos, Angelos; Baharav, Carmel; Flanigan, Bailey; Procaccia, Ariel
Citizens' assemblies are an increasingly influential form of deliberative democracy, where randomly selected people discuss policy questions. The legitimacy of these assemblies hinges on their representation of the broader population, but participant dropout often leads to an unbalanced composition. In practice, dropouts are replaced by preselected alternates, but existing methods do not address how to choose these alternates. To address this gap, we introduce an optimization framework for alternate selection. Our algorithmic approach, which leverages learning-theoretic machinery, estimates dropout probabilities using historical data and selects alternates to minimize expected misrepresentation. Our theoretical bounds provide guarantees on sample complexity (with implications for computational efficiency) and on loss due to dropout probability mis-estimation. Empirical evaluation using real-world data demonstrates that, compared to the status quo, our method significantly improves representation while requiring fewer alternates.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164779</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Does Firm Size Influence the Collection of Sensitive Data?: A Study of Child-Orientated Apps</title>
<link>https://hdl.handle.net/1721.1/164778</link>
<description>Does Firm Size Influence the Collection of Sensitive Data?: A Study of Child-Orientated Apps
Cecere, Grazia; Tucker, Catherine; Lefrere, Vincent
How does firm size affect the privacy protections offered to customers? On the one hand, larger firms may use their size to amass more data. On the other hand, smaller firms may be less careful in their data protection practices because they have a different perception of risk. Using data from the Google Play Store over a three-year period, we explore this empirical question in the U.S. children's app market. Our findings indicate that larger app developers consistently implement stronger privacy protections, requesting less sensitive data than smaller developers. These results hold across empirical approaches, including instrumental variables and propensity-score matching. Additionally, our analysis shows that mergers between developers and sudden increases in the size of a product's user base are associated with reduced data collection. We show that newly created and updated apps produced by large developers collect less data than existing apps. Our findings indicate a trend toward standardized privacy practices across different national regulatory regimes. This research highlights the potential for growth-driven improvements in data privacy practices among app developers, regardless of their regulatory context.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164778</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Inertial Coordination Games</title>
<link>https://hdl.handle.net/1721.1/164777</link>
<description>Inertial Coordination Games
Koh, Andrew; Li, Ricky; Uzui, Kei
Coordination lies at the heart of many economic phenomena. A well-known example is currency crises, in which traders decide whether to launch a speculative attack. On one hand, shocks to the currency's fundamentals can propagate: as more traders attack, the central bank's foreign reserves are depleted, which in turn encourages further attacks as traders seek to exploit a weakening currency. On the other hand, shocks can also fizzle out: traders may quickly learn that the central bank's balance sheet is strong, causing pessimism to dissipate and attacks to subside. When do shocks propagate, and when do they fizzle out? In particular, how do these outcomes depend on the speed at which traders learn about the fundamental?&#13;
Motivated by these questions, we propose a model of inertial coordination games—dynamic coordination games where players repeatedly decide whether to take a risky action. The payoff from this risky action depends on (i) a persistent fundamental; and (ii) an endogenous component that depends on others' past play. Players receive private signals about the persistent fundamental over time and form beliefs about the current state. Notably, the current state depends on past play, which in turn depends on past beliefs about play yet farther back into the past. Thus, expectations about histories shape behavior in the present which, in turn, drives the evolution of future states and future play.&#13;
Our main result develops a tight connection between the speed of learning and limit aggregate play: the risk-dominant action is played in the limit if and only if posterior precisions grow sub-quadratically. This has sharp implications for the long-run propagation of shocks. With slow (sub-quadratic) learning, limit play exhibits history independence: initial shocks have no lasting effect, and limit play is determined solely by fundamentals. By contrast, with fast (super-quadratic) learning, limit play is history dependent: initial shocks can be self-fulfilling, and whether they propagate depends jointly on fundamentals, the size of the shock, and the speed of learning. Our results offer a novel perspective on whether 'history' or 'expectations' shape long-run coordination outcomes: in our model, expectations about histories are what matter for whether self-fulfilling spirals occur.&#13;
Finally, we show that the speed of learning also shapes the path of play, focusing on the case of sub-quadratic learning. When signals are precise, aggregate play exhibits a sudden transition from nearly all players choosing the non-risk-dominant action to nearly all players choosing the risk-dominant action. In contrast, when signals are noisy, the transition is gradual, with the share of players choosing the risk-dominant action increasing gradually over time. This suggests that "spikes" in aggregate behavior (such as a sudden and massive sell-off of a currency) can be consistent with transition to limit equilibrium play, and need not indicate an "equilibrium shift."
EC ’25, July 7–10, 2025, Stanford, CA, USA
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164777</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Galley: Modern Query Optimization for Sparse Tensor Programs</title>
<link>https://hdl.handle.net/1721.1/164776</link>
<description>Galley: Modern Query Optimization for Sparse Tensor Programs
Deeds, Kyle; Ahrens, Willow; Balazinska, Magdalena; Suciu, Dan
The tensor programming abstraction is a foundational paradigm which allows users to write high performance programs via a high-level imperative interface. Recent work on sparse tensor compilers has extended this paradigm to sparse tensors (i.e., tensors where most entries are not explicitly represented). With these systems, users define the semantics of the program and the algorithmic decisions in a concise language that can be compiled to efficient low-level code. However, these systems still require users to make complex decisions about program structure and memory layouts to write efficient programs.&#13;
This work presents Galley, a system for declarative tensor programming that allows users to write efficient tensor programs without making complex algorithmic decisions. Galley is the first system to perform cost-based lowering of sparse tensor algebra to the imperative language of sparse tensor compilers, and the first to optimize arbitrary operators beyond Σ and *. First, it decomposes the input program into a sequence of aggregation steps through a novel extension of the FAQ framework. Second, Galley optimizes and converts each aggregation step to a concrete program, which is compiled and executed with a sparse tensor compiler. We show that Galley produces programs that are 1-300x faster than competing methods for machine learning over joins and 5-20x faster than a state-of-the-art relational database for subgraph counting workloads, with minimal optimization overhead.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164776</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Virtualizing Cloud Data Infrastructures with BRAD</title>
<link>https://hdl.handle.net/1721.1/164775</link>
<description>Virtualizing Cloud Data Infrastructures with BRAD
Yu, Geoffrey; Wu, Ziniu; Kossmann, Ferdi; Li, Tianyu; Markakis, Markos; Ngom, Amadou; Zhang, Sophie; Kraska, Tim; Madden, Samuel
Organizations usually manage their data using multiple specialized cloud database engines (e.g., Aurora, BigQuery). However, designing and managing multi-engine infrastructures is hard; there can be many designs, each with different performance and costs. Changing the design afterwards (e.g., due to growth) is even more challenging, since application code usually ends up tightly coupled to the engines. We propose data infrastructure virtualization. The key idea is to declare a set of virtual database engines (VDBEs), which specify an engine's application-facing properties (e.g., query interface, performance) and its tables, but do not prescribe a concrete engine. An automated planner then decides how to best realize the VDBEs on physical engines based on the workload. Clients connect to VDBE endpoints and are oblivious to the underlying physical engines, allowing for seamless infrastructure changes. We implemented VDBEs and an automated planner in BRAD: the first data infrastructure virtualization runtime. Our demo will showcase VDBEs and BRAD's automated planner under different workloads.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164775</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>An Efficient Subcritical Multiplication Mode for Monte Carlo Solvers</title>
<link>https://hdl.handle.net/1721.1/164774</link>
<description>An Efficient Subcritical Multiplication Mode for Monte Carlo Solvers
Forget, Benoit
This paper presents an efficient Monte Carlo mode for simulating subcritical systems with external sources. While solving these systems as a fixed-source problem is possible, the length of the histories grows significantly as the system nears criticality, making run times significant. Instead, a hybrid method is proposed that leverages the traditional eigensolver while including elements of the external source. The method builds on prior work, but proposes an approach that maintains the size of the source bank and also provides a natural way of scaling tallies with the true multiplication factor. The method is demonstrated on a subcritical sphere with varying point source position and energy spectrum, as well as an approach-to-criticality problem. The results demonstrate good agreement with the fixed-source mode, with much improved particle tracking rates for near-critical problems.
</description>
<pubDate>Mon, 20 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164774</guid>
<dc:date>2025-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Afterword: Reflections from Afar, with Hope for our Collective Future</title>
<link>https://hdl.handle.net/1721.1/164773</link>
<description>Afterword: Reflections from Afar, with Hope for our Collective Future
Henderson, Diana E
Appearing in the wake of a decade of rapid growth in Indian screen Shakespeare as an academic subspeciality, ‘Adapting Shakespearean Romance in Indian Cinema’ reveals how cross-cultural comparison and attention to popular reception can profitably modify inherited critical assumptions for all Shakespeare’s readership. Taking ‘romance’ as a key term, this afterword considers the possibilities and potential problems of recasting its dominant meaning as thematic, focusing on modern love, rather than as a dramatic subgenre. In a time of increasing political censorship and existential threats to gender studies, greater engagement and exchange between those in other areas of Shakespeare studies with this rich cinematic corpus and its aligned subfield of cross-disciplinary criticism provides reasons for hope and renewed community.
</description>
<pubDate>Thu, 02 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164773</guid>
<dc:date>2025-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Resolving the Contested Future of the GSEs: The Stakes Are High</title>
<link>https://hdl.handle.net/1721.1/164772</link>
<description>Resolving the Contested Future of the GSEs: The Stakes Are High
Golding, Edward; Wachter, Susan
Seventeen years after entering conservatorship, Fannie Mae and Freddie Mac remain central to the future of U.S. housing finance. This paper evaluates the feasibility of their exit from conservatorship without Congressional action, assessing repayment of the federal bailout, capital adequacy under current regulatory frameworks, and the durability of structural reforms. It puts forth a utility model that preserves liquidity, affordability, and mission alignment while mitigating risks of increased mortgage costs. Treasury mechanisms—including commitment fees and stock warrant monetization—are examined as tools to support affordable housing and fulfill charter mandates. A carefully structured exit, supported by robust oversight and capital standards, can balance adequate financial returns with public purpose. A regulatory framework that maintains stable lending standards and pricing over the business cycle is essential to reducing investor-required returns and enhancing affordability, thereby resolving the contested future of the GSEs.
</description>
<pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164772</guid>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Stabilizing far-from-equilibrium (Mo,Ti)S2 thin films by metal sulfurization at reduced temperature</title>
<link>https://hdl.handle.net/1721.1/164771</link>
<description>Stabilizing far-from-equilibrium (Mo,Ti)S2 thin films by metal sulfurization at reduced temperature
Li, Yifei; Reidy, Kate; Penn, Aubrey; Lee, Seng Huat; Wang, Baoming; Ye, Kevin; Mao, Zhiqiang; Ross, Frances M; Jaramillo, R
We report the synthesis of large-area, high-Ti-content, Mo1−xTixS2 alloy thin films in the 2H phase at temperatures as low as 500 °C using a scalable two-step method of metal film deposition, followed by sulfurization in H2S. Film processing at higher temperature accelerates Ti segregation, film coarsening, and the formation of TiS2 in the 1T phase. Crystal growth at higher temperature results in the formation of multiple binary sulfide phases, in agreement with the equilibrium phase diagram. Making highly metastable, smooth, and uniform single-phase alloy films, therefore, hinges on developing low-temperature processing. Our results are relevant to the development of technologies based on designer transition metal dichalcogenide alloys, including in photonic integrated circuits and gas sensing.
</description>
<pubDate>Thu, 16 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164771</guid>
<dc:date>2023-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental study of lower hybrid wave power absorption on EAST</title>
<link>https://hdl.handle.net/1721.1/164770</link>
<description>Experimental study of lower hybrid wave power absorption on EAST
Baek, S-G; Li, MH; Bonoli, PT; Ding, BJ; Wallace, GM; Chen, JL; Duan, YM; Gong, XZ; Qian, JP; Wang, L; Yang, H; Zang, Q; Zhang, JY; Zhang, XJ
Lower hybrid power absorption analysis is presented for EAST high-density plasmas using the power modulation technique. The change in the plasma and magnetic energy is monitored to evaluate the power absorption coefficient by linearizing the change over the first 10 msec for the given input power. The evaluated power absorption coefficient is approximately 0.44 (0.35) for 4.6 GHz (2.45 GHz) at n̄e = 3.5×10¹⁹ m⁻³. GENRAY/CQL3D current drive modeling suggests that a combination of antenna spectrum, accessibility, and edge losses could primarily be responsible for the observed level of power absorption. Evidence of first-pass parasitic LH power flow causing impurity sputtering is also reported, suggesting a need for optimum power coupling. Implications of the experimental findings are discussed.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164770</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Limiting role of dislocations in high-current AlGaN/GaN hot electron transistors</title>
<link>https://hdl.handle.net/1721.1/164769</link>
<description>Limiting role of dislocations in high-current AlGaN/GaN hot electron transistors
Daulton, J. W.; Molnar, R. J.; Brinkerhoff, J. A.; Weir, T. J.; Hollis, M. A.; Zaslavsky, A.
III-nitride-based hot electron transistors (HETs) hold significant promise as high-speed, high-power devices. In our previous work, we demonstrated high current density and common-emitter gain at room temperature. Here, we measure multiple devices at cryogenic temperatures, extending the Gummel characteristics past the onset of intervalley scattering at 77 K. We demonstrate a Gummel current gain of 4.7 at a collector current density of 2.6 MA/cm2 at 77 K as well as a peak current density exceeding 3 MA/cm2. From these data, we determine that dislocation-associated inhomogeneities play a limiting role in AlGaN/GaN HETs, controlling the current gain, density, knee voltage, and base-collector leakage. A comparison of two nominally identical devices suggests that even a modest reduction in dislocation density would result in a substantial improvement in HET performance.
</description>
<pubDate>Tue, 06 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164769</guid>
<dc:date>2024-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>SeerCuts: Explainable Attribute Discretization</title>
<link>https://hdl.handle.net/1721.1/164768</link>
<description>SeerCuts: Explainable Attribute Discretization
Lai, Eugenie; Croitoru, Inbal; Bitton, Noam; Shalem, Ariel; Youngmann, Brit; Galhotra, Sainyam; Rezig, El Kindi; Cafarella, Michael
This demonstration showcases SeerCuts, a tool that suggests useful and semantically meaningful discretization strategies (partitions) for numerical attributes. SeerCuts is a generic, interactive framework where users specify attributes to discretize and their utility measure for a downstream task of choice. It uses GPT-4o to assess the semantic meaningfulness of candidate partitions and employs an efficient search strategy to explore the vast space of discretization options. With hierarchical clustering to group related partitions and a multi-armed bandit policy to identify useful partitions with only a few samples, SeerCuts quickly finds meaningful and useful partitions. In the demo, we will provide an overview of SeerCuts and allow the audience to explore various datasets and tasks, including data visualization and comprehensive modeling. The users will be able to evaluate how SeerCuts identifies meaningful discretization strategies and compare the tradeoff between different discretization options.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164768</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>PalimpChat: Declarative and Interactive AI analytics</title>
<link>https://hdl.handle.net/1721.1/164767</link>
<description>PalimpChat: Declarative and Interactive AI analytics
Liu, Chunwei; Vitagliano, Gerardo; Rose, Brandon; Printz, Matthew; Samson, David Andrew; Cafarella, Michael
Thanks to the advances in generative architectures and large language models, data scientists can now code pipelines of AI operations to process large collections of unstructured data. Recent progress has seen the rise of declarative AI frameworks (e.g., Palimpzest, Lotus, and DocETL) to build optimized and increasingly complex pipelines, but these systems often remain accessible only to expert programmers. In this demonstration, we present PalimpChat, a chat-based interface to Palimpzest that bridges this gap by letting users create and run sophisticated AI pipelines through natural language alone. By integrating Archytas, a ReAct-based reasoning agent, and Palimpzest's suite of relational and LLM-based operators, PalimpChat provides a practical illustration of how a chat interface can make declarative AI frameworks truly accessible to non-experts.&#13;
Our demo system is publicly available online. At SIGMOD'25, participants can explore three real-world scenarios (scientific discovery, legal discovery, and real estate search) or apply PalimpChat to their own datasets. In this paper, we focus on how PalimpChat, supported by the Palimpzest optimizer, simplifies complex AI workflows such as extracting and analyzing biomedical data.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164767</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>CauSumX: Summarized Causal Explanations For Group-By-Average Queries</title>
<link>https://hdl.handle.net/1721.1/164766</link>
<description>CauSumX: Summarized Causal Explanations For Group-By-Average Queries
Levy, Nativ; Cafarella, Michael; Gilad, Amir; Roy, Sudeepa; Youngmann, Brit
Group-by-average SQL queries are a cornerstone of data analysis, often employed to uncover patterns and trends within datasets. However, interpreting the results of these queries can be challenging and time-intensive, particularly when working with large, high-dimensional datasets. Automating the generation of explanations for such queries can greatly enhance analysts' ability to derive meaningful insights while reducing human effort. Effective explanations must balance succinctness and depth, offering insights into different patterns across aggregate results, while crucially reflecting cause-effect relationships rather than mere correlations. This ensures that users can make informed, data-driven decisions grounded in reality. In this demonstration, we present CauSumX, a system that produces concise and causal explanations for group-by-average queries. Leveraging background causal knowledge, CauSumX identifies the key causal factors driving variations in the outcome variable across different groups. The system employs an efficient algorithm based on a recently published paper. We will demonstrate the utility of CauSumX for generating useful summarized causal explanations by interacting with the SIGMOD'25 participants, who will act as data analysts aiming to explain their query results.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164766</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>CausaLens: A System for Summarizing Causal DAGs</title>
<link>https://hdl.handle.net/1721.1/164765</link>
<description>CausaLens: A System for Summarizing Causal DAGs
Chen, Noam; Zeng, Anna; Cafarella, Michael; Kenig, Batya; Markakis, Markos; Mishali, Oren; Youngmann, Brit; Salimi, Babak
Causal inference aids researchers in discovering causal relationships, leading to scientific insights. Pearl's causal model uses causal DAGs to estimate causal effects, so DAG correctness is essential for reliable causal conclusions. However, for high-dimensional data, the causal DAGs are often complex beyond human verifiability. Graph summarization is a logical next step, but current methods for general-purpose graph summarization are inadequate for causal DAG summarization, as they are not designed to preserve causal information. In this demonstration, we present a system called CausaLens that summarizes a given causal DAG and balances graph simplification for better understanding and retention of essential causal information for reliable inference directly on the summary DAG. We illustrate that causal inference on the summary DAG is more robust to misspecification in the initial causal DAG compared to performing inference directly on the initial causal DAG, thereby enhancing the robustness of causal inference. We will demonstrate the utility of CausaLens for generating useful summary causal DAGs by interacting with the SIGMOD'25 participants, who will act as data analysts aiming to perform causal analysis on high dimensional datasets.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164765</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>First Workshop on Novel Optimizations for Visionary AI Systems (NOVAS)</title>
<link>https://hdl.handle.net/1721.1/164764</link>
<description>First Workshop on Novel Optimizations for Visionary AI Systems (NOVAS)
Vitagliano, Gerardo; Liu, Chunwei; Cao, Lei; Sun, Huan; Papotti, Paolo
The first NOVAS workshop (Novel Optimizations for Visionary AI Systems) is aimed at hosting novel work at the intersection between artificial intelligence and data management. This area has emerged with the rise of transformer-based architectures, which have revolutionized data processing across modalities. While these models benefit from massive pre-training and large-context inference, there are significant challenges related to scalability, determinism, and resource constraints. These issues, long studied in the data management community, have sparked a convergence between generative AI and traditional database research.&#13;
The workshop will be held on June 22nd, in conjunction with SIGMOD/PODS 2025. The workshop solicits regular and short papers on topics including hardware and execution optimizations, high-level programming abstractions, integration of LLMs with relational databases, and new transformer architectures for structured data. By bringing together the communities of machine learning, data systems, and information retrieval, NOVAS aims to become the venue to discuss, share ideas and early results, and spark new research collaborations for the next generation of data-driven AI systems.
SIGMOD-Companion ’25, Berlin, Germany
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164764</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Efficient Discovery of Hyperelastic TPMS Metamaterials with Extreme Energy Dissipation</title>
<link>https://hdl.handle.net/1721.1/164763</link>
<description>Data-Efficient Discovery of Hyperelastic TPMS Metamaterials with Extreme Energy Dissipation
Perroni-Scharf, Maxine; Ferguson, Zachary; Butruille, Thomas; Portela, Carlos; Konaković Luković, Mina
Triply periodic minimal surfaces (TPMS) are a class of metamaterials with a variety of applications and well-known primitive morphologies. We present a new method for discovering novel microscale TPMS structures with exceptional energy-dissipation capabilities, achieving double the energy absorption of the best existing TPMS primitive structure. Our approach employs a parametric representation, allowing seamless interpolation between structures and representing a rich TPMS design space. As simulations are intractable for efficiently optimizing microscale hyperelastic structures, we propose a sample-efficient computational strategy for rapid discovery with limited empirical data from 3D-printed and tested samples that ensures high-fidelity results. We achieve this by leveraging a predictive uncertainty-aware Deep Ensembles model to identify which structures to fabricate and test next. We iteratively refine our model through batch Bayesian optimization, selecting structures for fabrication that maximize exploration of the performance space and exploitation of our energy-dissipation objective. Using our method, we produce the first open-source dataset of hyperelastic microscale TPMS structures, including a set of novel structures that demonstrate extreme energy dissipation capabilities, and show several potential applications of these structures.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164763</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Splat and Replace: 3D Reconstruction with Repetitive Elements</title>
<link>https://hdl.handle.net/1721.1/164762</link>
<description>Splat and Replace: 3D Reconstruction with Repetitive Elements
Violante, Nicolas; Meuleman, Andréas; Gauthier, Alban; Durand, Fredo; Groueix, Thibault; Drettakis, George
We leverage repetitive elements in 3D scenes to improve novel view synthesis. Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have greatly improved novel view synthesis but renderings of unseen and occluded parts remain low-quality if the training views are not exhaustive enough. Our key observation is that our environment is often full of repetitive elements. We propose to leverage those repetitions to improve the reconstruction of low-quality parts of the scene due to poor coverage and occlusions. We propose a method that segments each repeated instance in a 3DGS reconstruction, registers them together, and allows information to be shared among instances. Our method improves the geometry while also accounting for appearance variations across instances. We demonstrate our method on a variety of synthetic and real scenes with typical repetitive elements, leading to a substantial improvement in the quality of novel view synthesis.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164762</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Variational Elastodynamic Simulation</title>
<link>https://hdl.handle.net/1721.1/164761</link>
<description>Variational Elastodynamic Simulation
Mattos Da Silva, Leticia; Sellán, Silvia; Pacheco-Tallaj, Natalia; Solomon, Justin
Numerical schemes for time integration are the cornerstone of dynamical simulations for deformable solids. The most popular time integrators for isotropic distortion energies rely on nonlinear root-finding solvers, most prominently, Newton’s method. These solvers are computationally expensive and sensitive to ill-conditioned Hessians and poor initial guesses; these difficulties can particularly hamper the effectiveness of variational integrators, whose momentum conservation properties require reliable root-finding. To tackle these difficulties, this paper shows how to express variational time integration for a large class of elastic energies as an optimization problem with a “hidden” convex substructure. This hidden convexity suggests uses of optimization techniques with rigorous convergence analysis, guaranteed inversion-free elements, and conservation of physical invariants up to tolerance/numerical precision. In particular, we propose an Alternating Direction Method of Multipliers (ADMM) algorithm combined with a proximal operator step to solve our formulation. Empirically, our integrator improves the performance of elastic simulation tasks, as we demonstrate in a number of examples.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164761</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Strain-Tunable Thermal Conductivity in Largely Amorphous Polyolefin Fibers via Alignment-Induced Vibrational Delocalization</title>
<link>https://hdl.handle.net/1721.1/164760</link>
<description>Strain-Tunable Thermal Conductivity in Largely Amorphous Polyolefin Fibers via Alignment-Induced Vibrational Delocalization
Developing fast, reversible, and recyclable thermal switches is essential to advance adaptive thermal management. Here, we present a strain-tunable thermal switch based on largely amorphous olefin block copolymer (OBC) fibers, achieving a continuous switching ratio above 2 over 1000 cycles, as well as very short response times below 0.22 s. Using Raman spectroscopy, we quantify vibrational delocalization with increasing strain and demonstrate its direct connection to the observed thermal conductivity changes. We show that unlike prior assumptions linking propagating heat carriers primarily to crystalline domains, alignment in amorphous systems can enable phonon-like modes that dominate transport. To our best knowledge, this work is the first to experimentally probe vibrational delocalization using Raman spectroscopy and to demonstrate that alignment alone can govern the dominant carrier in disordered polymers. These findings establish design strategies for fatigue-resistant, high-performance, and recyclable polymer thermal switches for advanced thermal energy transport applications.
</description>
<pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164760</guid>
<dc:date>2026-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Introducing synchromodality: One missing link between transportation and supply chain management</title>
<link>https://hdl.handle.net/1721.1/164759</link>
<description>Introducing synchromodality: One missing link between transportation and supply chain management
Acero, Beatriz; Saenz, Maria Jesus; Luzzini, Davide
This study develops and tests the synchromodality construct, a novel supply chain concept that integrates the flexible use of different transport modes based on real-time information. At a time when global supply chains are complex and subject to uncertainty, synchromodality has emerged at the forefront of research and practice as a tool to ensure efficient delivery performance and thus supply chain competitiveness. Although synchromodality is attracting the attention of leading companies and policy makers, only scholars within the transport research community have engaged with the topic so far. We believe a supply chain management perspective is missing, but essential, to develop the full potential of synchromodality. Our study shows that synchromodality capabilities encapsulate four key elements: visibility, integration, multi-modal transport, and flexibility. Through a three-stage research approach exploiting multiple methods, this study conceptualizes, develops, and validates the first synchromodality measurement model, which reflects the multidimensional nature of the concept. We hope to set the stage for a number of potential future research opportunities that can explore synchromodality implementation and outcomes.
</description>
<pubDate>Mon, 24 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164759</guid>
<dc:date>2021-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Shape Space Spectra</title>
<link>https://hdl.handle.net/1721.1/164758</link>
<description>Shape Space Spectra
Chang, Yue; Benchekroun, Otman; Chiaramonte, Maurizio M.; Chen, Peter Yichen; Grinspun, Eitan
Eigenanalysis of differential operators, such as the Laplace operator or elastic energy Hessian, is typically restricted to a single shape and its discretization, limiting reduced order modeling (ROM). We introduce the first eigenanalysis method for continuously parameterized shape families. Given a parametric shape, our method constructs spatial neural fields that represent eigen-functions across the entire shape space. It is agnostic to the specific shape representation, requiring only an inside/outside indicator function that depends on shape parameters. Eigenfunctions are computed by minimizing a variational principle over nested spaces with orthogonality constraints. Since eigenvalues may swap dominance at points of multiplicity, we jointly train multiple eigenfunctions while dynamically reordering them based on their eigenvalues at each step. Through causal gradient filtering, this reordering is reflected in backpropagation. Our method enables applications to operate over shape space, providing a single ROM that encapsulates vibration modes for all shapes, including previously unseen ones. Since our eigenanalysis is differentiable with respect to shape parameters, it facilitates eigenfunction-aware shape optimization. We evaluate our approach on shape optimization for sound synthesis and locomotion, as well as reduced-order modeling for elastodynamic simulation.
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164758</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Mesh Processing on the GPU</title>
<link>https://hdl.handle.net/1721.1/164757</link>
<description>Dynamic Mesh Processing on the GPU
Mahmoud, Ahmed H.; Porumbescu, Serban D.; Owens, John D.
We present a system for dynamic triangle mesh processing entirely on the GPU. Our system features an efficient data structure that enables rapid updates to mesh connectivity and attributes. By partitioning the mesh into small patches, we process all dynamic updates for each patch within the GPU's fast shared memory. This approach leverages speculative processing for conflict handling, minimizing rollback costs, maximizing parallelism, and reducing locking overhead. Additionally, we introduce a new programming model for dynamic mesh processing. This model provides concise semantics for dynamic updates, abstracting away concerns about conflicting updates during parallel execution. At the core of our model is the cavity operator, a general mesh update operator that facilitates any dynamic operation by removing a set of mesh elements and inserting new ones into the resulting void. We applied our system to various GPU applications, including isotropic remeshing, surface tracking, mesh decimation, and Delaunay edge flips. On large inputs, our system achieves an order-of-magnitude speedup compared to multi-threaded CPU solutions and is more than two orders of magnitude faster than state-of-the-art single-threaded CPU solutions. Furthermore, our data structure outperforms state-of-the-art GPU static data structures in terms of both speed and memory efficiency.
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164757</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton Fine Structure in 2D Perovskites: The Out‐of‐Plane Excitonic State</title>
<link>https://hdl.handle.net/1721.1/164756</link>
<description>Exciton Fine Structure in 2D Perovskites: The Out‐of‐Plane Excitonic State
Posmyk, Katarzyna; Dyksik, Mateusz; Surrente, Alessandro; Maude, Duncan K; Zawadzka, Natalia; Babiński, Adam; Molas, Maciej R; Paritmongkol, Watcharaphol; Mączka, Mirosław; Tisdale, William A; Plochocka, Paulina; Baranowski, Michał
2D Ruddlesden-Popper metal-halide perovskites feature particularly strong excitonic effects, making them a fascinating playground for studying exciton physics. A complete understanding of the properties of this quasi-particle is crucial to fully exploit the tremendous potential of 2D perovskites (2DP) in light emission applications. Despite intense investigations, some of the exciton properties remain elusive to date, for example, the energy-ordering of the exciton states within the so-called fine structure manifold. Using optical spectroscopy, we demonstrate that in the archetypical 2DP (PEA)2PbI4, in contradiction to theoretical predictions, the energy of the bright out-of-plane exciton state is higher than that of the two in-plane states. Having elucidated the order of the exciton fine structure, we determine the g-factor of the dark exciton transition, together with the values of the electron and hole g-factors in the direction parallel to the c-axis of the crystal. In this way, we provide, for the first time, a complete picture of the exciton fine structure in (PEA)2PbI4 2DP.
</description>
<pubDate>Tue, 23 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164756</guid>
<dc:date>2024-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite</title>
<link>https://hdl.handle.net/1721.1/164755</link>
<description>Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite
Zhang, Zhuquan; Zhang, Jiahao; Liu, Zi-Jie; Dahod, Nabeel S; Paritmongkol, Watcharaphol; Brown, Niamh; Stollmann, Alexia; Lee, Woo Seok; Chien, Yu-Che; Dai, Zhenbang; Nelson, Keith A; Tisdale, William A; Rappe, Andrew M; Baldini, Edoardo
Layered hybrid perovskites exhibit emergent physical properties and exceptional functional performances, but the coexistence of lattice order and structural disorder severely hinders our understanding of these materials. One unsolved problem regards how the lattice dynamics are affected by the dimensional engineering of the inorganic frameworks and their interaction with the molecular moieties. Here, we address this question by using a combination of spontaneous Raman scattering, terahertz spectroscopy, and molecular dynamics simulations. This approach reveals the structural dynamics in and out of equilibrium and provides unexpected observables that differentiate single- and double-layered perovskites. While no distinct vibrational coherence is observed in double-layered perovskites, an off-resonant terahertz pulse can drive a long-lived coherent phonon mode in the single-layered system. This difference highlights the dramatic change in the lattice environment as the dimension is reduced, and the findings pave the way for ultrafast structural engineering and high-speed optical modulators based on layered perovskites.
</description>
<pubDate>Wed, 16 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164755</guid>
<dc:date>2023-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Bright Excitonic Fine Structure in Metal-Halide Perovskites: From Two-Dimensional to Bulk</title>
<link>https://hdl.handle.net/1721.1/164754</link>
<description>Bright Excitonic Fine Structure in Metal-Halide Perovskites: From Two-Dimensional to Bulk
Posmyk, Katarzyna; Zawadzka, Natalia; Łucja Kipczak; Dyksik, Mateusz; Surrente, Alessandro; Maude, Duncan K; Kazimierczuk, Tomasz; Babiński, Adam; Molas, Maciej R; Bumrungsan, Wakul; Chooseng, Chanisara; Paritmongkol, Watcharaphol; Tisdale, William A; Baranowski, Michał; Plochocka, Paulina
The optical response of two-dimensional (2D) perovskites, often referred to as natural quantum wells, is primarily governed by excitons, whose properties can be readily tuned by adjusting the perovskite layer thickness. We have investigated the exciton fine structure splitting in the archetypal 2D perovskite (PEA)2(MA)n−1PbnI3n+1 with varying numbers of inorganic octahedral layers n = 1, 2, 3, and 4. We demonstrate that the in-plane excitonic states exhibit splitting and orthogonally oriented dipoles for all confinement regimes. The evolution of the exciton states in an external magnetic field provides further insights into the g-factors and diamagnetic coefficients. With increasing n, we observe a gradual evolution of the excitonic parameters characteristic of a 2D to three-dimensional transition. Our results provide valuable information concerning the evolution of the optoelectronic properties of 2D perovskites with the changing confinement strength.
</description>
<pubDate>Wed, 07 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164754</guid>
<dc:date>2024-02-07T00:00:00Z</dc:date>
</item>
<item>
<title>Persistent enhancement of exciton diffusivity in CsPbBr3 nanocrystal solids</title>
<link>https://hdl.handle.net/1721.1/164753</link>
<description>Persistent enhancement of exciton diffusivity in CsPbBr3 nanocrystal solids
Shcherbakov-Wu, Wenbi; Saris, Seryio; Sheehan, Thomas John; Wong, Narumi Nagaya; Powers, Eric R; Krieg, Franziska; Kovalenko, Maksym V; Willard, Adam P; Tisdale, William A
In semiconductors, exciton or charge carrier diffusivity is typically described as an inherent material property. Here, we show that the transport of excitons among CsPbBr3 perovskite nanocrystals (NCs) depends markedly on how recently those NCs were occupied by a previous exciton. Using transient photoluminescence microscopy, we observe a striking dependence of the apparent exciton diffusivity on excitation laser power that does not arise from nonlinear exciton-exciton interactions or thermal heating. We interpret our observations with a model in which excitons cause NCs to transition to a long-lived metastable configuration that markedly increases exciton transport. The exciton diffusivity observed here (&gt;0.15 square centimeters per second) is considerably higher than that observed in other NC systems, revealing unusually strong excitonic coupling between NCs. The finding of a persistent enhancement in excitonic coupling may help explain other photophysical behaviors observed in CsPbBr3 NCs, such as superfluorescence, and inform the design of optoelectronic devices.
</description>
<pubDate>Wed, 21 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164753</guid>
<dc:date>2024-02-21T00:00:00Z</dc:date>
</item>
<item>
<title>All-Perovskite Multicomponent Nanocrystal Superlattices</title>
<link>https://hdl.handle.net/1721.1/164752</link>
<description>All-Perovskite Multicomponent Nanocrystal Superlattices
Sekh, Taras V; Cherniukh, Ihor; Kobiyama, Etsuki; Sheehan, Thomas J; Manoli, Andreas; Zhu, Chenglian; Athanasiou, Modestos; Sergides, Marios; Ortikova, Oleksandra; Rossell, Marta D; Bertolotti, Federica; Guagliardi, Antonietta; Masciocchi, Norberto; Erni, Rolf; Othonos, Andreas; Itskos, Grigorios; Tisdale, William A; Stöferle, Thilo; Rainò, Gabriele; Bodnarchuk, Maryna I; Kovalenko, Maksym V
Nanocrystal superlattices (NC SLs) have long been sought as promising metamaterials, with nanoscale-engineered properties arising from collective and synergistic effects among the constituent building blocks. Lead halide perovskite (LHP) NCs come across as outstanding candidates for SL design, as they demonstrate collective light emission, known as superfluorescence, in single- and multicomponent SLs. Thus far, LHP NCs have only been assembled in single-component SLs or coassembled with dielectric NC building blocks acting solely as spacers between luminescent NCs. Here, we report the formation of multicomponent LHP NC-only SLs, i.e., using only CsPbBr3 NCs of different sizes as building blocks. The structural diversity of the obtained SLs encompasses the ABO6, ABO3, and NaCl structure types, all of which contain orientationally and positionally locked NCs. For the selected model system, the ABO6-type SL, we observed efficient NC coupling and Förster-like energy transfer from strongly confined 5.3 nm CsPbBr3 NCs to weakly confined 17.6 nm CsPbBr3 NCs, along with characteristic superfluorescence features at cryogenic temperatures. Spatiotemporal exciton dynamics measurements reveal that binary SLs exhibit enhanced exciton diffusivity compared to single-component NC assemblies across the entire temperature range (from 5 to 298 K). The observed coherent and incoherent NC coupling and controllable excitonic transport within the solid NC SLs hold promise for applications in quantum optoelectronic devices.
</description>
<pubDate>Wed, 06 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164752</guid>
<dc:date>2024-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Electrical manipulation of dissipation in microwave photon–magnon hybrid system through the spin Hall effect</title>
<link>https://hdl.handle.net/1721.1/164751</link>
<description>Electrical manipulation of dissipation in microwave photon–magnon hybrid system through the spin Hall effect
Hou, Justin T; Chou, Chung-Tao; Han, Jiahao; Fan, Yabin; Liu, Luqiao
Hybrid dynamic systems combine advantages from different subsystems for realizing information processing tasks in both classical and quantum domains. However, the lack of controlling knobs in tuning system parameters becomes a severe challenge in developing scalable, versatile hybrid systems for useful applications. Here, we report an on-chip microwave photon–magnon hybrid system where the dissipation rates and the coupling cooperativity can be electrically influenced by the spin Hall effect. Through magnon–photon coupling, the linewidths of the resonator photon mode and the hybridized magnon polariton modes are effectively changed by the spin injection into the magnetic wires from an applied direct current, which exhibit different trends in samples with low and high coupling strengths. Moreover, the linewidth modification by the spin Hall effect shows strong dependence on the detuning of the two subsystems, in contrast to the classical behavior of a standalone magnonic device. Our results point to a direction of realizing tunable, on-chip, scalable magnon-based hybrid dynamic systems, where spintronic effects provide useful control mechanisms.
</description>
<pubDate>Mon, 12 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164751</guid>
<dc:date>2024-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>When Will (Game) Wars End?</title>
<link>https://hdl.handle.net/1721.1/164749</link>
<description>When Will (Game) Wars End?
Bhatia, Manan; Chin, Byron; Mani, Nitya; Mossel, Elchanan
We study several variants of the classical card game war. As anyone who has played this game knows, the game can take some time to terminate, but it usually does. Here, we analyze a number of asymptotic variants of the game, where the number of cards is n, and show that all have an expected termination time of order n². This is the same expected termination time as the game where at each turn a fair coin toss decides which player wins a card, a process known as Gambler's Ruin that was studied by Blaise Pascal, Pierre de Fermat, and others in the seventeenth century.
</description>
<pubDate>Fri, 02 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164749</guid>
<dc:date>2026-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Semiconductor-free, monolithically 3D-printed logic gates and resettable fuses</title>
<link>https://hdl.handle.net/1721.1/164748</link>
<description>Semiconductor-free, monolithically 3D-printed logic gates and resettable fuses
Cañada, Jorge; Velásquez-García, Luis Fernando
Additive manufacturing has the potential to enable the inexpensive, single-step fabrication of fully functional electromechanical devices. However, while the 3D printing of mechanical parts and passive electrical components is well developed, the fabrication of fully 3D-printed active electronics, which are the cornerstone of intelligent devices, remains a challenge. Existing examples of 3D-printed active electronics show potential but lack integrability and accessibility. This work reports the first active electronics fully 3D-printed via material extrusion, i.e., one of the most accessible and versatile additive manufacturing processes. The technology is demonstrated as a proof of concept through the implementation of the first fully 3D-printed, semiconductor-free, solid-state logic gates, and the first fully 3D-printed resettable fuses. The devices take advantage of a positive temperature coefficient phenomenon found to affect narrow traces of 3D-printed copper-reinforced polylactic acid. Although the reported devices do not perform competitively against semiconductor-enabled integrated circuits, the customisability and accessibility intrinsic to material extrusion additive manufacturing make this technology promisingly disruptive. This work serves as a stepping stone for the semiconductor-free democratisation of electronic device fabrication and is of immediate relevance for the manufacture of custom, intelligent devices far from traditional manufacturing centres.
</description>
<pubDate>Sat, 21 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164748</guid>
<dc:date>2024-09-21T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid large-scale building damage level classification after earthquakes using deep learning with Lidar and satellite optical data</title>
<link>https://hdl.handle.net/1721.1/164747</link>
<description>Rapid large-scale building damage level classification after earthquakes using deep learning with Lidar and satellite optical data
Liu, Chang; Ge, Linlin; Bai, Ting
In post-earthquake scenarios, the swift assessment of building damage levels is pivotal for efficient emergency response and recovery planning. Nevertheless, conventional in-situ damage evaluations are time-consuming. Current satellite-based deep learning methods save time but often lack detail, usually classifying damage as either collapsed or intact. This two-level information is insufficient for rescue or recovery planning. Light Detection and Ranging (Lidar)-based deep learning methods, which provide three-dimensional (3D) information, can address this lack of damage detail. Therefore, this paper proposes a deep learning-based building damage level classification method using both Lidar and satellite data. The proposed method classifies damage into four levels: no/minor damage, partially collapsed, totally collapsed, and story failure. The developed network builds upon RandLA-Net, incorporating surface normal vectors to enhance accuracy. A colourised Lidar dataset was created for the network, and results underscore the advantage of incorporating surface normal information. A framework is also proposed based on the damage level outcomes of the developed network, which aids in emergency response efforts. Consequently, this paper demonstrates the practical utility of deep learning networks in rapidly assessing detailed building damage levels after earthquakes. Its practical contribution is guiding decision-making during the critical phases of post-earthquake response and recovery.
</description>
<pubDate>Tue, 31 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164747</guid>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Cortical somatostatin innervation follows a unique experience-independent developmental trajectory</title>
<link>https://hdl.handle.net/1721.1/164746</link>
<description>Cortical somatostatin innervation follows a unique experience-independent developmental trajectory
Boivin, Josiah R; Schmerl, Bettina; Martin, Kendyll B; Lee, Chia-Fang; Nedivi, Elly
&lt;jats:p&gt;Despite the critical role of inhibition in regulating developmental plasticity, there are significant gaps in our understanding of inhibitory synapse development, particularly for the vast majority of inhibitory synapses that reside on dendrites. Dendritic inhibitory synapses, canonically arising from somatostatin (SST)-expressing neurons, are challenging to detect electrophysiologically and difficult to visualize without a molecular tag. Here, we integrate a genetic synapse labeling strategy with epitope-preserving magnified analysis of proteome (eMAP), a combination of tissue expansion and clearing, to reveal the development of SST innervation in the primary visual cortex of male and female mice. Unlike excitatory innervation, which follows a deep to shallow progression and undergoes pruning, we find that SST bouton formation occurs simultaneously across all cortical layers and is not subject to a period of net pruning. SST bouton and synapse formation occur most dramatically in the days following eye opening and during the opening of the critical period for ocular dominance plasticity. Yet, despite a coincidence with these visual milestones, neither SST bouton nor synapse formation depend on visual experience. This is in contrast to excitatory and non-SST inhibitory synapses, whose development has been shown to depend heavily on visual experience. Thus, SST cortical innervation follows a unique developmental trajectory that is independent of sensory experience and is optimally timed to regulate processes that are fundamental to cortical circuit maturation.&lt;/jats:p&gt;&#13;
                  &lt;jats:p&gt;&#13;
                    &lt;jats:bold&gt;Significance statement&lt;/jats:bold&gt;&#13;
                    During development, neurons form extensive synaptic connections while maintaining a delicate balance of excitation and inhibition. It is critical to understand how different subpopulations of synapses form during development, because perturbations in this precisely coordinated process can cause neurodevelopmental disorders. Here, we reveal at unprecedented resolution the development of cortical inhibitory innervation from somatostatin-expressing neurons, which canonically target dendrites. We show that somatostatin neurons follow different rules than other cell types during development, and somatostatin innervation is well-timed to contribute to developmental processes that are central to healthy cortical function. Our results provide new insights on how somatostatin neurons, a critically influential cell type, integrate into cortical circuitry during development.&#13;
                  &lt;/jats:p&gt;
</description>
<pubDate>Tue, 13 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164746</guid>
<dc:date>2026-01-13T00:00:00Z</dc:date>
</item>
<item>
<title>Neruda through copper-coloured glasses: the role of place attachment in the embeddedness of Chilean entrepreneurship</title>
<link>https://hdl.handle.net/1721.1/164745</link>
<description>Neruda through copper-coloured glasses: the role of place attachment in the embeddedness of Chilean entrepreneurship
Burke, M. Kathleen; Conley, Mark A.; Jack, Sarah L.
Despite scholarly interest in how emotional and instrumental place attachments motivate entrepreneurship, the influences on embeddedness remain underexplored. Building on the notion that entrepreneurship becomes embedded in a locality, we argue that this process is packed with place-based interpretations of material and imagined reality. Engaging with the empirical setting of Chile, the world’s largest copper producer, we examine the interactions between place attachment, embeddedness and natural resource-based entrepreneurship. We uncover these interactions by analysing several works of poetry by Nobel laureate Pablo Neruda that focus on the diverging place attachment styles of local and multinational agents. Reflecting on the poems, we show how historical changes within the Chilean mining industry and broader societal changes are visible in Neruda’s imagery of place attachments, emotions and concerns for local conditions. We problematize embeddedness and entrepreneurship by illuminating the place attachments shaping local actors’ entrepreneurial imagination, thus contributing to knowledge about being embedded in natural resource-based entrepreneurship contexts. We provide new insights into how place attachment can evolve alongside different forms of embedded entrepreneurship.
</description>
<pubDate>Sat, 11 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164745</guid>
<dc:date>2025-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Human-centric manufacturing culture: a research study of MedTech manufacturers in Ireland</title>
<link>https://hdl.handle.net/1721.1/164744</link>
<description>Human-centric manufacturing culture: a research study of MedTech manufacturers in Ireland
Rhodes, Donna H; Cuddy, Sara; Jeffers, Malcolm; O’Rourke, Fiona
Digital manufacturing is rapidly evolving; however, this transformation is predominantly technology-centric. Human-centric manufacturing shifts the paradigm for the digital manufacturing enterprise towards a human focus in realising its envisioned digital future. In that context, Digital Manufacturing Ireland (DMI), Ireland’s expert body for driving digital adoption across manufacturing, initiated a research study in collaboration with two research partners, MIT and IAAE, in support of this important focus for future manufacturing. This paper discusses results of the DMI 2023 Human-Centric Manufacturing Culture Study, which engaged manufacturing leaders from 11 MedTech companies with major manufacturing sites in Ireland. Overall findings are discussed, with a focus on 12 emergent themes grouped into four categories: imperatives, values, strategies, and practices. Planned collaboration initiatives and anticipated future research are described. This paper also highlights considerations regarding the new thinking needed by manufacturing leaders, along with recommendations as to what leaders can begin to do differently.
</description>
<pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164744</guid>
<dc:date>2025-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Decomposition of Frobenius pushforwards of line bundles on wonderful compactifications</title>
<link>https://hdl.handle.net/1721.1/164743</link>
<description>Decomposition of Frobenius pushforwards of line bundles on wonderful compactifications
Cai, Merrick; Krylov, Vasily
De Concini and Procesi introduced varieties known as wonderful compactifications, which are smooth projective compactifications of semisimple adjoint groups G. We study the Frobenius pushforwards of line bundles on the wonderful compactifications, and in particular we decompose them into a direct sum of vector subbundles and explicitly describe the ranks. We are especially interested in when these subbundles are line bundles, and in the case of G = PSL_n, we offer lower bounds on the multiplicities (as direct summands) for these line bundles.
</description>
<pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164743</guid>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>BrepDiff: Single-Stage B-rep Diffusion Model</title>
<link>https://hdl.handle.net/1721.1/164742</link>
<description>BrepDiff: Single-Stage B-rep Diffusion Model
Lee, Mingi; Zhang, Dongsu; Jambon, Clément; Kim, Young Min
The Boundary Representation (B-rep) is a widely used 3D model representation of most consumer products designed with CAD software. However, its highly irregular and sparse set of relationships poses significant challenges for designing a generative model tailored to B-reps. Existing approaches use multi-stage approaches to satisfy the complex constraints sequentially. As a result, the final geometry cannot incorporate user edits due to the non-deterministic dependencies between cascaded stages. In contrast, we propose BrepDiff, a single-stage diffusion model for B-rep generation. We present a masked UV grid representation consisting of structured point samples from faces, serving as input for a diffusion transformer. By introducing an asynchronous and shifted noise schedule, we improve the training signal, enabling the diffusion model to better capture the distribution of UV grids. The explicitness of our masked UV grid representation enables users to intuitively understand and freely design surface geometry without being constrained by topological validity. The interconnectivity can be derived from the face layout, which is later processed into a valid solid volume during post-processing. Our approach achieves performance on par with state-of-the-art cascaded models while offering complex and diverse manipulations of geometry and topology, such as shape completion, merging, and interpolation.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164742</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation</title>
<link>https://hdl.handle.net/1721.1/164741</link>
<description>SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation
Arar, Ellie; Frenkel, Yarden; Cohen-Or, Daniel; Shamir, Ariel; Vinker, Yael
Recent advancements in large vision-language models have enabled highly expressive and diverse vector sketch generation. However, state-of-the-art methods rely on a time-consuming optimization process involving repeated feedback from a pretrained model to determine stroke placement. Consequently, despite producing impressive sketches, these methods are limited in practical applications. In this work, we introduce SwiftSketch, a diffusion model for image-conditioned vector sketch generation that can produce high-quality sketches in less than a second. SwiftSketch operates by progressively denoising stroke control points sampled from a Gaussian distribution. Its transformer-decoder architecture is designed to effectively handle the discrete nature of vector representation and capture the inherent global dependencies between strokes. To train SwiftSketch, we construct a synthetic dataset of image-sketch pairs, addressing the limitations of existing sketch datasets, which are often created by non-artists and lack professional quality. For generating these synthetic sketches, we introduce ControlSketch, a method that enhances SDS-based techniques by incorporating precise spatial control through a depth-aware ControlNet. We demonstrate that SwiftSketch generalizes across diverse concepts, efficiently producing sketches that combine high fidelity with a natural and visually appealing style.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164741</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Lifting the Winding Number: Precise Discontinuities in Neural Fields for Physics Simulation</title>
<link>https://hdl.handle.net/1721.1/164740</link>
<description>Lifting the Winding Number: Precise Discontinuities in Neural Fields for Physics Simulation
Chang, Yue; Liu, Mengfei; Wang, Zhecheng; Chen, Peter Yichen; Grinspun, Eitan
Cutting thin-walled deformable structures is common in daily life, but poses significant challenges for simulation due to the introduced spatial discontinuities. Traditional methods rely on mesh-based domain representations, which require frequent remeshing and refinement to accurately capture evolving discontinuities. These challenges are further compounded in reduced-space simulations, where the basis functions are inherently geometry- and mesh-dependent, making it difficult or even impossible for the basis to represent the diverse family of discontinuities introduced by cuts.&#13;
Recent advances in representing basis functions with neural fields offer a promising alternative, leveraging their discretization-agnostic nature to represent deformations across varying geometries. However, the inherent continuity of neural fields is an obstruction to generalization, particularly if discontinuities are encoded in neural network weights.&#13;
We present Wind Lifter, a novel neural representation designed to accurately model complex cuts in thin-walled deformable structures. Our approach constructs neural fields that reproduce discontinuities precisely at specified locations, without “baking in” the position of the cut line. To achieve this, we augment the input coordinates of the neural field with the generalized winding number of any given cut line, effectively lifting the input from two to three dimensions. Lifting allows the network to focus on the easier problem of learning a 3D everywhere-continuous volumetric field, while a corresponding restriction operator enables the final output field to precisely resolve strict discontinuities. Crucially, our approach does not embed the discontinuity in the neural network’s weights, opening avenues to generalization of cut placement.&#13;
Our method achieves real-time simulation speeds and supports dynamic updates to cut line geometry during the simulation. Moreover, the explicit representation of discontinuities makes our neural field intuitive to control and edit, offering a significant advantage over traditional neural fields, where discontinuities are embedded within the network’s weights, and enabling new applications that rely on general cut placement.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164740</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Making Concurrent Hardware Verification Sequential</title>
<link>https://hdl.handle.net/1721.1/164739</link>
<description>Making Concurrent Hardware Verification Sequential
Bourgeat, Thomas; Liu, Jiazheng; Chlipala, Adam; Arvind
Compared to familiar hardware-description languages like Verilog, rule-based languages like Bluespec offer opportunities to import modularity features from software programming. While Verilog modules are about connecting wires between submodules, Bluespec modules resemble objects in object-oriented programming, where interactions with a module occur only through calls to its methods. However, while software objects can typically be characterized one method at a time, the concurrent nature of hardware makes it essential to consider the repercussions of invoking multiple methods simultaneously. Prior formalizations of rule-based languages conceptualized modules by describing their semantics under arbitrary sets of simultaneous method calls. This internalized concurrency significantly complicates correctness proofs. Rather than analyzing methods one at a time, as is done when verifying software object methods, validating the correctness of rule-based modules necessitated simultaneous consideration of arbitrary subsets of method calls. The result was a number of proof cases that grew exponentially in the size of the module’s API.&#13;
&#13;
In this work, we side-step the exponential blowup through a set of judicious language restrictions. We introduce a new Bluespec-inspired formal language, Fjfj, that supports sequential characterization of modules while preserving the concurrent hardware nature of the language. We evaluated Fjfj by implementing it in Coq, proving the key framework principle: the refinement theorem. We demonstrated Fjfj’s expressivity via the implementation and verification of three examples: a pipelined processor, a parameterized crossbar, and a network switch.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164739</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness</title>
<link>https://hdl.handle.net/1721.1/164738</link>
<description>Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness
Lee, Dongjae; Lee, Janggun; Yoon, Taeyoung; Cho, Minki; Kang, Jeehoon; Hur, Chung-Kil
Concurrent separation logic (CSL) has excelled in verifying safety properties across various applications, yet its application to liveness properties remains limited. While existing approaches like TaDA Live and Fair Operational Semantics (FOS) have made significant strides, they still face limitations. TaDA Live struggles to verify certain classes of programs, particularly concurrent objects with non-local linearization points, and lacks support for general liveness properties such as "good things happen infinitely often". On the other hand, FOS’s scalability is hindered by the absence of thread modular reasoning principles and modular specifications.&#13;
&#13;
This paper introduces Lilo, a higher-order, relational CSL designed to overcome these limitations. Our core observation is that FOS helps us maintain simple primitives for our logic, which enable us to explore the design space with fewer restrictions. As a result, Lilo adapts various successful techniques from the literature. It supports reasoning about non-terminating programs through refinement proofs, and also provides Iris-style invariants and modular specifications to facilitate modular verification. To support higher-order reasoning without relying on step-indexing, we develop a technique called stratified propositions, inspired by Nola. In particular, we develop novel abstractions for liveness reasoning that bring these techniques together in a uniform way. We show Lilo’s scalability through case studies, including the first termination-guaranteeing modular verification of the elimination stack. Lilo and the examples in this paper are mechanized in Coq.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164738</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Lazy Knowledge Compilation for Inference in Discrete Probabilistic Programs</title>
<link>https://hdl.handle.net/1721.1/164737</link>
<description>Stochastic Lazy Knowledge Compilation for Inference in Discrete Probabilistic Programs
Bowers, Maddy; Lew, Alexander K.; Tenenbaum, Joshua B.; Solar-Lezama, Armando; Mansinghka, Vikash K.
We present new techniques for exact and approximate inference in discrete probabilistic programs, based on two new ways of exploiting lazy evaluation. First, we show how knowledge compilation, a state-of-the-art technique for exact inference in discrete probabilistic programs, can be made lazy, enabling asymptotic speed-ups. Second, we show how a probabilistic program’s lazy semantics naturally gives rise to a division of its random choices into subproblems, which can be solved in sequence by sequential Monte Carlo with locally optimal proposals automatically computed via lazy knowledge compilation. We implement our approach in a new tool, Pluck, and evaluate its performance against state-of-the-art approaches to inference in discrete probabilistic languages. We find that on a suite of inference benchmarks, lazy knowledge compilation can be faster than state-of-the-art approaches, sometimes by orders of magnitude.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164737</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Programming with Vectorized Programmable Inference</title>
<link>https://hdl.handle.net/1721.1/164736</link>
<description>Probabilistic Programming with Vectorized Programmable Inference
Becker, McCoy R.; Huot, Mathieu; Matheos, George; Wang, Xiaoyan; Chung, Karen; Smith, Colin; Ritchie, Sam; Saurous, Rif A.; Lew, Alexander K.; Rinard, Martin C.; Mansinghka, Vikash K.
We present GenJAX, a new language and compiler for vectorized programmable probabilistic inference. GenJAX integrates the vectorizing map (vmap) operation from array programming frameworks such as JAX into the programmable inference paradigm, enabling compositional vectorization of features such as probabilistic program traces, stochastic branching (for expressing mixture models), and programmable inference interfaces for writing custom probabilistic inference algorithms. We formalize vectorization as a source-to-source program transformation on a core calculus for probabilistic programming, and prove that it correctly vectorizes both modeling and inference operations. We have implemented our approach in the GenJAX language and compiler (https://github.com/probcomp/genjax), and have empirically evaluated this implementation on several benchmarks and case studies. Our results show that our implementation supports a wide and expressive set of programmable inference patterns and delivers performance comparable to hand-optimized JAX code.
</description>
<pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164736</guid>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</item>
<item>
<title>Waste-Efficient Work Stealing</title>
<link>https://hdl.handle.net/1721.1/164735</link>
<description>Waste-Efficient Work Stealing
Singer, Kyle; Agrawal, Kunal; Schardl, Tao B.
Although randomized work stealing is effective at automatically load-balancing task-parallel programs, it can waste computational resources when scheduling programs that lack sufficient parallelism to use all available threads. For such programs, threads will waste cycles attempting to steal parallel tasks when none are available. This waste can reduce the machine’s efficiency by wasting computational resources and energy and needlessly burdening the operating system.&#13;
This paper introduces WEWS, a simple, practical, and provably efficient extension to randomized work stealing that mitigates waste. WEWS dynamically adjusts the number of active threads to reduce the waste of randomized work stealing. WEWS executes a parallel computation with the same asymptotic running time as traditional randomized work stealing while bounding the waste to O(min{P·T∞, T₁ + P²}) instructions. WEWS also follows the work-first principle to perform well in practice.&#13;
WEWS requires no special support from the operating system or hardware, which simplifies its implementation. We implemented WEWS within the OpenCilk runtime system and compared it to other common waste-mitigation strategies. Across 10 parallel benchmarks, we find that WEWS has minimal impact on parallel running times while, on programs with limited parallelism, substantially reducing waste.
PPoPP ’26, Sydney, NSW, Australia
</description>
<pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164735</guid>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>UniTe: A Universal Tensor Abstraction for Capturing Spatial Relationships</title>
<link>https://hdl.handle.net/1721.1/164734</link>
<description>UniTe: A Universal Tensor Abstraction for Capturing Spatial Relationships
Ray, Jessica; Collin, Teodoro; Sze, Vivienne; Reuther, Albert; Amarasinghe, Saman
Tensors are an integral part of numerous domains, and while significant effort has been put into the design of tensor data structures in isolation, little attention has been paid to the relationships that exist across tensors and how these affect their representation and use. In this paper, we focus on spatial relationships across tensors in a program, where such tensors are defined relative to a common reference coordinate system. These relationships are complicated by the fact that the tensors may differ in their representations, such as having variations in their axes, spacings, origins, and overall shape. Due to the lack of existing abstractions and language support for these types of tensor semantics, users are currently forced to manually perform the bookkeeping necessary to account for these varying relationships and representations. Unfortunately, we cannot rely on a simple library to capture these relationships, as computations on these types of tensors often happen at the innermost levels of programs; we find that the overheads associated with an unoptimized implementation quickly accumulate, leading to performance up to nearly 65× slower than a reference C implementation on a series of image and video compression benchmarks. In this paper, we introduce the novel UniTe abstraction, which captures spatial relationships across all such tensors in a program. We also introduce two domain-specific languages and optimizing compilers, CoLa for Python and SHiM for C/C++, built on UniTe. Both CoLa and SHiM provide users an intuitive set of tensor primitives based on spatial relationships, hiding the complexity that goes into maintaining the tensors and computing accesses across them. In addition, we discuss the optimizations necessary to remove the associated abstraction overhead, and describe their implementations. On the benchmarks, we show that both CoLa and SHiM successfully remove these overheads, achieving performance parity with existing C implementations.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164734</guid>
</item>
<item>
<title>Triplet Exciton Sensitization of Silicon Mediated by Defect States in Hafnium Oxynitride</title>
<link>https://hdl.handle.net/1721.1/164733</link>
<description>Triplet Exciton Sensitization of Silicon Mediated by Defect States in Hafnium Oxynitride
Nagaya, Narumi; Alexiu, Alexandra; Perkinson, Collin F; Nix, Oliver M; Koh, Dooyong; Bawendi, Moungi G; Tisdale, William A; Van Voorhis, Troy; Baldo, Marc A
Singlet exciton fission has the potential to increase the efficiency of crystalline silicon solar cells beyond the conventional single junction limit. Perhaps the largest obstacle to achieving this enhancement is uncertainty about energy coupling mechanisms at the interfaces between silicon and exciton fission materials such as tetracene. Here, the previously reported silicon‐hafnium oxynitride‐tetracene structure is studied and a combination of magnetic‐field‐dependent silicon photoluminescence measurements and density functional theory calculations is used to probe the influence of the interlayer composition on the triplet transfer process across the hafnium oxynitride interlayer. It is found that hafnium oxide interlayers do not show triplet exciton sensitization of silicon, and that nitrogen content in hafnium oxynitride layers is correlated with enhanced sensitization. Calculation results reveal that defects in hafnium oxynitride interlayers with higher nitrogen content introduce states close to the band‐edge of silicon, which can mediate the triplet exciton transfer process. Some defects introduce additional deleterious mid‐gap states, which may explain observed silicon photoluminescence quenching. These results show that band‐edge states can mediate the triplet exciton transfer process, potentially through a sequential charge transfer mechanism.
</description>
<pubDate>Mon, 23 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164733</guid>
<dc:date>2024-12-23T00:00:00Z</dc:date>
</item>
<item>
<title>Layered Metal–Organic Chalcogenides: 2D Optoelectronics in 3D Self-Assembled Semiconductors</title>
<link>https://hdl.handle.net/1721.1/164732</link>
<description>Layered Metal–Organic Chalcogenides: 2D Optoelectronics in 3D Self-Assembled Semiconductors
Paritmongkol, Watcharaphol; Feng, Zhifu; Refaely-Abramson, Sivan; Tisdale, William A; Kastl, Christoph; Maserati, Lorenzo
Molecular self-assembly offers an effective and scalable way to design nanostructured materials with tunable optoelectronic properties. In the past 30 years, organic chemistry has delivered a plethora of metal-organic structures based on the combination of organic groups, chalcogens, and a broad range of metals. Among these, several layered metal-organic chalcogenides (MOCs), including "mithrene" (AgSePh), recently emerged as interesting platforms to host 2D physics embedded in 3D crystals. Their combination of broad tunability, easy processability, and promising optoelectronic performance is driving a renewed interest in the more general material group of "low-dimensional" hybrids. In addition, the covalent MOC lattice provides higher stability compared with polar materials in operating devices. Here, we provide a perspective on the rise of 2D MOCs in terms of their synthesis approaches, 2D quantum confined exciton physics, and potential future applications in UV and X-ray photodetection, chemical sensors, and electrocatalysis.
</description>
<pubDate>Wed, 26 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164732</guid>
<dc:date>2025-03-26T00:00:00Z</dc:date>
</item>
<item>
<title>Exciton fission enhanced silicon solar cell</title>
<link>https://hdl.handle.net/1721.1/164731</link>
<description>Exciton fission enhanced silicon solar cell
Nagaya, Narumi; Lee, Kangmin; Perkinson, Collin F; Li, Aaron; Lee, Youri; Zhong, Xinjue; Lee, Sujin; Weisburn, Leah P; Wang, Janet Z; Baikie, Tomi K; Bawendi, Moungi G; Van Voorhis, Troy; Tisdale, William A; Kahn, Antoine; Seo, Kwanyong; Baldo, Marc A
While silicon solar cells dominate global photovoltaic energy production, their continued improvement is hindered by the single-junction limit. One potential solution is to use molecular singlet exciton fission to generate two electrons from each absorbed high-energy photon. We demonstrate that the long-standing challenge of coupling molecular excited states to silicon solar cells can be overcome using sequential charge transfer. Combining zinc phthalocyanine, aluminum oxide, and a shallow junction crystalline silicon microwire solar cell, the peak charge generation efficiency per photon absorbed in tetracene is 138% ± 6%, comfortably surpassing the quantum efficiency limit for conventional silicon solar cells and establishing a new, scalable approach to low-cost, high-efficiency photovoltaics.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164731</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>1D Silver Organochalcogenide Semiconductors: Color Tunable Luminescence, Polarized Emission, and Long-Range Exciton Diffusion</title>
<link>https://hdl.handle.net/1721.1/164730</link>
<description>1D Silver Organochalcogenide Semiconductors: Color Tunable Luminescence, Polarized Emission, and Long-Range Exciton Diffusion
Sakurada, Tomoaki; Pathoor, Nithin; Matsumoto, Takuma; Khamlue, Rattapon; Chatsiri, Petcharaphorn; Valenta, Jan; Kawamoto, Tadashi; Omagari, Shun; Tisdale, William A; Paritmongkol, Watcharaphol; Cho, Yeongsu; Vacha, Martin
Metal organochalcogenides (MOCs) represent a promising class of organic-inorganic hybrid semiconductors with unique light-matter interactions. Their hybrid nature enables extensive structural and optoelectronic tunability via ligand engineering. In this study, we systematically modulated the electronic properties of ligands using Cl and Me functional groups, achieving precise control over the optoelectronic properties of Ag-based MOCs. Structural analysis revealed that these MOCs adopt a one-dimensional (1D) chain structure with organic ligands surrounding a Ag-chalcogen core. Density functional theory (DFT) calculations demonstrated that MOCs exhibit characteristics of 1D semiconductors with strongly dispersive conduction and valence bands aligned along the crystal rod directions. Experimentally, the MOCs displayed bright luminescence, with peaks centered between 560 and 690 nm. The substitution of Cl with Me groups in the benzene ligands induced a red shift in both absorption and photoluminescence, corroborated by experimental and theoretical analyses. Further optical measurements indicated that the emission from the MOCs is strongly polarized along the chain directions. Notably, Se-based MOCs exhibited enhanced exciton diffusivity along the chain axis with a diffusion length of 130 nm, which is among the highest reported for covalent systems. The observed trend in carrier diffusivity among individual compounds is attributed to differences in the effective masses of the carriers, as determined by DFT calculations. Our findings offer valuable insights into the systematic structural and property tuning of hybrid semiconductors and highlight the unique characteristics of the 1D MOC family.
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164730</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Excitonic Anisotropy in Single‐Crystalline 2D Silver Phenylchalcogenides</title>
<link>https://hdl.handle.net/1721.1/164729</link>
<description>Excitonic Anisotropy in Single‐Crystalline 2D Silver Phenylchalcogenides
Lee, Woo Seok; Cho, Yeongsu; Posmyk, Katarzyna; Peksa, Paulina; Dyksik, Mateusz; Samulewicz, Nicholas; Plochocka, Paulina; Baranowski, Michał; Kulik, Heather J; Tisdale, William A
2D materials exhibiting in‐plane anisotropy enable new applications in directional energy transport and polarized optical response. Silver phenylchalcogenides (AgEPh) – including mithrene (AgSePh), tethrene (AgTePh), and thiorene (AgSPh) – represent an exciting new addition to this family, with optical response spanning the visible to near‐UV. Here, excitonic anisotropy is predicted and characterized in this family of materials using a combination of ab initio theory and optical micro‐spectroscopy of single‐crystalline flakes. Using density functional theory and GW with the Bethe–Salpeter equation calculations, it is revealed that all AgEPh compounds exhibit anisotropic electronic band structure and host multiple delocalized excitons with in‐plane anisotropy. Room‐temperature polarization‐resolved optical micro‐spectroscopy shows that orthogonally polarized excitons with similar energy lead to nearly isotropic absorption in AgSPh, whereas energy separation between excitonic resonances in AgSePh and AgTePh leads to strong absorption and emission anisotropy. Cryogenic reflectance micro‐spectroscopy further reveals exciton fine structure in AgSePh, reconciling the discrepancies between room‐temperature experiments and theoretical predictions. Finally, it is demonstrated that the optical response of thicker AgEPh crystals is influenced by photonic effects arising from finite crystal size. Overall, this work advances the understanding of the relationship between anisotropic structure, composition, and excitonic properties in AgEPh, providing a foundation for technological integration.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164729</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Revolutionize cold chain: an AI/ML driven approach to overcome capacity shortages</title>
<link>https://hdl.handle.net/1721.1/164728</link>
<description>Revolutionize cold chain: an AI/ML driven approach to overcome capacity shortages
Jackson, Ilya; Namdar, Jafar; Saénz, Maria Jesús; Elmquist III, Richard Augustus; Dávila Novoa, Luis Rodrigo
This research investigates how Artificial Intelligence (AI) and Machine Learning (ML) forecasting methodologies can be leveraged for cold chain capacity planning, specifically utilising Prophet and Seasonal Autoregressive Integrated Moving Average (SARIMA) parametrised through grid search. In collaboration with Americold, the world's second-largest refrigerated logistics service provider, the study explores the challenges and opportunities in applying AI/ML techniques to complex operations covering 385 customers and a capacity of 73,296 pallet positions. We train and test several AI/ML and traditional statistical models using extensive data for every customer over 3.5 years. Based on the results, a MAPE of 5.28% was achieved at the whole-site level, and SARIMA outperformed ML models in most cases. Next, we show that developing and applying a Customer Segmentation Matrix has enabled more accurate forecasting and planning across various customer segments, addressing the issue of forecasting inaccuracies. This approach effectively reduces forecasting inaccuracies, underscoring the significance of tailoring AI/ML models for demand forecasting within the cold-chain industry. Ultimately, this research presents an AI-driven approach that transcends mere forecasting, offering a practical pathway to manage capacity in light of the constraints.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164728</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond binary group categorization: towards a dynamic view of human groups</title>
<link>https://hdl.handle.net/1721.1/164727</link>
<description>Beyond binary group categorization: towards a dynamic view of human groups
Kish Bar-On, Kati
Society is a composite of interacting people and groups. These groups play a significant role in maintaining social status, establishing group identity and social identity, and enforcing norms. As such, groups are essential for understanding human behavior. Nevertheless, the study of groups in everyday group life yields many diverse and sometimes contradictory theories of group behavior, and researchers tend to agree that we have yet to understand the emergence of groups out of aggregates of individuals. The current paper aims to shed new light on the convoluted interrelation between groups and individuals by focusing on individuals’ social identities and group categorization. It does so by exploring the dynamic nature of the self and its implications for identity and group membership, and by introducing a framework recognizing the fluidity of groups and group categorization. Incorporating historical insights with contemporary theories, this paper argues for a flexible understanding of group dynamics that surpasses rigid in-group and out-group classifications, proposing instead that group affiliations exist along a continuum that reflects the ever-changing social landscape.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164727</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reparative Urban Science: Challenging the Myth of Neutrality and Crafting Data-Driven Narratives</title>
<link>https://hdl.handle.net/1721.1/164726</link>
<description>Reparative Urban Science: Challenging the Myth of Neutrality and Crafting Data-Driven Narratives
So, Wonyoung
I offer a perspective on how urban planning should approach technology within the context of systemic racism, advocating for a reparative approach to address the issues of urban technology perpetuating today’s racial inequality and hindering efforts to redress historical oppression. I identify three mechanisms – formalization, context removal and legitimization, and penalization and extraction – that illustrate how urban technology perpetuates historical inequalities, often penalizing marginalized groups under the pretext of neutrality and fairness. Then, I discuss methodologies of reparative urban science, aiming to use urban technology to challenge race-neutral ideologies and create data-driven narratives for reparations.
</description>
<pubDate>Sun, 26 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164726</guid>
<dc:date>2024-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>What determines EV architecture? An analysis of the most influential battery electric vehicle design decisions from market data</title>
<link>https://hdl.handle.net/1721.1/164725</link>
<description>What determines EV architecture? An analysis of the most influential battery electric vehicle design decisions from market data
Khan, Mumin; Cameron, Bruce
The penetration and variety of Battery Electric Vehicles (BEVs) in the automotive sector have been growing rapidly. While there is substantial research on hybrid ICE-battery vs. battery-only choices, little work has examined whether a dominant design for BEVs is emerging, as predicted by the innovation literature. This study provides a comprehensive exploration of BEV architectures, examining the influence of individual architectural decisions on vehicle performance and market prevalence. This study utilizes multivariate linear regression to analyze a curated dataset of global BEV models from 2022 and 2023, focusing on candidate architectural decisions such as battery cathode composition, battery voltage choice, number of motors, and drive layout. Our research aims to identify potential dominant designs by assessing their impact on performance metrics. The analysis then leverages statistical tools to evaluate the correlation between these architectural decisions and vehicle performance, using range as a primary indicator of consumer appeal. Findings from this research indicate significant variance in the adoption of specific BEV architectures, suggesting that the market has not yet consolidated down to a dominant design. We observe, however, that range is most strongly influenced by the architectural decisions for battery capacity, drive type, and motor type.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164725</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Chemical and Chemical-Mechanical Polishing of Surface Roughness on L-PBF/GRCop-42 Cu-Cr-Nb Additive Manufactured 10-GHz RF Structures</title>
<link>https://hdl.handle.net/1721.1/164724</link>
<description>Chemical and Chemical-Mechanical Polishing of Surface Roughness on L-PBF/GRCop-42 Cu-Cr-Nb Additive Manufactured 10-GHz RF Structures
Seltzman, AH; Wukitch, SJ
Laser-based powder bed fusion (L-PBF) allows additive manufacture (AM) of lower hybrid current drive (LHCD) radio-frequency (RF) launchers from Glenn Research Copper, a Cr2Nb precipitation-hardened alloy (GRCop-42), in configurations unachievable with conventional machining. Rough surfaces in AM components increase RF losses and lead to arcing in high-power vacuum RF applications. Chemical polishing, chemical-mechanical polishing, or a combination of both was utilized to planarize the internal surfaces of RF structures, resulting in surface roughness as low as Ra = 0.2 µm. Refinement in polishing techniques now enables GRCop-42 alloys (4 at. % Cr, 2 at. % Nb) to achieve similar surface roughness to GRCop-84 (8 at. % Cr, 4 at. % Nb) and equivalent cavity losses to extruded oxygen-free copper waveguides at 10 GHz.
</description>
<pubDate>Tue, 30 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164724</guid>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>The Name of Moses in an Egyptian Context—A Hypothetical Etymology</title>
<link>https://hdl.handle.net/1721.1/164723</link>
<description>The Name of Moses in an Egyptian Context—A Hypothetical Etymology
Adair, Aaron
The etymological origins of the name “Moses” have been unclear, but an Egyptian candidate is the most likely hypothesis. In this article, a new proposal is given that identifies the best candidate as the Demotic word mšꜥ, though only in the late Persian period or later would it fit the Hebrew Mōše. Evidence from Greek orthography and testimony from Manetho provide a stronger basis for this proposal over prior candidates. However, this results in a Hellenistic-era inclusion of “Moses” into the Exodus narrative.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164723</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Making Sense of Models: Connecting Science and Math Through Decoding and Modifying Computational Models</title>
<link>https://hdl.handle.net/1721.1/164722</link>
<description>Making Sense of Models: Connecting Science and Math Through Decoding and Modifying Computational Models
Lee, Irene A.; Sagartz, Mary; Meyer, Patricia; Anderson, Emma
The Making Sense of Models (MSM) curriculum was designed to bridge math and science learning through agent-based modeling and rich computational thinking investigations that do not require teaching computer programming in middle school classrooms. The MSM curriculum supports students in the NGSS skill of reasoning about how and why a phenomenon happens. After developing decoding skills, students are able to assess the validity of a model based on comparing mechanisms in the model to what they learned about the phenomenon being modeled. In this article, the authors describe the decoding approach and how the MSM curriculum supports students’ ability to reason about scientific models and the real world.
</description>
<pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164722</guid>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal analog computing: Application to matrix-vector multiplication with inverse-designed metastructures</title>
<link>https://hdl.handle.net/1721.1/164719</link>
<description>Thermal analog computing: Application to matrix-vector multiplication with inverse-designed metastructures
Silva, Caio; Romano, Giuseppe
The rising computational demand of modern workloads has renewed interest in energy-efficient paradigms, such as neuromorphic and analog computing. A fundamental operation in these systems is matrix-vector multiplication (MVM), ubiquitous in signal processing and machine learning. Here, we demonstrate MVM using inverse-designed metastructures that exploit heat conduction as the signal carrier. The proposed approach is based on a generalization of effective thermal conductivity to systems with multiple input and output ports: The input signal is encoded as a set of applied temperatures, while the output is represented by the power collected at designated terminals. The metastructures are obtained via density-based topology optimization, enabled by a differentiable thermal transport solver and automatic differentiation, achieving an accuracy greater than 99% in most cases across a pool of matrices with dimensions 2 × 2 and 3 × 3. We apply this methodology—termed thermal analog computing—to realize matrices relevant to practical tasks, including the discrete Fourier transform and convolutional filters. These findings open avenues for analog information processing in thermally active environments, including temperature-gradient sensing in microelectronics and thermal control systems.
</description>
<pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164719</guid>
<dc:date>2026-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>When competition becomes contagious: Strategic arms racing spillovers, alliance politics, and the Sino-American nuclear competition</title>
<link>https://hdl.handle.net/1721.1/164718</link>
<description>When competition becomes contagious: Strategic arms racing spillovers, alliance politics, and the Sino-American nuclear competition
Seitz, Samuel M.; Ji, Elliot S.
The development of new conventional counterforce systems and improved missile defence systems enables non-nuclear states to directly influence the strategic nuclear balance. These dynamics increase the possibility of strategic arms racing spillovers, where arms racing in one dyad yields capabilities that threaten third parties’ arsenals and thus creates a type of security dilemma. It also increases the risk of non-nuclear allies entrapping their nuclear patrons in strategic arms racing. We illustrate this argument via the case of North and South Korea’s arms racing.
</description>
<pubDate>Sun, 10 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164718</guid>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Building communities of critical inquiry in the language classroom</title>
<link>https://hdl.handle.net/1721.1/164717</link>
<description>Building communities of critical inquiry in the language classroom
Dessein, Eva; Ledford, Julian A
Addressing issues of power, difference, and social stratification is essential in language education, where systemic inequities shape classroom experiences. This study examines the design and impact of three targeted modules implemented in beginner and intermediate French courses at two U.S. institutions. Grounded in critical pedagogical principles, the modules focused on language and power, inclusive language practices, and cultural and intercultural awareness. They aimed to foster critical inquiry through individual reflection and engagement with socially relevant topics. Analysis of student reflections and survey responses indicates that the modules supported learners in critically examining how language reinforces or challenges inequities, particularly in relation to gender biases and colonial legacies. Students reported increased awareness of linguistic hierarchies, a stronger sense of agency, and deeper reflection on language’s sociopolitical dimensions. The modules also encouraged engagement with inclusive language and cultural diversity. While the interventions promoted critical awareness and personal growth, findings point to limited peer interaction and community-building. This suggests a need for more structured opportunities for dialogic learning. Overall, the study highlights the transformative potential of critical pedagogy in language education and the importance of designing inclusive curricula that prepare students to reflect on and challenge systemic inequities.
</description>
<pubDate>Thu, 02 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164717</guid>
<dc:date>2025-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Managing technology-related disruptions and vulnerabilities in highly automated warehouse systems: an integrative review and research agenda</title>
<link>https://hdl.handle.net/1721.1/164716</link>
<description>Managing technology-related disruptions and vulnerabilities in highly automated warehouse systems: an integrative review and research agenda
Rodríguez-García, Miguel; Kembro, Joakim Hans; Betts, Kellen; Ponce-Cueto, Eva
Recent technological developments in warehousing have introduced new risks. This paper presents an integrative review that combines insights from highly automated warehouse systems (HAWS) and risk management, providing a comprehensive understanding of technology-related warehouse disruptions and vulnerabilities. We identify five major disruptions that can affect HAWS: cyberattacks, technology sabotage, technology failures, power and network outages, and human-machine interaction issues. Moreover, we identify 48 technology-related vulnerabilities across all disruptions. In particular, HAWS have become vulnerable to cyberattacks due to the increasing number of warehouse technology suppliers, greater complexity of multi-robot networks such as AMRs, reliance on cloud-based systems, and the cascading effect of cyberattacks due to higher levels of interconnectivity in HAWS networks. Our review also shows that risk management strategies in HAWS are unevenly covered in the literature. In response, we propose a research agenda with 17 pathways aimed at enhancing prevention, detection, mitigation, and recovery strategies for HAWS. Managers also benefit from the identified disruptions and vulnerabilities, as they serve as a reference point for understanding their specific technology-related risks in HAWS. In addition, managers can use our review of current risk management practices as a benchmark and our research agenda to identify areas they could develop further.
</description>
<pubDate>Fri, 05 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164716</guid>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary Investigation of Gamma Radiation on the Chemical and Physical Characteristics of an Organic Coolant</title>
<link>https://hdl.handle.net/1721.1/164715</link>
<description>Preliminary Investigation of Gamma Radiation on the Chemical and Physical Characteristics of an Organic Coolant
Vasquez, Angel; Seshadri, Arunkumar; Shirvan, Koroush; Buongiorno, Jacopo
Organic-cooled reactor concepts offer potential advantages over traditional light water reactors, including operation at elevated temperatures and reduced pressures. However, radiation-induced degradation of organic coolants remains a critical concern requiring thorough investigation. This study examines the effects of gamma irradiation (1-MGy dose) on Dowtherm A (27% biphenyl, 73% diphenyl ether) under varying atmospheric conditions (ambient air versus argon) and temperatures (room temperature versus 250°C). Chemical characterization using Fourier transform infrared spectroscopy, ultraviolet-visible spectroscopy (UV-Vis), and gas chromatography-mass spectrometry revealed the formation of higher molecular weight byproducts, including terphenyls and quaterphenyls, along with notable biphenyl degradation. Physical property measurements using differential scanning calorimetry, rheometry, and thermal conductivity analysis demonstrated significant changes in the thermophysical properties, including decreased heat capacity and viscosity, with increased thermal conductivity observed under argon irradiation conditions. Pronounced photodarkening occurred in all the irradiated samples, with atmospheric conditions significantly influencing degradation pathways. UV-Vis analysis indicated that oxygen presence during irradiation suppresses certain chromophoric species formation. These findings provide crucial insights into radiation-induced degradation mechanisms and their impact on coolant performance, informing future organic coolant system design and optimization strategies for advanced reactor applications.
</description>
<pubDate>Fri, 05 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164715</guid>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Housing data politics in the United States: Inequitable open data, informal networks, and strategic neutrality</title>
<link>https://hdl.handle.net/1721.1/164714</link>
<description>Housing data politics in the United States: Inequitable open data, informal networks, and strategic neutrality
Aizman, Asya; So, Wonyoung; Navalkha, Chenab; D’Ignazio, Catherine
Open housing data—property transactions, eviction filings, 311 complaints, and rental registries—have been a crucial resource for policymaking and real estate professionals. Meanwhile, housing data actors increasingly collect, analyze, and use data to address housing inequality, including efforts related to eviction prevention and land use reform, among others. This paper examines the motivations and practices of grassroots and institutional housing data actors. From a field scan of 67 entities engaged in housing data work across 12 U.S. states and 18 municipalities, we conducted 18 in-depth interviews to explore how housing data actors operate, their political goals, and data processes. We put forward a two‑axis framework that positions housing data actors according to their organizational structure (institutional/grassroots) and their stated data ideology (neutral/political). This framework contributes to understanding how different actors navigate complex issues such as embedded power dynamics and ethics in housing data. This two-axis view supplies a vocabulary for tracing how normative commitments and material constraints shape housing data pipelines and, ultimately, housing outcomes across the broader housing information ecosystem.
</description>
<pubDate>Thu, 11 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164714</guid>
<dc:date>2025-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Unconstrained Sovereignty: Delegation of Authority and Reversibility</title>
<link>https://hdl.handle.net/1721.1/164713</link>
<description>Unconstrained Sovereignty: Delegation of Authority and Reversibility
Grinberg, Mariya
The concept of sovereignty shapes our understanding of the world. Yet our current understanding of sovereignty conflates delegation of authority with loss of sovereignty. Delegation is relatively cheap, quick, and leads to an assured outcome; it is an affirmation of sovereignty. Use of force, however, is required to regain lost sovereignty. I propose a definition of sovereignty that draws a clear distinction between sovereignty and delegated authority. Adopting this definition shows that sovereignty applies across time and space, that it is indivisible, that institutions do not place permanent constraints on supreme authority, and that popular sovereignty is not a well-grounded concept.
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164713</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Concurrent Balanced Augmented Trees</title>
<link>https://hdl.handle.net/1721.1/164712</link>
<description>Concurrent Balanced Augmented Trees
Wrench, Evan; Singh, Ajay; Roh, Younghun; Fatourou, Panagiota; Jayanti, Siddhartha; Ruppert, Eric; Wei, Yuanhao
Augmentation makes search trees tremendously more versatile, allowing them to support efficient aggregation queries, order-statistic queries, and range queries in addition to insertion, deletion, and lookup. In this paper, we present the first lock-free augmented balanced search tree supporting generic augmentation functions. Our algorithmic ideas build upon a recent augmented unbalanced search tree presented by Fatourou and Ruppert [DISC, 2024]. We implement both data structures, solving some memory reclamation challenges in the process, and provide an experimental performance analysis of them. We also present optimized versions of our balanced tree that use delegation to achieve better scalability and performance (by more than 2x in most workloads). Our experiments show that our augmented balanced tree completes updates 2.2 to 30 times faster than the unbalanced augmented tree, and outperforms unaugmented trees by up to several orders of magnitude on 120 threads.
PPoPP ’26, Sydney, NSW, Australia
</description>
<pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164712</guid>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Building Intelligent Agents with Neuro-Symbolic Concepts</title>
<link>https://hdl.handle.net/1721.1/164711</link>
<description>Building Intelligent Agents with Neuro-Symbolic Concepts
Mao, Jiayuan; Tenenbaum, Joshua; Wu, Jiajun
This article presents a concept-centric paradigm for building agents that can learn continually and reason flexibly. The concept-centric agent utilizes a vocabulary of neuro-symbolic concepts. These concepts, such as object, relation, and action concepts, are grounded on sensory inputs and actuation outputs. They are also compositional, allowing for the creation of novel concepts through their structural combination. To facilitate learning and reasoning, the concepts are typed and represented using a combination of symbolic programs and neural network representations. Leveraging such neuro-symbolic concepts, the agent can efficiently learn and recombine them to solve various tasks across different domains, ranging from 2D images and videos to 3D scenes and robotic manipulation tasks. This concept-centric framework offers several advantages, including data efficiency, compositional generalization, continual learning, and zero-shot transfer.
</description>
<pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164711</guid>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Foundational Verification of Running-Time Bounds for Interactive Programs</title>
<link>https://hdl.handle.net/1721.1/164710</link>
<description>Foundational Verification of Running-Time Bounds for Interactive Programs
Tockman, Andy; Singh, Pratap; Erbsen, Andres; Gruetter, Samuel; Chlipala, Adam
Some important domains of software demand concrete bounds on how long functions may run, for instance for real-time cyberphysical systems where missed deadlines may damage industrial machinery. Such programs may interact with external devices throughout execution, where time deadlines ought to depend on, for instance, sensor readings (e.g. we only scramble to close a valve immediately when a sensor reports that a tank is about to overflow). We present the first software-development toolchain that delivers first-principles proofs of meaningful time bounds for interactive machine code, while allowing all per-application programming and verification to happen at the source-code level. We allow C-like programs to be proved against separation-logic specifications that also constrain their running time, and such proofs are composed with verification of a compiler to RISC-V machine code. All components are implemented and proved inside the Rocq proof assistant, producing final theorems whose statements depend only on machine-language formal semantics and some elementary specification constructions for describing running time. As a capstone case study, we extended a past verification (of a real microcontroller-based cyberphysical system) to bound time between arrival of network packets and actuation of an attached device.
CPP ’26, Rennes, France
</description>
<pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164710</guid>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</item>
<item>
<title>Network-RBV for Critical Minerals: How Standards, Permits, and Licensing Shape Midstream Bottlenecks</title>
<link>https://hdl.handle.net/1721.1/164709</link>
<description>Network-RBV for Critical Minerals: How Standards, Permits, and Licensing Shape Midstream Bottlenecks
Kegenbekov, Zhandos; Alipova, Alima; Jackson, Ilya
Critical mineral supply chains underpin electric mobility, power electronics, clean hydrogen, and advanced manufacturing. Drawing on the resource-based view (RBV), the relational view, and dynamic capabilities, we conceptualize advantage not as ownership of ore bodies but as orchestration of multi-tier resource systems: upstream access, midstream processing know-how, standards and permits, and durable inter-organizational ties. In a world of high concentration at key stages (refining, separation, engineered materials), full “decoupling” is economically costly and technologically constraining. We argue for structured cooperation among the United States, European Union, China, and other producers and consumers, combined with selective domestic capability building for bona fide security needs. Methodologically, we conduct a structured conceptual synthesis integrating RBV, relational view, dynamic capabilities, and network-of-network research, combined with a structured comparative policy analysis of U.S./EU/Chinese instruments anchored in official documents. We operationalize the argument via technology–material dependency maps that identify midstream bottlenecks and the policy/standard levers most likely to expand qualified, compliant capacity.
</description>
<pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164709</guid>
<dc:date>2026-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>REEV SENSE IMUs for Spatiotemporal Gait Analysis in Post-Stroke Patients: Validation Against Optical Motion Capture</title>
<link>https://hdl.handle.net/1721.1/164708</link>
<description>REEV SENSE IMUs for Spatiotemporal Gait Analysis in Post-Stroke Patients: Validation Against Optical Motion Capture
Marsan, Thibault; Clauzade, Sacha; Zhang, Xiang; Grandin, Nicolas; Urman, Tatiana; Linton, Evan; Sibachir, Samy; Ricciardi, Catherine E.; Temporelli, Robin
Objective gait assessment is essential for post-stroke rehabilitation monitoring, yet optical motion capture systems remain inaccessible to most clinical settings due to cost and infrastructure constraints. This study assessed the validity of the REEV SENSE IMU for measuring spatiotemporal gait parameters in post-stroke individuals and evaluated assistive device effects on measurement accuracy. Twenty chronic post-stroke participants were enrolled, and fourteen completed the study (ten without an assistive device, four using a cane) after applying pre-defined exclusion criteria (walking speed &lt;0.28 m/s, n = 6). Participants walked at self-selected speed while simultaneously being recorded by REEV SENSE IMUs and optical motion capture. Spatiotemporal parameters from matched heel strikes were compared using intraclass correlation coefficients (ICC), mean relative error (MRE), and Bland–Altman analysis. Temporal parameters demonstrated excellent reliability: contact time (ICC 0.96–0.99, MRE 2.77–5.45%), stride duration (ICC 0.95–0.99, MRE 2.57–2.62%), and cadence (ICC 0.98–0.99, MRE 1.80–1.93%). Spatial parameters showed greater variability, with stride length degrading substantially in slow-walking conditions (Cane group: ICC 0.76, MRE 8.60%). REEV SENSE provides reliable temporal parameter measurement comparable to commercial systems, positioning it as a practical tool for clinical gait monitoring in post-stroke rehabilitation. However, spatial parameter accuracy requires cautious interpretation in slow-walking regimes, necessitating independent validation when clinical decisions depend on precise stride length estimates.
</description>
<pubDate>Sun, 18 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164708</guid>
<dc:date>2026-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Who Am I? Eyebrow Follicles Minimize Donor-Derived DNA for Germline Testing After Hematopoietic Stem Cell Transplantation</title>
<link>https://hdl.handle.net/1721.1/164707</link>
<description>Who Am I? Eyebrow Follicles Minimize Donor-Derived DNA for Germline Testing After Hematopoietic Stem Cell Transplantation
Mertens, Matthias; Sadlo, Mona; Kühl, Jörn-Sven; Metzeler, Klaus; Zschenderlein, Louisa; Edelmann, Jeanett; Lehmann, Claudia; Thull, Sarah; Karakaya, Mert; Velmans, Clara; Tumewu, Theresa; Böhme, Matthias; Klötzer, Christina; Weigert, Anne; Vucinic, Vladan; Hentschel, Julia; Mertens, Mareike
Germline genetic testing plays a critical role in diagnosing inherited predispositions and increasingly guides therapeutic and surveillance choices—but becomes technically challenging after allogeneic hematopoietic stem cell transplantation (HSCT), when donor-derived DNA contaminates host tissues. To address this, we compared donor-derived DNA across three accessible tissues—buccal swab, nail, and eyebrow follicles—in recipients after hematopoietic stem cell transplantation using two orthogonal assays (34-SNP next-generation sequencing and a 27-marker short tandem repeat panel) and modeled clinical covariates that influence chimerism. Eyebrow follicles showed consistently low donor DNA (median 1% by NGS; 3% by STR) whereas buccal swabs and nails carried substantially higher donor fractions (+25 and +22 percentage points versus eyebrow, respectively; both p &lt; 0.01). Across methods, STR yielded on average ≈6 percentage points higher donor fractions than NGS at low-level chimerism. Several transplant covariates correlated with chimerism: matched-related donors and a perfect HLA match (10/10) were each associated with lower donor DNA (≈12–14 and 15–20 percentage points, respectively); longer times since hematopoietic stem cell transplantation correlated with lower levels for nail samples, and donor–recipient sex match correlated with higher donor DNA (~7–8 percentage points). Even low-level chimerism can distort germline variant interpretation. We propose a pragmatic protocol for post-hematopoietic stem cell transplantation germline testing that prioritizes eyebrow follicles as the default tissue. An SNP-based quality control assay is used to flag unsafe donor fractions (≥ 5–10%) before comprehensive germline analysis, reducing the risk that chimeric donor DNA distorts germline variant interpretation.
</description>
<pubDate>Sun, 11 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164707</guid>
<dc:date>2026-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Circuits Are Just a Phase</title>
<link>https://hdl.handle.net/1721.1/164694</link>
<description>Quantum Circuits Are Just a Phase
Heunen, Chris; Lemonnier, Louis; McNally, Christopher; Rice, Alex
Quantum programs today are written at a low level of abstraction---quantum circuits akin to assembly languages---and the unitary parts of even advanced quantum programming languages essentially function as circuit description languages. This state of affairs impedes scalability, clarity, and support for higher-level reasoning. More abstract and expressive quantum programming constructs are needed.&#13;
&#13;
To this end, we introduce a simple syntax for generating unitaries from "just a phase"; we combine a (global) phase operation that captures phase shifts with a quantum analogue of the "if let" construct that captures subspace selection via pattern matching. This minimal language lifts the focus from gates to eigendecomposition, conjugation, and controlled unitaries: common building blocks in quantum algorithm design.&#13;
&#13;
We demonstrate the expressive power of our language in several ways. Firstly, we establish that our representation is universal by deriving a universal quantum gate set. Secondly, we show that important quantum algorithms can be expressed naturally and concisely, including Grover's search algorithm, Hamiltonian simulation, the Quantum Fourier Transform, Quantum Signal Processing, and the Quantum Eigenvalue Transformation. Furthermore, we give clean denotational semantics grounded in categorical quantum mechanics. Finally, we implement a prototype compiler that efficiently translates terms of our language to quantum circuits, and prove that it is sound with respect to these semantics. Collectively, these contributions show that this construct offers a principled and practical step toward more abstract and structured quantum programming.
</description>
<pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164694</guid>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</item>
<item>
<title>A Digital Engineering Framework for Piston Pin Bearings via Multi-Physics Thermo-Elasto-Hydrodynamic Modeling</title>
<link>https://hdl.handle.net/1721.1/164693</link>
<description>A Digital Engineering Framework for Piston Pin Bearings via Multi-Physics Thermo-Elasto-Hydrodynamic Modeling
Shu, Zhiyuan; Tian, Tian
The piston pin operates under severe mechanical and thermal conditions, making accurate lubrication prediction essential for engine durability. This study presents a comprehensive digital engineering framework for piston pin bearings, built upon a fully coupled thermo-elasto-hydrodynamic (TEHD) formulation. The framework integrates: (1) a Reynolds-equation hydrodynamic solver with temperature-/pressure-dependent viscosity and cavitation; (2) elastic deformation obtained from finite element analysis (FEA)-based compliance matrices; (3) a break-in module that iteratively adjusts surface profiles before steady-state simulation; (4) a three-body heat transfer model resolving heat conduction, convection, and solid–liquid interfacial heat exchange. Applied to a heavy-duty diesel engine, the framework reproduces experimentally observed behaviors, including bottom-edge rounding at the small end and the slow unidirectional drift of the floating pin. By integrating multi-physics modeling with design-level flexibility, this work aims to provide a robust digital twin for the piston-pin system, enabling virtual diagnostics, early-stage failure prediction, and data-driven design optimization for engine development.
</description>
<pubDate>Sat, 10 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164693</guid>
<dc:date>2026-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Wafold: Curvature-Driven Termination and Dimensional Compression in Black Holes</title>
<link>https://hdl.handle.net/1721.1/164692</link>
<description>The Wafold: Curvature-Driven Termination and Dimensional Compression in Black Holes
Viaña, Javier
This work explores a geometric description of black holes in which spacetime terminates on a curvature-triggered hypersurface rather than extending to an interior singularity. We study the implications of a scenario in which, upon reaching a critical curvature threshold, the three-dimensional spatial geometry compresses into a thin, closed boundary identified here as the wafold. Beyond this, the manifold would no longer continue, and all mass–energy and information would be confined to the hypersurface itself. This framework combines two well-explored paths: (1) curvature-driven geometric compression, in which extreme curvature forces the bulk degrees of freedom to become supported on a thin hypersurface (without altering the underlying dimensionality of spacetime), and (2) the motivation underlying the holographic principle, namely that black-hole entropy scales with surface area rather than volume, suggesting that information is governed by a boundary geometry rather than a bulk volume. We elaborate a dimensional conversion law that would be required to describe the collapse of spatial volume into surface area as a conserved flux of geometric capacity across the wafold, and we analyze the resulting consequences of treating this hypersurface as the terminal boundary of the manifold.
</description>
<pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164692</guid>
<dc:date>2025-12-23T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Implementation of an Automated Thermal Imaging Device for Lower Limb Prosthetic Applications</title>
<link>https://hdl.handle.net/1721.1/164691</link>
<description>Design and Implementation of an Automated Thermal Imaging Device for Lower Limb Prosthetic Applications
Pizarro, Daniel; Huegel, Joel C.; Diaz, Elias; Alemon, Beatriz; Herr, Hugh; Felix-Herran, Luis C.
Since elevated temperature and humidity may occur at the prosthetic socket–skin interface, it is essential to collect thermal data from the residual limb, as this information serves as an indicator of adverse effects such as irritation, postural problems, and more serious health damage. These data are obtained non-invasively through the execution of a thermal imaging (TI) procedure. However, the precision and repeatability of a TI procedure rely significantly on its execution technique. This work presents the design and implementation of a mechatronic device that automates a thermal imaging technique, with application to lower-limb prosthetics evaluation. The proposed system improves data acquisition consistency by reducing execution time and minimizing human error, thereby enhancing the reproducibility and reliability of thermal measurements. The introduced device, the Thermal Imaging Booth, offers an automated solution for TI standardization in clinical and research settings. By minimizing inconsistencies, this system improves the diagnostic potential of thermography, facilitating its adoption in biomedical applications.
</description>
<pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164691</guid>
<dc:date>2025-12-18T00:00:00Z</dc:date>
</item>
<item>
<title>Peepco: Batch-Based Consistency Optimization</title>
<link>https://hdl.handle.net/1721.1/164690</link>
<description>Peepco: Batch-Based Consistency Optimization
Kuraj, Ivan; Feser, John; Polikarpova, Nadia; Solar-Lezama, Armando
We present batch-based consistency, a new approach for consistency optimization that allows programmers to specialize consistency with application-level integrity properties. We implement the approach with a two-step process: we statically infer optimal consistency requirements for executions of bounded sets of operations, and then, use the inferred requirements to parameterize a new distributed protocol to relax operation reordering at run time when it is safe to do so. Our approach supports standard notions of consistency. We implement batch-based consistency in Peepco, demonstrate its expressiveness for partial data replication, and examine Peepco’s run-time performance impact in different settings.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164690</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Finch: Sparse and Structured Tensor Programming with Control Flow</title>
<link>https://hdl.handle.net/1721.1/164689</link>
<description>Finch: Sparse and Structured Tensor Programming with Control Flow
Ahrens, Willow; Collin, Teodoro; Patel, Radha; Deeds, Kyle; Hong, Changwan; Amarasinghe, Saman
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these, and almost all prominent systems, can only handle dense rectilinear integer grids. Real-world tensors often contain underlying structure, such as sparsity, runs of repeated values, or symmetry. Support for structured data is fragmented and incomplete. Existing frameworks limit the tensor structures and program control flow they support in order to simplify the problem.&#13;
&#13;
In this work, we propose a new programming language, Finch, which supports both flexible control flow and diverse data structures. Finch facilitates a programming model which resolves the challenges of computing over structured tensors by combining control flow and data structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data so that performance engineers can focus on experimenting with many algorithms. Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length-encoding, symmetry, triangles, padding, or blocks. Finch reliably utilizes the key properties of structure, such as structural zeros, repeated values, or clustered non-zeros. We show that this leads to dramatic speedups in operations such as SpMV and SpGEMM, image processing, and graph analytics.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164689</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Smooth, Integrated Proofs of Cryptographic Constant Time for Nondeterministic Programs and Compilers</title>
<link>https://hdl.handle.net/1721.1/164688</link>
<description>Smooth, Integrated Proofs of Cryptographic Constant Time for Nondeterministic Programs and Compilers
Conoly, Owen; Erbsen, Andres; Chlipala, Adam
Formal verification of software and compilers has been used to rule out large classes of security-critical issues, but risk of unintentional information leakage has received much less consideration. It is a key requirement for formal specifications to leave some details of a system's behavior unspecified so that future implementation changes can be accommodated, and yet it is nonetheless expected that these choices would not be made based on confidential information the system handles. This paper formalizes that notion using omnisemantics and plain single-copy assertions, giving for the first time a specification of what it means for a nondeterministic program to be constant-time or more generally to avoid leaking (a part of) its inputs. We use this theory to prove data-leak-free execution of core cryptographic routines compiled from Bedrock2 C to RISC-V machine code, showing that the smooth specification and proof experience omnisemantics provides for nondeterminism extends to constant-time properties in the same setting. We also study variants of the key program-compiler contract, highlighting pitfalls of tempting simplifications and subtle consequences of how inputs to nondeterministic choices are constrained. Our results are backed by modular program-logic and compiler-correctness theorems, and they integrate into a neat end-to-end theorem in the Coq proof assistant.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164688</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences</title>
<link>https://hdl.handle.net/1721.1/164687</link>
<description>NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences
Baradari, Dünya; Kosmyna, Nataliya; Petrov, Oscar; Kaplun, Rebecah; Maes, Pattie
Generative AI is reshaping education by enabling personalized, on-demand learning experiences. However, current AI systems lack awareness of the learner’s cognitive state, limiting their adaptability. In parallel, electroencephalography (EEG)-based neuroadaptive systems have shown promise in enhancing engagement through real-time physiological feedback. This paper introduces NeuroChat, a neuroadaptive AI tutor that integrates real-time EEG-based engagement tracking with a large language model to adapt its conversational responses. By continuously monitoring learners’ cognitive engagement, NeuroChat dynamically adjusts content complexity, tone, and response style in a closed-loop interaction. In a within-subjects study (n = 24), NeuroChat significantly increased both EEG-measured and self-reported engagement compared to a non-adaptive chatbot. However, no significant differences in short-term learning outcomes were observed. These findings demonstrate the feasibility of real-time brain–AI interaction for education and highlight opportunities for deeper personalization, longer-term adaptation, and richer learning assessment in future neuroadaptive systems.
CUI ’25, Waterloo, ON, Canada
</description>
<pubDate>Tue, 08 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164687</guid>
<dc:date>2025-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>From Synthetic to Human: The Gap Between AI-Predicted and Actual Pro-Environmental Behavior Change After Chatbot Persuasion</title>
<link>https://hdl.handle.net/1721.1/164686</link>
<description>From Synthetic to Human: The Gap Between AI-Predicted and Actual Pro-Environmental Behavior Change After Chatbot Persuasion
Doudkin, Alexander; Pataranutaporn, Pat; Maes, Pattie
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive. We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,600 participants: real humans (n=1,200), simulated humans based on actual participant data (n=1,200), and fully synthetic personas (n=1,200). All three participant groups faced either personalized chatbots, standard chatbots, or static statements, employing four persuasion strategies (moral foundations, future self-continuity, action orientation, or “freestyle” chosen by the LLM). Results reveal a “synthetic persuasion paradox”: synthetic and simulated participants significantly change their post-intervention PEB stance, while human attitudes barely shift. Simulated participants better approximate human behavior but still overestimate effects. This disconnect underscores LLMs’ potential for pre-evaluating PEB interventions but warns of their limits in predicting human responses. We call for refined synthetic modeling and sustained, extended human trials to align conversational AI’s promise with tangible sustainability outcomes.
CUI ’25, Waterloo, ON, Canada
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164686</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Approximation Schemes for Matching Queues</title>
<link>https://hdl.handle.net/1721.1/164685</link>
<description>Adaptive Approximation Schemes for Matching Queues
AmaniHamedani, Alireza; Aouad, Ali; Saberi, Amin
We study a continuous-time, infinite-horizon dynamic bipartite matching problem. Suppliers arrive according to a Poisson process; while waiting, they may abandon the queue at a uniform rate. Customers, on the other hand, must be matched upon arrival. The objective is to minimize the expected long-term average cost subject to a throughput constraint on the total match rate.&#13;
Previous literature on dynamic matching focuses on “static” policies, where the matching decisions do not depend explicitly on the state of the supplier queues, achieving constant-factor approximations. By contrast, we design “adaptive” policies, which leverage queue length information, and obtain near-optimal polynomial-time algorithms for several classes of instances.&#13;
First, we develop a bi-criteria fully polynomial-time approximation scheme for dynamic matching on networks with a constant number of queues, which computes a (1−ε)-approximation of the optimal policy in time polynomial in both the input size and 1/ε. A key new technique is a hybrid LP relaxation, which combines static and state-dependent LP approximations of the queue dynamics, after a decomposition of the network. Networks with a constant number of queues are motivated by deceased organ donation schemes, where the supply types can be divided according to blood and tissue types.&#13;
The above algorithm, combined with a careful cell decomposition, gives a polynomial-time approximation scheme for dynamic matching on Euclidean networks of fixed dimension. The Euclidean case is of interest in ride-hailing and spatial service platforms, where the goal is to fulfill as many trips as possible while minimizing driving distances.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164685</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Output-Sensitive Approximate Counting via a Measure-Bounded Hyperedge Oracle, or: How Asymmetry Helps Estimate k-Clique Counts Faster</title>
<link>https://hdl.handle.net/1721.1/164684</link>
<description>Output-Sensitive Approximate Counting via a Measure-Bounded Hyperedge Oracle, or: How Asymmetry Helps Estimate k-Clique Counts Faster
Censor-Hillel, Keren; Even, Tomer; Vassilevska Williams, Virginia
Dell, Lapinskas and Meeks [DLM SICOMP 2022] presented a general reduction from approximate counting to decision for a class of fine-grained problems that can be viewed as hyperedge counting or detection problems in an implicit hypergraph, thus obtaining tight equivalences between approximate counting and decision for many key problems such as k-clique, k-sum and more. Their result is a reduction from approximately counting the number of hyperedges in an implicit k-partite hypergraph to a polylogarithmic number of calls to a hyperedge oracle that returns whether a given subhypergraph contains an edge.&#13;
The main result of this paper is a generalization of the DLM result for output-sensitive approximate counting, where the running time of the desired counting algorithm is inversely proportional to the number of witnesses. Our theorem is a reduction from approximately counting the (unknown) number of hyperedges in an implicit k-partite hypergraph to a polylogarithmic number of calls to a hyperedge oracle called only on subhypergraphs with a small “measure”. If a subhypergraph has u_i nodes in the i-th node partition of the k-partite hypergraph, then its measure is ∏_i u_i.&#13;
Using the new general reduction and by efficiently implementing measure-bounded colorful independence oracles, we obtain new improved output-sensitive approximate counting algorithms for k-clique, k-dominating set and k-sum. In graphs with n^t k-cliques, for instance, our algorithm (1±ε)-approximates the k-clique count in time Õ_ε(n^{ω(k−t−1/3, k−t/3, k−t+2/3)} + n^2), where ω(a,b,c) is the exponent of n^a × n^b by n^b × n^c matrix multiplication. For large k and t&gt;2, this is a substantial improvement over prior work, even if ω=2.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164684</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Lightweight and Locality-Aware Composition of Black-Box Subroutines</title>
<link>https://hdl.handle.net/1721.1/164683</link>
<description>Lightweight and Locality-Aware Composition of Black-Box Subroutines
Bansal, Manya; Sharlet, Dillon; Ragan-Kelley, Jonathan; Amarasinghe, Saman
Subroutines are essential building blocks in software design: users encapsulate common functionality in libraries and write applications by composing calls to subroutines. Unfortunately, performance may be lost at subroutine boundaries due to reduced locality and increased memory consumption. Operator fusion helps recover performance lost at composition boundaries. Previous solutions fuse operators by manually rewriting code into monolithic fused subroutines, or by relying on heavyweight compilers to generate code that performs fusion. Both approaches require a semantic understanding of the entire computation, breaking the decoupling necessary for modularity and reusability of subroutines.&#13;
&#13;
In this work, we attempt to identify the minimal ingredients required to fuse computations, enabling composition of subroutines without sacrificing performance or modularity. We find that, unlike previous approaches that require a semantic understanding of the computation, most opportunities for fusion require understanding only data production and consumption patterns. Exploiting this insight, we add fusion on top of black-box subroutines by proposing a lightweight enrichment of subroutine declarations to expose data-dependence patterns. We implement our approach in a system called Fern, and demonstrate Fern's benefits by showing that it is competitive with state-of-the-art, high-performance libraries with manually fused operators, can fuse across library and domain boundaries for unforeseen workloads, and can deliver speedups of up to 5× over unfused code.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164683</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Prolonged photostability in hexagonal boron nitride quantum emitters</title>
<link>https://hdl.handle.net/1721.1/164682</link>
<description>Prolonged photostability in hexagonal boron nitride quantum emitters
Li, Sylvia Xin; Ichihara, Takeo; Park, Hyoju; He, Guangwei; Kozawa, Daichi; Wen, Yi; Koman, Volodymyr B; Zeng, Yuwen; Kuehne, Matthias; Yuan, Zhe; Faucher, Samuel; Warner, Jamie H; Strano, Michael S
Single-photon emitters are crucial building blocks for optical quantum technologies. Hexagonal boron nitride (hBN) is a promising two-dimensional material that hosts bright, room-temperature single-photon emitters. However, photoinstability is a persistent challenge preventing practical applications of these properties. Here, we reveal the ubiquitous photobleaching of hBN vacancy emitters. Independent of the source or the number of hBN layers, we find that the photobleaching of a common emission at 1.98 ± 0.05 eV can be described by two consistent time constants, namely a first bleaching lifetime of 5 to 10 s, and a second bleaching lifetime in the range of 150 to 220 s. Only the former is environmentally sensitive and can be significantly mitigated by shielding O₂, whereas the latter could be the result of carbon-assisted defect migration. Annular dark-field scanning transmission electron microscopy of photobleached hBN allows for visualizing vacancy defects and carbon substitution at single-atom resolution, supporting the migration mechanism along with X-ray photoelectron spectroscopy. Thermal annealing at 850 °C of liquid-exfoliated hBN eliminates both bleaching processes, leading to persistent photostability. These results represent a significant advance toward engineering hBN vacancy emitters with the photostability requisite for quantum applications.
</description>
<pubDate>Mon, 06 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164682</guid>
<dc:date>2023-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Discretized hexagonal boron nitride quantum emitters and their chemical interconversion</title>
<link>https://hdl.handle.net/1721.1/164681</link>
<description>Discretized hexagonal boron nitride quantum emitters and their chemical interconversion
Kozawa, Daichi; Li, Sylvia Xin; Ichihara, Takeo; Rajan, Ananth Govind; Gong, Xun; He, Guangwei; Koman, Volodymyr B; Zeng, Yuwen; Kuehne, Matthias; Silmore, Kevin S; Parviz, Dorsa; Liu, Pingwei; Liu, Albert Tianxiang; Faucher, Samuel; Yuan, Zhe; Warner, Jamie; Blankschtein, Daniel; Strano, Michael S
Quantum emitters in two-dimensional hexagonal boron nitride (hBN) are of significant interest because of their unique photophysical properties, such as single-photon emission at room temperature, and promising applications in quantum computing and communications. The photoemission from hBN defects covers a wide range of emission energies, but identifying and modulating the properties of specific emitters remain challenging due to uncontrolled formation of hBN defects. In this study, more than 2000 spectra are collected consisting of single, isolated zero-phonon lines (ZPLs) between 1.59 and 2.25 eV from diverse sample types. Most of the ZPLs are organized into seven discretized emission energies. All emitters exhibit a range of lifetimes from 1 to 6 ns, and phonon sidebands offset by the dominant lattice phonon in hBN near 1370 cm⁻¹. Two chemical processing schemes are developed based on water and boric acid etching that generate or preferentially interconvert specific emitters, respectively. The identification and chemical interconversion of these discretized emitters should significantly advance the understanding of solid-state chemistry and photophysics of hBN quantum emission.
</description>
<pubDate>Tue, 03 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164681</guid>
<dc:date>2023-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Rational Design and Efficacy of Glucose‐Responsive Insulin Therapeutics and Insulin Delivery Systems by Computation Using Connected Human and Rodent Models</title>
<link>https://hdl.handle.net/1721.1/164680</link>
<description>Rational Design and Efficacy of Glucose‐Responsive Insulin Therapeutics and Insulin Delivery Systems by Computation Using Connected Human and Rodent Models
Yang, Sungyun; Yang, Jing Fan; Gong, Xun; Weiss, Michael A; Strano, Michael S
Glucose‐responsive insulins (GRIs) use plasma glucose levels in a diabetic patient to activate a specifically designed insulin analogue to a more potent state in real time. Alternatively, some GRI concepts use glucose‐mediated release or injection of insulin into the bloodstream. GRIs hold promise to exhibit much improved pharmacological control of the plasma glucose concentration, particularly for the problem of therapeutically induced hypoglycemia. Several innovative GRI schemes have been introduced into the literature, but there remains a dearth of quantitative analysis to aid the development and optimization of these constructs into effective therapeutics. This work evaluates several proposed classes of GRIs using PAMERAH, a previously described pharmacokinetic model simulating the glucoregulatory systems of humans and rodents. GRI concepts are grouped into three mechanistic classes: 1) intrinsic GRIs, 2) glucose‐responsive particles, and 3) glucose‐responsive devices. Each class is analyzed for optimal designs that maintain glucose levels within the euglycemic range. These derived GRI parameter spaces are then compared between rodents and humans, revealing the differences in clinical translation success for each candidate. This work demonstrates a computational framework to evaluate the potential clinical translatability of existing glucose‐responsive systems, providing a useful approach for future GRI development.
</description>
<pubDate>Thu, 15 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164680</guid>
<dc:date>2023-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Wearable sensors for monitoring marine environments and their inhabitants</title>
<link>https://hdl.handle.net/1721.1/164679</link>
<description>Wearable sensors for monitoring marine environments and their inhabitants
Kaidarova, Altynay; Geraldi, Nathan R; Wilson, Rory P; Kosel, Jürgen; Meekan, Mark G; Eguíluz, Víctor M; Hussain, Muhammad Mustafa; Shamim, Atif; Liao, Hanguang; Srivastava, Mani; Saha, Swapnil Sayan; Strano, Michael S; Zhang, Xiangliang; Ooi, Boon S; Holton, Mark; Hopkins, Lloyd W; Jin, Xiaojia; Gong, Xun; Quintana, Flavio; Tovasarov, Adylkhan; Tasmagambetova, Assel; Duarte, Carlos M
Human societies depend on marine ecosystems, but their degradation continues. Toward mitigating this decline, new and more effective ways to precisely measure the status and condition of marine environments are needed alongside existing rebuilding strategies. Here, we provide an overview of how sensors and wearable technology developed for humans could be adapted to improve marine monitoring. We describe barriers that have slowed the transition of this technology from land to sea, provide an update on sensor developments advancing ocean observation, and advocate for more widespread use of wearables on marine organisms in the wild and in aquaculture. We propose that large-scale use of wearables could facilitate the concept of an ‘internet of marine life’ that might contribute to a more robust and effective observation system for the oceans and commercial aquaculture operations. These observations may aid in rationalizing strategies toward conservation and restoration of marine communities and habitats.
</description>
<pubDate>Mon, 26 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164679</guid>
<dc:date>2023-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Cost Deep Learning for Building Detection with Application to Informal Urban Planning</title>
<link>https://hdl.handle.net/1721.1/164676</link>
<description>Low-Cost Deep Learning for Building Detection with Application to Informal Urban Planning
González, Lucas; Toutouh, Jamal; Nesmachnow, Sergio
This article studies the application of deep neural networks for automatic building detection in aerial RGB images. Special focus is put on accuracy robustness in both well-structured and poorly planned urban scenarios, which pose significant challenges due to occlusions, irregular building layouts, and limited contextual cues. The applied methodology considers several CNNs using only RGB images as input, and both validation and transfer capabilities are studied. U-Net-based models achieve the highest single-model accuracy, with an Intersection over Union (IoU) of 0.9101. A soft-voting ensemble of the best U-Net models further increases performance, reaching a best ensemble IoU of 0.9665, improving over state-of-the-art building detection methods on standard benchmarks. The approach demonstrates strong generalization using only RGB imagery, supporting scalable, low-cost applications in urban planning and geospatial analysis.
</description>
<pubDate>Fri, 09 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164676</guid>
<dc:date>2026-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link>https://hdl.handle.net/1721.1/164637</link>
<description>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Daskalakis, Constantinos; Farina, Gabriele; Fishelson, Maxwell; Pipis, Charilaos; Schneider, Jon
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden and its generalization to games of non-polynomial type proposed by Farina and Pipis. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player.
Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Charilaos Pipis, and Jon Schneider. 2025. Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 542–553.
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164637</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>When Connectivity Is Hard, Random Walks Are Easy with Non-determinism</title>
<link>https://hdl.handle.net/1721.1/164636</link>
<description>When Connectivity Is Hard, Random Walks Are Easy with Non-determinism
Doron, Dean; Pyne, Edward; Tell, Roei; Williams, R. Ryan
Two fundamental problems on directed graphs are to decide s-t connectivity, and to estimate the behavior of random walks. Currently, there is no known algorithm for s-t connectivity running in polynomial time and n^(o(1)) space, and no known algorithm for estimating the n-step random walk matrix running in non-deterministic logspace.
We show that for every directed graph, at least one of these problems is solvable in time and space that significantly improve on the respective state-of-the-art. In particular, there is a pair of algorithms A1 and A2 such that for every graph G, either:
A1(G) outputs the transitive closure of G in polynomial time and polylogarithmic space, or A2(G) outputs an approximation of the n-step random walk matrix of G in non-deterministic logspace.
As one application, we show surprisingly tight win-win results for space-bounded complexity. For example, for certain parameter regimes, either Savitch’s theorem can be non-trivially sped up, or randomized space can be almost completely derandomized.
We also apply our techniques to significantly weaken the assumptions required to derandomize space-bounded computation, and to make non-deterministic space-bounded computation unambiguous. Specifically, we deduce such conclusions from lower bounds against uniform circuits of polynomial size, which is an exponential improvement on the required hardness in previous works (Doron–Pyne–Tell STOC 2024, Li–Pyne–Tell FOCS 2024). We further show similar results for minimal-memory derandomization (Doron–Tell CCC 2024).
To prove these results, we substantially improve the array of technical tools introduced in recent years for studying hardness-vs.-randomness for bounded-space computation. In particular, we develop derandomized distinguish-to-predict transformations for new types of distinguishers (corresponding to compositions of PRGs with weak distinguishers), we construct a derandomized logspace reconstruction procedure for the Shaltiel–Umans generator (JACM 2005) that can compress hard truth-tables to polylogarithmic size, and we design a version of the Chen–Tell generator (FOCS 2021) that is particularly suitable for the space-bounded setting.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164636</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>QMA vs QCMA and Pseudorandomness</title>
<link>https://hdl.handle.net/1721.1/164635</link>
<description>QMA vs QCMA and Pseudorandomness
Liu, Jiahui; Mutreja, Saachi; Yuen, Henry
We study a longstanding question of Aaronson and Kuperberg on whether there exists a classical oracle separating QMA from QCMA. Settling this question in either direction would yield insight into the power of quantum proofs over classical proofs. We show that such an oracle exists if a certain quantum pseudorandomness conjecture holds. Roughly speaking, the conjecture posits that quantum algorithms cannot, by making few queries, distinguish between the uniform distribution over permutations versus permutations drawn from so-called “dense” distributions.
Our result can be viewed as establishing a “win-win” scenario: either there is a classical oracle separation of QMA from QCMA, or there is quantum advantage in distinguishing pseudorandom distributions on permutations.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164635</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Semantics of Integrating and Differentiating Singularities</title>
<link>https://hdl.handle.net/1721.1/164634</link>
<description>Semantics of Integrating and Differentiating Singularities
Michel, Jesse; Lee, Wonyeol; Yang, Hongseok
A singular function is a partial function such that at one or more points, the left and/or right limit diverges (e.g., the function 1/x). Since programming languages typically support division, programs may denote singular functions. Although on its own, a singularity may be considered a bug, introducing a division-by-zero error, singular integrals—a version of the integral that is well-defined when the integrand is a singular function and the domain of integration contains a singularity—arise in science and engineering, including in physics, aerodynamics, mechanical engineering, and computer graphics.
In this paper, we present the first semantics of a programming language for singular integration. Our differentiable programming language, SingularFlow, supports the evaluation and differentiation of singular integrals. We formally define the denotational semantics of SingularFlow, deriving all the necessary mathematical machinery so that this work is rigorous and self-contained. We then define an operational semantics for SingularFlow that estimates integrals and their derivatives using Monte Carlo samples, and show that the operational semantics is a well-behaved estimator for the denotational semantics.
We implement SingularFlow in JAX and evaluate the implementation on a suite of benchmarks that perform the finite Hilbert transform, an integral transform related to the Fourier transform, which arises in domains such as physics and electrical engineering. We then use SingularFlow to approximate the solutions to four singular integral equations—equations where the unknown function is in the integrand of a singular integral—arising in aerodynamics and mechanical engineering.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164634</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>SoS Certificates for Sparse Singular Values and Their Applications: Robust Statistics, Subspace Distortion, and More</title>
<link>https://hdl.handle.net/1721.1/164633</link>
<description>SoS Certificates for Sparse Singular Values and Their Applications: Robust Statistics, Subspace Distortion, and More
Diakonikolas, Ilias; Hopkins, Samuel B.; Pensia, Ankit; Tiegel, Stefan
We study sparse singular value certificates for random rectangular matrices. If M is a d × n matrix with independent Gaussian entries, we give a new family of polynomial-time algorithms which can certify upper bounds on the maximum of ||M u||, where u is a unit vector with at most η n nonzero entries for a given η ∈ (0,1). This basic algorithmic primitive lies at the heart of a wide range of problems across algorithmic statistics and theoretical computer science, including robust mean and covariance estimation, certification of distortion of random subspaces of ℝ^n, certification of the 2→p norm of a random matrix, and sparse principal component analysis.
Our algorithms certify a bound which is asymptotically smaller than the naive one, given by the maximum singular value of M, for nearly the widest-possible range of n, d, and η. Efficiently certifying such a bound for a range of n, d and η which is larger by any polynomial factor than what is achieved by our algorithm would violate lower bounds in the statistical query and low-degree polynomials models. Our certification algorithm makes essential use of the Sum-of-Squares hierarchy. To prove the correctness of our algorithm, we develop a new combinatorial connection between the graph matrix approach to analyze random matrices with dependent entries, and the Efron-Stein decomposition of functions of independent random variables.
As applications of our certification algorithm, we obtain new efficient algorithms for a wide range of well-studied algorithmic tasks. In algorithmic robust statistics, we obtain new algorithms for robust mean and covariance estimation with tradeoffs between breakdown point and sample complexity, which are nearly matched by statistical query and low-degree polynomial lower bounds (that we establish). We also obtain new polynomial-time guarantees for certification of ℓ1/ℓ2 distortion of random subspaces of ℝ^n (also with nearly matching lower bounds), sparse principal component analysis, and certification of the 2→p norm of a random matrix.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164633</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Recovery, Hypothesis Testing, and Mutual Information in Stochastic Block Models and Planted Factor Graphs</title>
<link>https://hdl.handle.net/1721.1/164632</link>
<description>Weak Recovery, Hypothesis Testing, and Mutual Information in Stochastic Block Models and Planted Factor Graphs
Mossel, Elchanan; Sly, Allan; Sohn, Youngtak
The stochastic block model is a canonical model of communities in random graphs. It was introduced in the social sciences and statistics as a model of communities, and in theoretical computer science as an average case model for graph partitioning problems under the name of the “planted partition model.” Given a sparse stochastic block model, the two standard inference tasks are: (i) Weak recovery: can we estimate the communities with non-trivial overlap with the true communities? (ii) Detection/Hypothesis testing: can we distinguish if the sample was drawn from the block model or from a random graph with no community structure with probability tending to 1 as the graph size tends to infinity? In this work, we show that for sparse stochastic block models, the two inference tasks are equivalent except at a critical point. That is, weak recovery is information theoretically possible if and only if detection is possible. We thus find a strong connection between these two notions of inference for the model. We further prove that when detection is impossible, an explicit hypothesis test based on low-degree polynomials in the adjacency matrix of the observed graph achieves the optimal statistical power. This low-degree test is efficient as opposed to the likelihood ratio test, which is not known to be efficient. Moreover, we prove that the asymptotic mutual information between the observed network and the community structure exhibits a phase transition at the weak recovery threshold. Our results are proven in much broader settings including the hypergraph stochastic block models and general planted factor graphs. In these settings, we prove that the impossibility of weak recovery implies contiguity and provide a condition that guarantees the equivalence of weak recovery and detection.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164632</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations</title>
<link>https://hdl.handle.net/1721.1/164631</link>
<description>Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations
Bangachev, Kiril; Bresler, Guy; Tiegel, Stefan; Vaikuntanathan, Vinod
We present a polynomial-time reduction from solving noisy linear equations over ℤ/qℤ in dimension Θ(k log n/poly(log k, log q, log log n)) with a uniformly random coefficient matrix to noisy linear equations over ℤ/qℤ in dimension n where each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings:
• Assuming the ℓ-dimensional (dense) learning with errors (LWE) problem over a polynomial-size field takes time 2^Ω(ℓ), k-sparse LWE in dimension n takes time n^Ω(k/(log k · (log k + log log n))).
• Assuming the ℓ-dimensional (dense) learning parity with noise (LPN) problem over ℤ/2ℤ takes time 2^Ω(ℓ/log ℓ), k-sparse LPN in dimension n takes time n^Ω(k/(log k · (log k + log log n)^2)).
These running time lower bounds are nearly tight, as both sparse problems can be solved in time n^O(k) given sufficiently many samples.
Our reduction allows us to derive several consequences in cryptography and the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order-k rank-2^(k−1) noisy tensor completion in (ℝ^n)^⊗k takes time n^Ω(k/(log k · (log k + log log n))), assuming the exponential hardness of standard worst-case lattice problems.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164631</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Matching via In-n-Out Local Computation Algorithms</title>
<link>https://hdl.handle.net/1721.1/164630</link>
<description>Stochastic Matching via In-n-Out Local Computation Algorithms
Azarmehr, Amir; Behnezhad, Soheil; Ghafari, Alma; Rubinfeld, Ronitt
Consider the following stochastic matching problem. We are given a known graph G=(V, E). An unknown subgraph Gp = (V, Ep) is realized where Ep includes every edge of E independently with some probability p ∈ (0, 1]. The goal is to query a sparse subgraph H of G, such that the realized edges in H include an approximate maximum matching of Gp.
This problem has been studied extensively over the last decade due to its applications in kidney exchange, online dating, and online labor markets. For any fixed є &gt; 0, [BDH STOC’20] showed that any graph G has a subgraph H with Õ(1/p) = (1/p)·polylog(1/p) maximum degree, achieving a (1−є)-approximation. A major open question is the best approximation achievable with O(1/p)-degree subgraphs. A long line of work has progressively improved the approximation in the O(1/p)-degree regime from .5 [BDH+ EC’15] to .501 [AKL EC’17], .656 [BHFR SODA’19], .666 [AB SOSA’19], .731 [BBD SODA’22] (bipartite graphs), and most recently to .68 [DS ’24].
In this work, we show that an O(1/p)-degree subgraph can obtain a (1−є)-approximation for any desirably small fixed є &gt; 0, achieving the best of both worlds.
Beyond its quantitative improvement, a key conceptual contribution of our work is to connect local computation algorithms (LCAs) to the stochastic matching problem for the first time.
While prior work on LCAs mainly focuses on their out-queries (the number of vertices probed to produce the output of a given vertex), our analysis also bounds the in-queries (the number of vertices that probe a given vertex). We prove that the outputs of LCAs with bounded in- and out-queries (in-n-out LCAs for short) have limited correlation, a property that our analysis crucially relies on and might find applications beyond stochastic matching.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164630</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Colloidal State Machines as Smart Tracers for Chemical Reactor Analysis</title>
<link>https://hdl.handle.net/1721.1/164629</link>
<description>Colloidal State Machines as Smart Tracers for Chemical Reactor Analysis
Zhang, Ge; Yang, Jing Fan; Yang, Sungyun; Brooks, Allan M; Koman, Volodymyr B; Gong, Xun; Strano, Michael S
A widely utilized tool in reactor analysis is passive tracers that report the residence time distribution, allowing estimation of the conversion and other properties of the system. Recently, advances in microrobotics have introduced powered and functional entities with sizes comparable to some traditional tracers. This has motivated the concept of Smart Tracers that could record the local chemical concentrations, temperature, or other conditions as they progress through reactors. Herein, the design constraints and advantages of Smart Tracers are analyzed by simulating their operation in a laminar flow reactor model conducting chemical reactions of various orders. It is noted that far fewer particles are necessary to completely map even the most complex concentration gradients compared with their conventional counterparts. Design criteria explored herein include sampling frequency, memory storage capacity, and ensemble number necessary to achieve the required accuracy to inform a reactor model. Cases of severe particle diffusion and sensor noise appear to bound the functional upper limit of such probes and require consideration in future designs. The results of the study provide a starting framework for applying the new technology of microrobotics to the broad and impactful set of problems classified as chemical reactor analysis.
</description>
<pubDate>Thu, 29 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164629</guid>
<dc:date>2023-06-29T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigation of ventilation air methane (VAM) using novel methanotrophic coating materials: a technical analysis</title>
<link>https://hdl.handle.net/1721.1/164628</link>
<description>Mitigation of ventilation air methane (VAM) using novel methanotrophic coating materials: a technical analysis
Lundberg, Daniel James; Kim, Jimin; Parviz, Dorsa; Strano, Michael S
Ventilation air methane (VAM) is a potent greenhouse gas source originating from geological wells, current and extinct mineshafts and other terrestrial conduits venting methane to the atmosphere, contributing to global methane emissions and disproportionate warming potential. Herein, we introduce the concept of the &lt;jats:italic&gt;methanotrophic material&lt;/jats:italic&gt; as an engineering solution. Such materials should be capable of converting methane at ambient temperatures and pressures to a binder product, capturing and permanently sequestering the methane while simultaneously restricting its further emission. While such materials are currently under research development, this goal is supported and facilitated by the mathematical framework, introduced and used herein, to evaluate the ability to convert methane, using currently published activity data. We include a case study of the conversion of a characteristic stream of VAM (0.6% methane in air, 1.7 × 10&lt;jats:sup&gt;8&lt;/jats:sup&gt; l hr&lt;jats:sup&gt;−1&lt;/jats:sup&gt; equivalent to 100 000 standard cubic feet per minute). We show that when appropriately designed, such systems require a surface coverage of less than 1000 m of mine tunnel length (equivalent to 20 000 m&lt;jats:sup&gt;2&lt;/jats:sup&gt; areal coverage) in order to reduce the methane emission from this stream by over 99%. Finally, we highlight formaldehyde as a reactive intermediate of methane oxidation which may itself be incorporated into these coating materials. As a component of binders and polymers already used ubiquitously in commercial products, this intermediate ultimately allows these systems to sequester the carbon from methane in a stable and solid form. The results presented here are easily extended to the treatment of other methane streams—either more concentrated or dilute—and the results herein will guide the design and development of a new class of carbon-negative materials.
</description>
<pubDate>Fri, 20 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164628</guid>
<dc:date>2023-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Synergistic multi-source ambient RF and thermal energy harvester for green IoT applications</title>
<link>https://hdl.handle.net/1721.1/164627</link>
<description>Synergistic multi-source ambient RF and thermal energy harvester for green IoT applications
Bakytbekov, Azamat; Nguyen, Thang Q; Zhang, Ge; Strano, Michael S; Salama, Khaled N; Shamim, Atif
In a future green Internet of Things (IoT) reality, billions of devices of the IoT infrastructure should be self-powered. Harvesting ambient energy to power IoT devices is an attractive solution that can extend battery life or can completely replace batteries. Considering the global applications of IoT, ubiquitous and continuous availability is an important requirement for ambient energy sources. Radio frequency (RF) energy from mobile phone towers and thermal energy from diurnal cycle temperature fluctuations are good candidates. In this study, we present a synergistic multi-source energy harvester (MSEH) comprising an RF energy harvester (RFEH) and a thermal energy harvester (TEH) integrated through a dual-function component, the heatsink antenna. Both harvesters collect ambient energy 24 h a day and are not location specific. The TEH, which is in the shape of a box, collects energy using heatsinks on its sidewalls. The same heatsinks are optimized to also serve as receiving antennas of the RFEH, which collects energy from the GSM900, GSM1800, and 3G bands. Due to the synergistic integration, the radiation efficiency of the antenna doubled from 40% to 80%, which resulted in a ∼10% increase in the power conversion efficiency of the RFEH. Similarly, the average power of the TEH is doubled from 120 μW without heatsinks to 240 μW with heatsinks. Field tests have shown that the outputs of the TEH and RFEH have increased 4 and 3 times compared to the independent TEH and RFEH, respectively. A temperature and humidity sensor based IoT node has been successfully powered through this energy harvesting system. Overall, the MSEH can collect 3680 μW h of energy per day, which is sufficient to obtain the sensor data with a time interval of 3.5 s.
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164627</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Chromatic covalent organic frameworks enabling in-vivo chemical tomography</title>
<link>https://hdl.handle.net/1721.1/164626</link>
<description>Chromatic covalent organic frameworks enabling in-vivo chemical tomography
Wang, Song; Han, Yangyang; Reddy, Vaishnavi Amarr; Ang, Mervin Chun-Yi; Sánchez-Velázquez, Gabriel; Saju, Jolly Madathiparambil; Cao, Yunteng; Khong, Duc Thinh; Jayapal, Praveen Kumar; Cheerlavancha, Raju; Loh, Suh In; Singh, Gajendra Pratap; Urano, Daisuke; Rajani, Sarojam; Marelli, Benedetto; Strano, Michael S
Covalent organic frameworks designed as chromatic sensors offer opportunities to probe biological interfaces, particularly when combined with biocompatible matrices. Particularly compelling is the prospect of chemical tomography – or the 3D spatial mapping of chemical detail within the complex environment of living systems. Herein, we demonstrate a chromic Covalent Organic Framework (COF) integrated within silk fibroin (SF) microneedles that probe plant vasculature, sense the alkalization of vascular fluid as a biomarker for drought stress, and provide a 3D in-vivo mapping of chemical gradients using smartphone technology. A series of Schiff base COFs with tunable pKa ranging from 5.6 to 7.6 enable conical, optically transparent SF microneedles with COF coatings of 120 to 950 nm to probe vascular fluid and the surrounding tissues of tobacco and tomato plants. The conical design allows for 3D mapping of the chemical environment (such as pH) at standoff distances from the plant, enabling in-vivo chemical tomography. Chromatic COF sensors of this type will enable multidimensional chemical mapping of previously inaccessible and complex environments.
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164626</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding early stress signaling waves in living plants using nanosensor multiplexing</title>
<link>https://hdl.handle.net/1721.1/164625</link>
<description>Decoding early stress signaling waves in living plants using nanosensor multiplexing
Ang, Mervin Chun-Yi; Saju, Jolly Madathiparambil; Porter, Thomas K; Mohaideen, Sayyid; Sarangapani, Sreelatha; Khong, Duc Thinh; Wang, Song; Cui, Jianqiao; Loh, Suh In; Singh, Gajendra Pratap; Chua, Nam-Hai; Strano, Michael S; Sarojam, Rajani
Increased exposure to environmental stresses due to climate change has adversely affected plant growth and productivity. Upon stress, plants activate a signaling cascade, involving multiple molecules like H2O2 and plant hormones such as salicylic acid (SA), leading to resistance or stress adaptation. However, the temporal ordering and composition of the resulting cascade remain largely unknown. In this study we developed a nanosensor for SA and multiplexed it with an H2O2 nanosensor for simultaneous monitoring of stress-induced H2O2 and SA signals when Brassica rapa subsp. chinensis (Pak choi) plants were subjected to distinct stress treatments, namely light, heat, pathogen stress and mechanical wounding. The nanosensors reported distinct dynamics and temporal wave characteristics of H2O2 and SA generation for each stress. Based on these temporal insights, we formulated a biochemical kinetic model that suggests the early H2O2 waveform encodes information specific to each stress type. These results demonstrate that sensor multiplexing can reveal stress signaling mechanisms in plants, aiding in the development of climate-resilient crops and pre-symptomatic stress diagnoses.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164625</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymeric Nanocarriers Autonomously Cross the Plant Cell Wall and Enable Protein Delivery for Stress Sensing</title>
<link>https://hdl.handle.net/1721.1/164624</link>
<description>Polymeric Nanocarriers Autonomously Cross the Plant Cell Wall and Enable Protein Delivery for Stress Sensing
Zhang, Yilin; Cao, Yunteng; Jiang, Wenzhi; Ma, Qingquan; Shin, Jinwoo; Sun, Hui; Cui, Jianqiao; Chen, Yongsheng; Giraldo, Juan Pablo; Strano, Michael S; Lowry, Gregory V; Sheen, Jen; Marelli, Benedetto
Delivery of proteins in plant cells can facilitate the design of desired functions by modulation of biological processes and plant traits but is currently limited by narrow host range, tissue damage, and poor scalability. Physical barriers in plants, including cell walls and membranes, limit protein delivery to desired plant tissues. Herein, a cationic, high-aspect-ratio polymeric nanocarrier (PNC) platform is developed to enable efficient protein delivery to plants. The cationic nature of PNCs binds proteins through electrostatic interactions. The ability to precisely design PNCs’ size and aspect ratio allowed us to find a cutoff of ≈14 nm in the cell wall, below which cationic PNCs can autonomously overcome the barrier and carry their cargo into plant cells. To exploit these findings, a reduction‐oxidation sensitive green fluorescent protein (roGFP) is deployed as a stress sensor protein cargo in a model plant &lt;jats:italic&gt;Nicotiana benthamiana&lt;/jats:italic&gt; and common crop plants, including tomato and maize. In vivo imaging of PNC‐roGFP enabled optical monitoring of plant response to wounding, biotic, and heat stressors. These results show that PNCs can be precisely designed below the size exclusion limit of cell walls to overcome current limitations in protein delivery to plants and facilitate species‐independent plant engineering.
</description>
<pubDate>Fri, 16 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164624</guid>
<dc:date>2024-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Glucose Responsive Glucagon Therapeutics using Computational Models of the Glucoregulatory System</title>
<link>https://hdl.handle.net/1721.1/164623</link>
<description>Analysis of Glucose Responsive Glucagon Therapeutics using Computational Models of the Glucoregulatory System
Alizadehmojarad, Ali A; Yang, Sungyun; Gong, Xun; Strano, Michael S
Glucose‐responsive glucagon (GRG) therapeutics are a promising technology for reducing the risk of severe hypoglycemia as a complication of diabetes mellitus. Herein, the performance of candidate GRGs in the literature is evaluated and projected by modeling the kinetics of activation and connecting them as inputs into physiological glucoregulatory models. Two distinct GRG designs are considered, based on experimental results reported in Wu et al. (GRG‐I) and Webber et al. (GRG‐II). Both are evaluated using a multi‐compartmental glucoregulatory model (IMPACT) and compared against in‐vivo experimental data of therapeutic performance in rats and mice. For GRG‐I and GRG‐II, the total integrated glucose material balances are overestimated by 41.5% ± 14% and underestimated by 24.8% ± 16% compared to in‐vivo time‐course data, respectively. These large differences are attributed to the relatively simple computational descriptions of glucagon dynamics in the model, which underscores the urgent need for improved glucagon models. Additionally, therapeutic insulin and glucagon infusion pumps are modeled for type 1 diabetes mellitus (T1DM) human subjects to extend the results to additional datasets. These observations suggest that both the representative physiological and non‐physiological models considered in this work require additional refinement to successfully describe clinical data that involve simultaneous, coupled insulin, glucose, and glucagon dynamics.
</description>
<pubDate>Thu, 29 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164623</guid>
<dc:date>2024-08-29T00:00:00Z</dc:date>
</item>
<item>
<title>A Microrobotic Design for the Spontaneous Tracing of Isochemical Contours in the Environment</title>
<link>https://hdl.handle.net/1721.1/164622</link>
<description>A Microrobotic Design for the Spontaneous Tracing of Isochemical Contours in the Environment
Brooks, A Merritt; Yang, Sungyun; Kang, Byung Ha; Strano, Michael S
Microrobotic platforms hold significant potential to advance a variety of fields, from medicine to environmental sensing. Herein, minimally functional robotic entities modeled on readily achievable state-of-the-art features in a modern lab or cleanroom are computationally simulated. Inspired by Dou and Bishop (Phys Rev Res. 2019;1(3):1–5), it is shown that the simple combination of unidirectional steering connected to a single environmental (chemical) sensor along with constant propulsion gives rise to highly complex functions of significant utility. Such systems can trace the contours orthogonal to arbitrary chemical gradients in the environment. Also, pairs of such robots that are additionally capable of emitting the same chemical signal are shown to exhibit coupled relative motion. When the pair has unidirectional steering in opposite directions within the 2D plane (i.e., counter-rotating), they move in parallel trajectories to each other. Alternatively, when steering is in the same direction (corotation), the two move in the same epicyclical trajectory. In this way, the chirality of the unidirectional steering produces two distinct emergent phenomena. The behavior is understood as a ratchet mechanism that exploits the differential in the radii of curvature corresponding to different spatial locations. Applications to environmental detection, remediation, and monitoring are discussed.
</description>
<pubDate>Mon, 09 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164622</guid>
<dc:date>2024-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>Electrokinetic Motion of Neurotransmitter Ions through a 1.01 nm Diameter Single-Walled Carbon Nanotube</title>
<link>https://hdl.handle.net/1721.1/164621</link>
<description>Electrokinetic Motion of Neurotransmitter Ions through a 1.01 nm Diameter Single-Walled Carbon Nanotube
Ellison, Mark D; Allen, Jacqueline; Bonfiglio, Michael; Seeburger, Matthew; Setenet, Jean; DiGinto, Biagio; Bonanny, Harrison; Russell, Aaliyah; Baird, David; Davis, Liana; McCarthy, Ella; Manley, Alyson; Blatt, Sarah; Lippe, David; Ragone, Daniel; Dyer, Brock; Osgood, Jillian; Strano, Michael S
The transport of cations of the neurotransmitters acetylcholine, choline, and dopamine through a 1.01 nm-diameter, 1.1 mm-long single-walled carbon nanotube (SWNT) has been studied for the first time. As a comparison, sodium and aniline ion transport was also investigated. All of these ions exhibited significantly enhanced electrophoretic mobilities over bulk transport. The electrophoretic mobilities of acetylcholine, choline, and sodium were found to depend on pH, specifically increasing as pH decreases. This result is explained by hydrogen ions saturating the surface charges of the SWNT. Conversely, dopamine and aniline have mobilities that do not depend on pH. This difference is attributed to the benzene ring and the size of these ions. An analysis of the time required for an ion to traverse the nanotube shows that the ions adsorb to and desorb from the walls as they pass through the tube. Acetylcholine, choline, and sodium show desorption rate constants that decrease with increasing pH, whereas dopamine and aniline have rate constants that remain constant over different pH values. This is consistent with the relationship between adsorption and desorption rate constants and mobility from an adsorption/desorption kinetic model.
</description>
<pubDate>Tue, 11 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164621</guid>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>Advancements in Plant Diagnostic and Sensing Technologies</title>
<link>https://hdl.handle.net/1721.1/164620</link>
<description>Advancements in Plant Diagnostic and Sensing Technologies
Krishnamoorthi, Shalini; Koh, Sally Shuxian; Ang, Mervin Chun‐Yi; Teo, Mark Ju Teng; Jie, Randall Ang; Dinish, US; Strano, Michael S; Urano, Daisuke
Recent advancements in plant sensing technologies have significantly improved agricultural productivity while reducing resource inputs, resulting in higher yields by enabling early disease detection, precise diagnostics, and optimized fertilizer and pesticide applications. Each adopted technology offers unique advantages suitable for various farm operations, breeding programs, and laboratory research. This review article first summarizes key target traits, endogenous structures, and metabolites that serve as focal points for plant diagnostic and sensing technologies. Next, conventional plant sensing technologies based on light reflectance and fluorescence, which rely on foliar phytopigments and fluorophores such as chlorophylls, are discussed. These methods, along with advanced analytical strategies incorporating machine learning, enable accurate stress detection and classification beyond general assessments of plant health and stress status. Advanced optical techniques such as Fourier transform infrared spectroscopy (FT‐IR) and Raman spectroscopy, which allow specific measurements of various plant metabolites and structural components, are then highlighted. Furthermore, the design and applications of nanotechnology-based chemical sensors capable of highly sensitive and selective detection of specific phytochemicals, including phytohormones and signaling second messengers that regulate physiological and developmental processes at micro‐ to sub‐micromolar concentrations, are introduced. By selecting appropriate sensing methodologies, agricultural production and relevant research activities can be significantly improved.
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164620</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth</title>
<link>https://hdl.handle.net/1721.1/164619</link>
<description>The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
Kahanamoku-Meyer, Gregory D.; Ragavan, Seyoon; Vaikuntanathan, Vinod; Van Kirk, Katherine
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor n-bit integers of the form P^2 Q with log Q = Θ(n^α) for α ∈ (2/3, 1) in space and depth sublinear in n (specifically, Õ(log Q)) using Õ(n) quantum gates; for these integers, no known classical algorithms exploit the relatively small size of Q to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.&#13;
Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of A mod B, in the regime where B is classical and much larger than A. Our circuit for computing the Jacobi symbol generalizes to related problems such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
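For reference, the classical (non-quantum) algorithm for the Jacobi symbol that the circuit makes space-efficient can be sketched in a few lines; this is a minimal textbook implementation for illustration, not the paper's superposition circuit.

```python
def jacobi(a: int, b: int) -> int:
    """Jacobi symbol (a|b) for odd b > 0, via the classical binary
    algorithm: factor out twos and apply quadratic reciprocity."""
    assert b > 0 and b % 2 == 1
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:        # (2|b) = -1 iff b ≡ 3, 5 (mod 8)
            a //= 2
            if b % 8 in (3, 5):
                result = -result
        a, b = b, a              # quadratic reciprocity flip
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0   # gcd(a, b) > 1 gives symbol 0
```

In the paper's regime of interest, B is classical and much larger than A; the GCD-style recursion above is exactly the computation the construction must carry out within limited quantum space.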
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164619</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Classical Commitments to Quantum States</title>
<link>https://hdl.handle.net/1721.1/164618</link>
<description>Classical Commitments to Quantum States
Gunn, Sam; Tauman Kalai, Yael; Natarajan, Anand; Villányi, Ági
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work).&#13;
Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164618</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetric Perceptrons, Number Partitioning and Lattices</title>
<link>https://hdl.handle.net/1721.1/164617</link>
<description>Symmetric Perceptrons, Number Partitioning and Lattices
Vafa, Neekon; Vaikuntanathan, Vinod
The symmetric binary perceptron (SBP_κ) problem with parameter κ : ℝ_{≥1} → [0,1] is an average-case search problem defined as follows: given a random Gaussian matrix A ∼ N(0,1)^{n × m} as input where m ≥ n, output a vector x ∈ {−1,1}^m such that ||Ax||∞ ≤ κ(m/n) · √m.&#13;
The number partitioning problem (NPP_κ) corresponds to the special case of setting n = 1. There is considerable evidence that both problems exhibit large computational-statistical gaps.&#13;
In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices.&#13;
• For SBP_κ, statistically, solutions exist with κ(x) = 2^{−Θ(x)} (Aubin, Perkins and Zdeborová, Journal of Physics 2019). For large n, the best that efficient algorithms have been able to achieve is a far cry from the statistical bound, namely κ(x) = Θ(1/√x) (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kızıldağ, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, κ(x) = Θ(1/√x) is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices.&#13;
• For NPP_κ, statistically, solutions exist with κ(m) = Θ(2^{−m}) (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp’s classical differencing algorithm achieves κ(m) = 2^{−O(log^2 m)}. We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve κ(m) = 2^{−Ω(log^3 m)}, once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.&#13;
Our hardness results are versatile, and hold with respect to different distributions of the matrix A (e.g., i.i.d. uniform entries from [0,1]) and weaker requirements on the solution vector x.
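The Karmarkar-Karp differencing algorithm is simple enough to sketch; this is a minimal textbook version that returns only the achieved partition discrepancy (not the partition itself), included purely to make the algorithmic benchmark concrete.

```python
import heapq

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning:
    repeatedly replace the two largest numbers by their difference,
    implicitly committing them to opposite sides of the partition.
    The surviving value is the achieved discrepancy."""
    heap = [-x for x in nums]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0
```

For example, on [4, 5, 6, 7, 8] the heuristic reaches discrepancy 2 even though a perfect split (8+7 vs 6+5+4) exists, illustrating the gap between the statistical and algorithmic bounds discussed above.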
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164617</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>DNF Learning via Locally Mixing Random Walks</title>
<link>https://hdl.handle.net/1721.1/164616</link>
<description>DNF Learning via Locally Mixing Random Walks
Alman, Josh; Nadimpalli, Shivam; Patel, Shyamal; Servedio, Rocco A.
We give two results on PAC learning DNF formulas using membership queries in the challenging “distribution-free” learning framework, where learning algorithms must succeed for an arbitrary and unknown distribution over {0,1}^n.&#13;
(1) We first give a quasi-polynomial time “list-decoding” algorithm for learning a single term of an unknown DNF formula. More precisely, for any target s-term DNF formula f = T_1 ∨ ⋯ ∨ T_s over {0,1}^n and any unknown distribution D over {0,1}^n, our algorithm, which uses membership queries and random examples from D, runs in quasipoly(n,s) time and outputs a list L of candidate terms such that with high probability some term T_i of f belongs to L.&#13;
(2) We then use result (1) to give a quasipoly(n,s)-time algorithm, in the distribution-free PAC learning model with membership queries, for learning the class of size-s DNFs in which all terms have the same size. Our algorithm learns using a DNF hypothesis.&#13;
The key tool used to establish result (1) is a new result on “locally mixing random walks,” which, roughly speaking, shows that a random walk on a graph that is covered by a small number of expanders has a non-negligible probability of mixing quickly in a subset of these expanders.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164616</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity</title>
<link>https://hdl.handle.net/1721.1/164615</link>
<description>Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity
Bafna, Mitali; Karthik C. S.; Minzer, Dor
We prove that under the Exponential Time Hypothesis (ETH), for every ε &gt; 0, there exists a constant C &gt; 0 such that no algorithm running in time n^{k / log^C k} can determine whether a given 2-CSP instance with k variables, O(k) constraints, and alphabet size n, is perfectly satisfiable or if every assignment satisfies at most an ε fraction of the constraints.&#13;
By known reductions in the literature, the above result implies near-optimal conditional lower bounds for approximating a host of parameterized problems, such as the k-Clique problem, k-Max-Coverage problem, k-Unique Set Cover problem, k-Median and k-Means problems, parameterized variants of the Nearest Codeword problem, Minimum Distance of a Code problem, Closest Vector problem, and Shortest Vector problem.&#13;
We also establish a densification theorem for the parameterized 2-CSP problem, showing that the aforementioned conditional lower bound for sparse 2-CSPs also holds when the constraint graph is a complete graph. From this densification, we conclude that assuming ETH, there is no algorithm running in time n^{√k / log^C k} that approximates the k-Directed Steiner Network problem and the k-Strongly Connected Steiner Subgraph problem to within some constant factor.
Mitali Bafna, Karthik C. S., and Dor Minzer. 2025. Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 2118–2129.
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164615</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Oblivious Defense in ML Models: Backdoor Removal without Detection</title>
<link>https://hdl.handle.net/1721.1/164614</link>
<description>Oblivious Defense in ML Models: Backdoor Removal without Detection
Goldwasser, Shafi; Shafer, Jonathan; Vafa, Neekon; Vaikuntanathan, Vinod
As society grows more reliant on machine learning, ensuring the security of machine learning systems against sophisticated attacks becomes a pressing concern. A recent result of&#13;
Goldwasser, Kim, Vaikuntanathan, and Zamir (FOCS ’22) shows that an adversary can plant undetectable backdoors in machine learning models, allowing the adversary to covertly control the model’s behavior. Backdoors can be planted in such a way that the backdoored machine learning model is computationally indistinguishable from an honest model without backdoors.&#13;
In this paper, we present strategies for defending against backdoors in ML models, even if they are undetectable. The key observation is that it is sometimes possible to provably mitigate or even remove backdoors without needing to detect them, using techniques inspired by the notion of random self-reducibility. This depends on properties of the ground-truth labels (chosen by nature), and not of the proposed ML model (which may be chosen by an attacker).&#13;
We give formal definitions for secure backdoor mitigation, and proceed to show two types of results. First, we show a “global mitigation” technique, which removes all backdoors from a machine learning model under the assumption that the ground-truth labels are close to a Fourier-heavy function. Second, we consider distributions where the ground-truth labels are close to a linear or polynomial function in ℝ^n. Here, we show “local mitigation” techniques, which remove backdoors with high probability for every input of interest, and are computationally cheaper than global mitigation. All of our constructions are black-box, so our techniques work without needing access to the model’s representation (i.e., its code or parameters). Along the way we prove a simple result for robust mean estimation.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164614</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Faster Rates for No-Regret Learning in General Games via Cautious Optimism</title>
<link>https://hdl.handle.net/1721.1/164613</link>
<description>Faster Rates for No-Regret Learning in General Games via Cautious Optimism
Soleymani, Ashkan; Piliouras, Georgios; Farina, Gabriele
We establish the first uncoupled learning algorithm that attains O(n log^2 d · log T) per-player regret in multi-player general-sum games, where n is the number of players, d is the number of actions available to each player, and T is the number of repetitions of the game. Our results exponentially improve the dependence on d compared to the O(n d log T) regret attainable by Log-Regularized Lifted Optimistic FTRL introduced by Farina, Anagnostides, Luo, Lee, Kroer, and Sandholm [2022], and also reduce the dependence on the number of iterations T from log^4 T to log T compared to Optimistic Hedge, the previously well-studied algorithm with O(n log d · log^4 T) regret shown by Daskalakis, Fishelson, and Golowich [2021]. Our algorithm is obtained by combining the classic Optimistic Multiplicative Weights Update (OMWU) with an adaptive, non-monotonic learning rate that paces the learning process of the players, making them more cautious when their regret becomes too negative.
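For reference, the classic OMWU base dynamic (without the paper's cautious, adaptive learning rate) can be sketched for a two-player zero-sum game; the payoff matrix, step size, and horizon below are illustrative assumptions, not the paper's setting.

```python
import math

def omwu(A, eta=0.05, T=5000):
    """Optimistic Multiplicative Weights Update in a two-player
    zero-sum game: the row player minimizes x^T A y, the column
    player maximizes it. The optimistic step counts the most
    recent loss vector twice (2*g_t - g_{t-1})."""
    n, m = len(A), len(A[0])
    x, y = [1.0 / n] * n, [1.0 / m] * m
    gx_prev, gy_prev = [0.0] * n, [0.0] * m
    for _ in range(T):
        # Current loss vectors for each player.
        gx = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
        gy = [-sum(A[i][j] * x[i] for i in range(n)) for j in range(m)]
        x = [xi * math.exp(-eta * (2 * g - gp))
             for xi, g, gp in zip(x, gx, gx_prev)]
        y = [yi * math.exp(-eta * (2 * g - gp))
             for yi, g, gp in zip(y, gy, gy_prev)]
        sx, sy = sum(x), sum(y)
        x, y = [v / sx for v in x], [v / sy for v in y]
        gx_prev, gy_prev = gx, gy
    return x, y
```

For a payoff matrix such as [[1, 0], [0, 2]], whose unique mixed equilibrium is (2/3, 1/3) for both players, the iterates approach the equilibrium; unlike vanilla MWU, the optimistic correction prevents cycling.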
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164613</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Explicit Two-Sided Vertex Expanders beyond the Spectral Barrier</title>
<link>https://hdl.handle.net/1721.1/164612</link>
<description>Explicit Two-Sided Vertex Expanders beyond the Spectral Barrier
Hsieh, Jun-Ting; Lin, Ting-Chun; Mohanty, Sidhanth; O'Donnell, Ryan; Zhang, Rachel Yun
We construct the first explicit two-sided vertex expanders that bypass the spectral barrier.&#13;
Previously, the strongest known explicit vertex expanders were given by d-regular Ramanujan graphs, whose spectral properties imply that every small subset of vertices S has at least 0.5d|S| distinct neighbors. However, it is possible to construct Ramanujan graphs containing a small set S with no more than 0.5d|S| neighbors. In fact, no explicit construction was known to break the 0.5d barrier.&#13;
In this work, we give an explicit construction of an infinite family of d-regular graphs (for large enough d) where every small set expands by a factor of ≈ 0.6d.&#13;
More generally, for large enough d1,d2, we give an infinite family of (d1,d2)-biregular graphs where small sets on the left expand by a factor of ≈ 0.6d1, and small sets on the right expand by a factor of ≈ 0.6d2. In fact, our construction satisfies an even stronger property: small sets on the left and right have unique-neighbor expansion 0.6d1 and 0.6d2 respectively.&#13;
Our construction follows the tripartite line product framework of Hsieh et al., and instantiates it using the face-vertex incidence of the 4-dimensional Ramanujan clique complex as its base component. As a key part of our analysis, we derive new bounds on the triangle density of small sets in the Ramanujan clique complex.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164612</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>All-Pairs Shortest Paths with Few Weights per Node</title>
<link>https://hdl.handle.net/1721.1/164611</link>
<description>All-Pairs Shortest Paths with Few Weights per Node
Abboud, Amir; Fischer, Nick; Jin, Ce; Williams, Virginia Vassilevska; Xi, Zoe
We study the central All-Pairs Shortest Paths (APSP) problem under the restriction that there are at most d distinct weights on the outgoing edges from every node.&#13;
For d = n, this is the classical (unrestricted) APSP problem that is hypothesized to require cubic time n^{3−o(1)}, and at the other extreme, for d = 1, it is equivalent to the Node-Weighted APSP problem.&#13;
We present new algorithms that achieve the following results:&#13;
* Node-Weighted APSP can be solved in time Õ(n^{(3+ω)/2}) = Õ(n^{2.686}), improving on the 15-year-old subcubic bounds Õ(n^{(9+ω)/4}) = Õ(n^{2.843}) [Chan; STOC ’07] and Õ(n^{2.830}) [Yuster; SODA ’09]. This positively resolves the question of whether Node-Weighted APSP is an “intermediate” problem in the sense of having complexity n^{2.5+o(1)} if ω = 2, in which case it also matches an n^{2.5−o(1)} conditional lower bound.&#13;
* For up to d ≤ n^{3−ω−ε} distinct weights per node (where ε &gt; 0), the problem can be solved in subcubic time O(n^{3−f(ε)}) (where f(ε) &gt; 0). In particular, assuming that ω = 2, we can tolerate any sublinear number of distinct weights per node d ≤ n^{1−ε}, whereas previous work [Yuster; SODA ’09] could only handle d ≤ n^{1/2−ε} in subcubic time. This promotes our understanding of the APSP hypothesis, showing that the hardest instances must exhaust a linear number of weights per node. With the current bounds on ω, we achieve a subcubic algorithm for d ≤ n^{0.628}, whereas previously a subcubic running time could only be achieved for d ≤ n^{0.384}. Our result also applies to the All-Pairs Exact Triangle problem, thus generalizing a result of Chan and Lewenstein on “Clustered 3SUM” from arrays to matrices. Notably, our technique constitutes a rare application of additive combinatorics in graph algorithms.&#13;
We complement our algorithmic results with simple hardness reductions extending the n^{2.5−o(1)} conditional lower bound for Node-Weighted APSP to undirected graphs. Interestingly, under fine-grained assumptions, the complexity in the undirected case jumps from O(n^ω) for d = 1 to n^{2.5−o(1)} for d ≥ 2.
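For context, the unrestricted cubic baseline that these results improve upon is the classical Floyd-Warshall relaxation; a minimal sketch (not the paper's algorithm) follows.

```python
def apsp_floyd_warshall(w):
    """Classical O(n^3) all-pairs shortest paths: relax every pair
    (i, j) through each intermediate node k in turn. Input w is an
    n x n matrix of edge weights, with float('inf') for non-edges
    and 0 on the diagonal."""
    n = len(w)
    d = [row[:] for row in w]          # don't mutate the input
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The d-distinct-weights restriction studied above constrains the rows of w (at most d distinct values per node's outgoing edges); this baseline ignores that structure entirely.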
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164611</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Weak Poincaré Inequalities, Simulated Annealing, and Sampling from Spherical Spin Glasses</title>
<link>https://hdl.handle.net/1721.1/164610</link>
<description>Weak Poincaré Inequalities, Simulated Annealing, and Sampling from Spherical Spin Glasses
Huang, Brice; Mohanty, Sidhanth; Rajaraman, Amit; Wu, David X.
There has been a recent surge of powerful tools to show rapid mixing of Markov chains, via functional inequalities such as Poincaré inequalities. In many situations, Markov chains fail to mix rapidly from a worst-case initialization, yet are expected to approximately sample from a random initialization. For example, this occurs if the target distribution has metastable states, small clusters accounting for a vanishing fraction of the mass that are essentially disconnected from the bulk of the measure. Under such conditions, a Poincaré inequality cannot hold, necessitating new tools to prove sampling guarantees.&#13;
We develop a framework to analyze simulated annealing, based on establishing so-called weak Poincaré inequalities. These inequalities imply mixing from a suitably warm start, and simulated annealing provides a way to chain such warm starts together into a sampling algorithm. We further identify a local-to-global principle to prove weak Poincaré inequalities, mirroring the spectral independence and localization schemes frameworks for analyzing mixing times of Markov chains.&#13;
As our main application, we prove that simulated annealing samples from the Gibbs measure of a spherical spin glass for inverse temperatures up to a natural threshold, matching recent algorithms based on algorithmic stochastic localization. This provides the first Markov chain sampling guarantee that holds beyond the uniqueness threshold for spherical spin glasses, where mixing from a worst-case initialization is provably slow due to the presence of metastable states. As an ingredient in our proof, we prove bounds on the operator norm of the covariance matrix of spherical spin glasses in the full replica-symmetric regime.&#13;
Additionally, we resolve a question related to sampling using data-based initializations.&#13;
The full version of this paper can be found on arXiv (arXiv ID: 2411.09075).
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164610</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Bypassing the Noisy Parity Barrier: Learning Higher-Order Markov Random Fields from Dynamics</title>
<link>https://hdl.handle.net/1721.1/164609</link>
<description>Bypassing the Noisy Parity Barrier: Learning Higher-Order Markov Random Fields from Dynamics
Gaitonde, Jason; Moitra, Ankur; Mossel, Elchanan
We consider the problem of learning graphical models, also known as Markov random fields (MRFs), from temporally correlated samples. As in many traditional statistical settings, fundamental results in the area all assume independent samples from the distribution. However, these samples generally will not directly correspond to more realistic observations from nature, which instead evolve according to some stochastic process. Through the computational lens, even generating a single sample from the true MRF distribution is intractable unless NP = RP, and moreover, any algorithm to learn from i.i.d. samples requires prohibitive runtime due to hardness reductions to the parity with noise problem. These computational barriers for sampling and learning from the i.i.d. setting severely lessen the utility of these breakthrough results for this important task; however, dropping this assumption typically only introduces further algorithmic and statistical complexities. In this work, we surprisingly demonstrate that direct trajectory data from a natural evolution of the MRF overcomes the fundamental computational lower bounds to efficient learning. In particular, we show that given a trajectory with O_k(n) site updates of an order k MRF from the Glauber dynamics, a well-studied, natural stochastic process on graphical models, there is an algorithm that recovers the graph and the parameters in O_k(n^2) time. By contrast, all prior algorithms for learning order k MRFs inherently suffer from n^{Θ(k)} runtime even in sparse instances due to the reductions to sparse parity with noise. Our results thus surprisingly show that this more realistic, but intuitively less tractable, model for MRFs actually leads to efficiency far beyond what is known and believed to be true in the traditional i.i.d. case.
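To make the input model concrete, a single site update of the Glauber dynamics for a pairwise (order-2) Ising MRF can be sketched as follows; the ring topology and inverse temperature used in the test are illustrative assumptions, not the paper's setting.

```python
import math
import random

def glauber_step(spins, neighbors, beta):
    """One site update of Glauber dynamics on a pairwise Ising MRF:
    pick a uniformly random site and resample its +/-1 spin from the
    conditional distribution given the neighboring spins."""
    i = random.randrange(len(spins))
    field = sum(spins[j] for j in neighbors[i])
    # P(spin_i = +1 | neighbors) for the Ising conditional.
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[i] = 1 if random.random() < p_plus else -1
    return spins

def trajectory(spins, neighbors, beta, steps):
    """Record the sequence of states the learner observes."""
    traj = []
    for _ in range(steps):
        spins = glauber_step(spins, neighbors, beta)
        traj.append(tuple(spins))
    return traj
```

A trajectory of such updates (of length O_k(n) in the paper's notation) is the entire input to the learning algorithm, in place of i.i.d. samples from the Gibbs distribution.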
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164609</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating Time with Square-Root Space</title>
<link>https://hdl.handle.net/1721.1/164608</link>
<description>Simulating Time with Square-Root Space
Williams, R. Ryan
We show that for all functions t(n) ≥ n, every multitape Turing machine running in time t can be simulated in space only O(√(t log t)). This is a substantial improvement over Hopcroft, Paul, and Valiant’s simulation of time t in O(t/log t) space from 50 years ago [FOCS 1975, JACM 1977]. Among other results, our simulation implies that bounded fan-in circuits of size s can be evaluated on any input in only √s · poly(log s) space, and that there are explicit problems solvable in O(n) space which require at least n^{2−ε} time on every multitape Turing machine for all ε &gt; 0, thereby making a little progress on the P versus PSPACE problem.&#13;
Our simulation reduces the problem of simulating time-bounded multitape Turing machines to a series of implicitly-defined Tree Evaluation instances with nice parameters, leveraging the remarkable space-efficient algorithm for Tree Evaluation recently found by Cook and Mertz [STOC 2024].
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164608</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Model Stealing for Any Low-Rank Language Model</title>
<link>https://hdl.handle.net/1721.1/164607</link>
<description>Model Stealing for Any Low-Rank Language Model
Liu, Allen; Moitra, Ankur
Model stealing, where a learner tries to recover an unknown model via carefully chosen queries, is a critical problem in machine learning, as it threatens the security of proprietary models and the privacy of data they are trained on. In recent years, there has been particular interest in stealing large language models (LLMs). In this paper, we aim to build a theoretical understanding of stealing language models by studying a simple and mathematically tractable setting. We study model stealing for Hidden Markov Models (HMMs), and more generally low-rank language models.&#13;
We assume that the learner works in the conditional query model, introduced by Kakade, Krishnamurthy, Mahajan and Zhang. Our main result is an efficient algorithm in the conditional query model, for learning any low-rank distribution. In other words, our algorithm succeeds at stealing any language model whose output distribution is low-rank. This improves upon the previous result which also requires the unknown distribution to have high “fidelity”, a property that holds only in restricted cases. There are two key insights behind our algorithm: First, we represent the conditional distributions at each timestep by constructing barycentric spanners among a collection of vectors of exponentially large dimension. Second, for sampling from our representation, we iteratively solve a sequence of convex optimization problems that involve projection in relative entropy to prevent compounding of errors over the length of the sequence. This is an interesting example where, at least theoretically, allowing a machine learning model to solve more complex problems at inference time can lead to drastic improvements in its performance.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164607</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Maximum Circuit Lower Bounds for Exponential-Time Arthur Merlin</title>
<link>https://hdl.handle.net/1721.1/164606</link>
<description>Maximum Circuit Lower Bounds for Exponential-Time Arthur Merlin
Chen, Lijie; Li, Jiatu; Liang, Jingxun
We show that the complexity class of exponential-time Arthur-Merlin with sub-exponential advice (AMEXP/2^(n^ε)) requires circuit complexity at least 2^n/n. Previously, the best known such near-maximum lower bounds were for symmetric exponential time by Chen, Hirahara, and Ren (STOC’24) and Li (STOC’24), or randomized exponential time with MCSP oracle and sub-exponential advice by Hirahara, Lu, and Ren (CCC’23).&#13;
Our result is proved by combining the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS’23) together with the uniform hardness-vs-randomness connection for Arthur-Merlin protocols by Shaltiel-Umans (STOC’07) and van Melkebeek-Sdroievski (CCC’23). We also provide a conceptually different proof using a novel “critical win-win” argument that extends a technique of Lu, Oliveira, and Santhanam (STOC’21).&#13;
Indeed, our circuit lower bound is a corollary of a new explicit construction for properties in coAM. We show that for every dense property P ∈ coAM, there is a quasi-polynomial-time Arthur-Merlin protocol with short advice such that the following holds for infinitely many n: There exists a canonical string w_n ∈ P ∩ {0,1}^n so that (1) there is a strategy of Merlin such that Arthur outputs w_n with probability 1 and (2) for any strategy of Merlin, with probability 2/3, Arthur outputs either w_n or a failure symbol ⊥. As a direct consequence of this new explicit construction, our circuit lower bound also generalizes to circuits with an AM ∩ coAM oracle. To our knowledge, this is the first unconditional lower bound against a strong non-uniform class using a hard language that is only “quantitatively harder”.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164606</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>List-Decoding Capacity Implies Capacity on the &#119902;-ary Symmetric Channel</title>
<link>https://hdl.handle.net/1721.1/164605</link>
<description>List-Decoding Capacity Implies Capacity on the &#119902;-ary Symmetric Channel
Pernice, Francisco; Sprumont, Oscar; Wootters, Mary
It is known that the Shannon capacity of the q-ary symmetric channel (qSC) is the same as the list-decoding capacity of an adversarial channel, raising the question of whether there is a formal (and black-box) connection between the two. We show that there is: Any linear code C ⊆ F_q^n that has superconstant minimum distance and achieves list-decoding capacity also achieves capacity on the qSC.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164605</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>A molecularly impermeable polymer from two-dimensional polyaramids</title>
<link>https://hdl.handle.net/1721.1/164604</link>
<description>A molecularly impermeable polymer from two-dimensional polyaramids
Ritt, Cody L; Quien, Michelle; Wei, Zitang; Gress, Hagen; Dronadula, Mohan T; Altmisdort, Kaan; Nguyen, Huong Giang T; Zangmeister, Christopher D; Tu, Yu-Ming; Garimella, Sanjay S; Amirabadi, Shahab; Gadaloff, Michael; Hu, Weiguo; Aluru, Narayana R; Ekinci, Kamil L; Bunch, J Scott; Strano, Michael S
All polymers exhibit gas permeability through the free volume of entangled polymer chains. By contrast, two-dimensional (2D) materials including graphene stack densely and can exhibit molecular impermeability. Solution-synthesized 2D polymers that exhibit the latter by polycondensation have been a longstanding goal. Herein, we demonstrate self-supporting, spin-coated 2D polyaramid nanofilms that exhibit nitrogen permeability below 3.1 × 10^−9 Barrer, nearly four orders of magnitude lower than every class of existing polymer, and similar for other gases tested (helium, argon, oxygen, methane and sulfur hexafluoride). Optical interference during the pressurization of nanofilm-coated microwells allows measurement of mechanosensitive rim opening and sealing, creating gas-filled bulges that are stable for more than three years. This discovery enables 2D polymer resonators with high resonance frequencies (about 8 MHz) and quality factors up to 537, similar to graphene. A 60-nm coating of air-sensitive perovskites reduces the lattice degradation rate 14-fold with an oxygen permeability of 3.3 × 10^−8 Barrer. Molecularly impermeable polymers promise the next generation of barriers that are synthetically processable, chemically amenable and maximize molecular rejection with minimal material, ultimately advancing sustainability goals.
</description>
<pubDate>Wed, 12 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164604</guid>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Molecularly Thin Polyaramid Nanomechanical Resonators</title>
<link>https://hdl.handle.net/1721.1/164552</link>
<description>Molecularly Thin Polyaramid Nanomechanical Resonators
Gress, Hagen; Ritt, Cody L; Shomakhov, Inal; Altmisdort, Kaan; Quien, Michelle; Wei, Zitang; Lawall, John R; Boddeti, Narasimha; Strano, Michael S; Bunch, J Scott; Ekinci, Kamil L
Two-dimensional polyaramids exhibit strong hydrogen bonding to create molecularly thin nanosheets analogous to graphene. Here, we report the first nanomechanical resonators made out of a two-dimensional polyaramid, 2DPA-1, with thicknesses as small as 8 nm. To fabricate these molecular-scale resonators, we transferred nanofilms of 2DPA-1 onto chips with previously etched arrays of circular microwells. We then characterized the thermal resonances of these resonators under different conditions. When there is no residual gas inside the 2DPA-1-covered microwells, the eigenfrequencies are well-described by a tensioned plate theory, providing the Young's modulus and tension of the 2DPA-1 nanofilms. With gas present, the nanofilms bulge up and mechanical resonances are modified due to the adhesion, bulging and slack present in the system. The fabrication and mechanical characterization of these first 2DPA-1 nanomechanical resonators represent a convincing path toward molecular-scale polymeric NEMS with high mechanical strength, low density, and synthetic processability.
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164552</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Interferometric Deflection Analysis of Suspended 2D Polyaramid Thin Films</title>
<link>https://hdl.handle.net/1721.1/164551</link>
<description>Interferometric Deflection Analysis of Suspended 2D Polyaramid Thin Films
Quien, Michelle; Ritt, Cody L; Garimella, Sanjay S; Gress, Hagen; Ekinci, Kamil L; Bunch, Joseph Scott; Strano, Michael S
The 2D nanofilm bulge test, which uses an Atomic Force Microscope (AFM) to measure the deflection of a suspended film under various conditions, has emerged as an important measurement platform for understanding mechanical, barrier, and permeability properties of 2D materials as thickness approaches the angstrom scale. The problem considered in this work is the limitation of such bulge analyses imposed by the AFM, whereby dynamic measurements under high pressure, high temperature, and chemically corrosive conditions are limited. In this work, a technique is developed for measuring nanofilm deflection using only visible light interferometry. Both theoretical and semi-empirical models are applied to translate multicolor interference patterns from broadband excitation into estimates of nanofilm deflection, allowing nanoscale precision in most cases. The technique and algorithm advanced in this work allow the use of widespread optical microscopy to widen the study of these important 2D nanofilm systems to more relevant conditions.
</description>
<pubDate>Fri, 05 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164551</guid>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum One-Time Programs, Revisited</title>
<link>https://hdl.handle.net/1721.1/164550</link>
<description>Quantum One-Time Programs, Revisited
Gupte, Aparna; Liu, Jiahui; Raizes, Justin; Roberts, Bhaskar; Vaikuntanathan, Vinod
One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are programs that can be run on any single input of a user’s choice, but not on a second input. Classically, they are unachievable without trusted hardware, but the destructive nature of quantum measurements seems to provide an alternate path to constructing them. Unfortunately, Broadbent, Gutoski and Stebila (CRYPTO 2013) showed that even with quantum techniques, a strong notion of one-time programs, similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a quantum one-time program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-time program security. There is a vast gap between achievable and provably impossible notions of one-time program security, and it is unclear which functionalities are one-time programmable and which are not, under the achievable notions of security.&#13;
In this work, we present new, meaningful, yet achievable definitions of one-time program security for probabilistic classical functions. We show how to construct one-time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we show a class of functions which cannot be one-time programmed in the plain model, as well as a class of functions which appears to be highly random given a single query, but whose quantum one-time program leaks the entire function even in the oracle model.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164550</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Learning the Closest Product State</title>
<link>https://hdl.handle.net/1721.1/164549</link>
<description>Learning the Closest Product State
Bakshi, Ainesh; Bostanci, John; Kretschmer, William; Landau, Zeph; Li, Jerry; Liu, Allen; O'Donnell, Ryan; Tang, Ewin
We study the problem of finding a product state with optimal fidelity to an unknown n-qubit quantum state ρ, given copies of ρ. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state? We give an algorithm which finds a product state with fidelity ε-close to optimal, using N = n^poly(1/ε) copies of ρ and poly(N) classical overhead. We further show that estimating the optimal fidelity is NP-hard for error ε = 1/poly(n), showing that the error dependence cannot be significantly improved. For our algorithm, we build a carefully-defined cover over candidate product states, qubit by qubit, and then demonstrate that extending the cover can be reduced to approximate constrained polynomial optimization. For our proof of hardness, we give a formal reduction from polynomial optimization to finding the closest product state. Together, these results demonstrate a fundamental connection between these two seemingly unrelated questions. Building on our general approach, we also develop more efficient algorithms in three simpler settings: when the optimal fidelity exceeds 5/6; when we restrict ourselves to a discrete class of product states; and when we are allowed to output a matrix product state.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164549</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Breaking the T^(2/3) Barrier for Sequential Calibration</title>
<link>https://hdl.handle.net/1721.1/164548</link>
<description>Breaking the T^(2/3) Barrier for Sequential Calibration
Dagan, Yuval; Daskalakis, Constantinos; Fishelson, Maxwell; Golowich, Noah; Kleinberg, Robert; Okoroafor, Princewill
A set of probabilistic forecasts is calibrated if each prediction of the forecaster closely approximates the empirical distribution of outcomes on the subset of timesteps where that prediction was made. We study the fundamental problem of online calibrated forecasting of binary sequences, which was initially studied by Foster and Vohra. They derived an algorithm with O(T^(2/3)) calibration error after T time steps, and showed a lower bound of Ω(T^(1/2)). These bounds remained stagnant for two decades, until Qiao and Valiant improved the lower bound to Ω(T^0.528) by introducing a combinatorial game called sign preservation and showing that lower bounds for this game imply lower bounds for calibration.&#13;
In this paper, we give the first improvement to the O(T^(2/3)) upper bound on calibration error of Foster and Vohra.&#13;
We do this by introducing a variant of Qiao and Valiant’s game that we call sign preservation with reuse (SPR). We prove that the relationship between SPR and calibrated forecasting is bidirectional: not only do lower bounds for SPR translate into lower bounds for calibration, but algorithms for SPR also translate into new algorithms for calibrated forecasting. We then give an improved upper bound for the SPR game, which implies, via our equivalence, a forecasting algorithm with calibration error O(T^(2/3 − ε)) for some ε &gt; 0, improving Foster and Vohra’s upper bound for the first time. Using similar ideas, we then prove a slightly stronger lower bound than that of Qiao and Valiant, namely Ω(T^0.54389). Our lower bound is obtained by an oblivious adversary, marking the first ω(T^(1/2)) calibration lower bound for oblivious adversaries.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164548</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>PrEP attitudes, willingness, and preferences among men incarcerated in jail in Massachusetts</title>
<link>https://hdl.handle.net/1721.1/164547</link>
<description>PrEP attitudes, willingness, and preferences among men incarcerated in jail in Massachusetts
Al Abosy, Jude; Kalavacherla, Sruthi; Koutoujian, Peter J.; Siddiqi, Kashif; Senst, Thomas; Caro, Jose; Grossman, Anna; Dong, Kimberly R.
Background: People who inject drugs (PWID) are both disproportionately incarcerated and affected by HIV infection. Systemic inequities perpetuate the cyclic nature of injection drug use (IDU) and incarceration, and both IDU and incarceration are linked to higher rates of HIV infection. Pre-exposure prophylaxis (PrEP) is highly effective in HIV prevention and is currently available as a daily oral pill. Longer-acting PrEP options, such as injectables and implants, are also in development to improve accessibility and adherence. Despite these advancements, PrEP uptake remains low among PWID and individuals recently released from jail, and there is limited literature exploring preferences for PrEP uptake within this population.&#13;
Methods: We conducted qualitative interviews using a semi-structured interview guide with 20 male participants (19 incarcerated in a Massachusetts jail and 1 recently released) to assess perceived HIV risk, knowledge of PrEP, barriers to PrEP uptake, and preferences for PrEP modality and frequency. The data were analyzed using a directed content analysis approach.&#13;
Results: Most participants were aware of their HIV risk but were largely unaware of PrEP and had never been educated about PrEP by a healthcare provider. Participants cited a lack of access to healthcare, stigma around HIV infection, and feasibility as barriers to uptake. While participants expressed interest in longer-acting PrEP, most preferred the oral pill due to distrust of the safety and efficacy of injectables and implants, countering the assumption that modality changes alone can improve low PrEP uptake.&#13;
Conclusions: Our findings underscore the urgent need for targeted education and interventions to improve HIV prevention in vulnerable populations impacted by incarceration. While long-acting injectables have been touted to help address barriers to accessing healthcare in this population, skepticism about their efficacy may undermine these efforts. Further research into willingness to take up PrEP and modality preferences is needed to meet this population's needs.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164547</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Search for a new scalar resonance decaying to a Higgs boson and another new scalar particle in the final state with two bottom quarks and two photons in proton-proton collisions at $$\sqrt{s}=13$$ TeV</title>
<link>https://hdl.handle.net/1721.1/164546</link>
<description>Search for a new scalar resonance decaying to a Higgs boson and another new scalar particle in the final state with two bottom quarks and two photons in proton-proton collisions at $$\sqrt{s}=13$$ TeV
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Matthewman, M.; Mikulec, I.
A search is presented for a new scalar resonance, X, decaying to a standard model Higgs boson and another new scalar particle, Y, in the final state where the Higgs boson decays to a $$\text{b}\overline{\text{b}}$$ pair, while the Y particle decays to a pair of photons. The search is performed in the mass range 240–1000 GeV for the resonance X, and in the mass range 70–800 GeV for the particle Y, using proton-proton collision data collected by the CMS experiment at $$\sqrt{s}=13$$ TeV, corresponding to an integrated luminosity of 132 fb^−1. In general, the data are found to be compatible with the standard model expectation. Observed (expected) upper limits at 95% confidence level on the product of the production cross section and the relevant branching fraction are extracted for the X → YH process, and are found to be within the range of 0.05–2.69 (0.08–1.94) fb, depending on mX and mY. The most significant deviation from the background-only hypothesis is observed for X and Y masses of 300 and 77 GeV, respectively, with a local (global) significance of 3.33 (0.65) standard deviations.
</description>
<pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164546</guid>
<dc:date>2025-12-23T00:00:00Z</dc:date>
</item>
<item>
<title>Public Service Provision and the Virtuous Circle: Evidence from Malawi</title>
<link>https://hdl.handle.net/1721.1/164545</link>
<description>Public Service Provision and the Virtuous Circle: Evidence from Malawi
Chen, Nuole; Grady, Christopher; Dulani, Boniface; Masumbu, Mwayi; Chiona, Busta; Bowers, Jake; Winters, Matthew S.
Many governments struggle to obtain the resources they need to govern effectively. In the virtuous circle model of state development, tax revenue allows governments to provide public goods and services to citizens, and citizens comply with taxation when governments provide sufficient levels of goods and services. The model, however, also suggests a vicious version of the circle, where citizens do not pay taxes, governments lack revenue to provide public goods and services, and citizens therefore continue to not pay taxes. Under this suboptimal equilibrium, governments cannot deliver on their governing and service provision mandates. We study whether a shock to public service provision in a major city in Malawi can induce citizens to pay taxes, thereby shifting the relationship between the city and its citizens from a vicious circle to a virtuous circle. With a difference-in-differences-style analysis, we show that households exposed to new government-provided waste collection expressed more trust in and better perceptions of the local government. Most importantly, these households were more likely to make tax payments. We find that this increase in tax payments largely came from people paying more of what they owed rather than from new taxpayers entering the rolls.
</description>
<pubDate>Mon, 29 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164545</guid>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>NonlinearSolve.jl: High-Performance and Robust Solvers for Systems of Nonlinear Equations in Julia</title>
<link>https://hdl.handle.net/1721.1/164544</link>
<description>NonlinearSolve.jl: High-Performance and Robust Solvers for Systems of Nonlinear Equations in Julia
Pal, Avik; Holtorf, Flemming; Larsson, Axel; Loman, Torkel; Rajput, Utkarsh; Schäfer, Frank; Qu, Qingyu; Edelman, Alan; Rackauckas, Chris
Efficiently solving nonlinear equations underpins numerous scientific and engineering disciplines, yet scaling these solutions to challenging system models remains difficult. This paper presents NonlinearSolve.jl -- a suite of high-performance open-source nonlinear equation solvers implemented natively in the Julia programming language. NonlinearSolve.jl distinguishes itself by offering a unified API that accommodates a diverse range of solver specifications alongside features such as automatic algorithm selection based on runtime analysis, support for static array kernels for improved GPU computation on smaller problems, and the utilization of sparse automatic differentiation and Jacobian-free Krylov methods for large-scale problem-solving. Through rigorous comparison with established tools such as PETSc SNES, Sundials KINSOL, and MINPACK, NonlinearSolve.jl demonstrates robustness and efficiency, achieving significant advancements in solving nonlinear equations while being implemented in a high-level programming language. The capabilities of NonlinearSolve.jl unlock new potentials in modeling and simulation across various domains, making it a valuable addition to the computational toolkit of researchers and practitioners alike.
</description>
<pubDate>Mon, 01 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164544</guid>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Property Testing with Online Adversaries</title>
<link>https://hdl.handle.net/1721.1/164543</link>
<description>Property Testing with Online Adversaries
Ben Eliezer, Omri; Kelman, Esty; Meir, Uri; Raskhodnikova, Sofya
The online manipulation-resilient testing model, proposed by Kalemaj, Raskhodnikova and Varma (Theory of Computing 2023), studies property testing in situations where access to the input degrades continuously and adversarially. Our main contributions are as follows:&#13;
- An extension of the model, introducing “batch queries”, where multiple queries are made and answered between each round of manipulation, and a “fractional manipulation rate”, where the adversary makes less than one manipulation per round.&#13;
- New optimal testers for linearity testing of Boolean functions in the original online and offline models.&#13;
- A new lower bound for low-degree testing of Boolean functions in the original model, which can be overcome by an algorithm using batch queries.&#13;
- Efficient testers for local properties of sequences when the manipulation rate is fractional. Specifically, for sortedness, we show a sharp transition from optimal query complexity to the impossibility of testability, depending on the manipulation rate.
</description>
<pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164543</guid>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered yeast tolerance enables efficient production from toxified lignocellulosic feedstocks</title>
<link>https://hdl.handle.net/1721.1/164542</link>
<description>Engineered yeast tolerance enables efficient production from toxified lignocellulosic feedstocks
Lam, Felix H; Turanlı-Yıldız, Burcu; Liu, Dany; Resch, Michael G; Fink, Gerald R; Stephanopoulos, Gregory
Lignocellulosic biomass remains unharnessed for the production of renewable fuels and chemicals due to challenges in deconstruction and the toxicity its hydrolysates pose to fermentation microorganisms. Here, we show in Saccharomyces cerevisiae that engineered aldehyde reduction and elevated extracellular potassium and pH are sufficient to enable near-parity production between inhibitor-laden and inhibitor-free feedstocks. By specifically targeting the universal hydrolysate inhibitors, a single strain is enhanced to tolerate a broad diversity of highly toxified genuine feedstocks and consistently achieve industrial-scale titers (cellulosic ethanol of &gt;100 grams per liter when toxified). Furthermore, a functionally orthogonal, lightweight design enables seamless transferability to existing metabolically engineered chassis strains: We endow full, multifeedstock tolerance on a xylose-consuming strain and one producing the biodegradable plastics precursor lactic acid. The demonstration of “drop-in” hydrolysate competence enables the potential of cost-effective, at-scale biomass utilization for cellulosic fuel and nonfuel products alike.
</description>
<pubDate>Fri, 25 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164542</guid>
<dc:date>2021-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Removal of lycopene substrate inhibition enables high carotenoid productivity in Yarrowia lipolytica</title>
<link>https://hdl.handle.net/1721.1/164541</link>
<description>Removal of lycopene substrate inhibition enables high carotenoid productivity in Yarrowia lipolytica
Ma, Yongshuo; Liu, Nian; Greisen, Per; Li, Jingbo; Qiao, Kangjian; Huang, Sanwen; Stephanopoulos, Gregory
Substrate inhibition of enzymes can be a major obstacle to the production of valuable chemicals in engineered microorganisms. Here, we show substrate inhibition of lycopene cyclase as the main limitation in carotenoid biosynthesis in Yarrowia lipolytica. To overcome this bottleneck, we exploit two independent approaches. Structure-guided protein engineering yields a variant, Y27R, characterized by complete loss of substrate inhibition without reduction of enzymatic activity. Alternatively, establishing a geranylgeranyl pyrophosphate synthase-mediated flux flow restrictor also prevents the onset of substrate inhibition by diverting metabolic flux away from the inhibitory metabolite while maintaining sufficient flux towards product formation. Both approaches result in high levels of near-exclusive β-carotene production. Ultimately, we construct strains capable of producing 39.5 g/L β-carotene at a productivity of 0.165 g/L/h in bioreactor fermentations (a 1441-fold improvement over the initial strain). Our findings provide effective approaches for removing substrate inhibition in engineering pathways for efficient synthesis of natural products.
</description>
<pubDate>Mon, 31 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164541</guid>
<dc:date>2022-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Isotope tracing in health and disease</title>
<link>https://hdl.handle.net/1721.1/164540</link>
<description>Isotope tracing in health and disease
Dong, Wentao; Rawat, Eshaan S; Stephanopoulos, Gregory; Abu-Remaileh, Monther
Biochemical characterization of metabolism provides molecular insights for understanding biology in health and disease. Over the past decades, metabolic perturbations have been implicated in cancer, neurodegeneration, and diabetes, among others. Isotope tracing is a technique that allows tracking of labeled atoms within metabolites through biochemical reactions. This technique has become an integral component of contemporary metabolic research. Isotope tracing measures substrate contribution to downstream metabolites and indicates its utilization in cellular metabolic networks. In addition, isotopic labeling data are necessary for quantitative metabolic flux analysis. Here, we review recent work utilizing isotope tracing to study health and disease, and highlight its application to interrogate subcellular, intercellular, and in vivo metabolism. We further discuss the current challenges and opportunities to expand the utility of isotope tracing to new research areas.
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164540</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering a universal and efficient platform for terpenoid synthesis in yeast</title>
<link>https://hdl.handle.net/1721.1/164539</link>
<description>Engineering a universal and efficient platform for terpenoid synthesis in yeast
Ma, Yongshuo; Zu, Yuexuan; Huang, Sanwen; Stephanopoulos, Gregory
Engineering microbes for the production of valuable natural products is often hindered by the regulation of native competing metabolic networks in the host. This is particularly evident in the case of terpenoid synthesis in yeast, where the canonical terpenoid precursors are tightly coupled to the biosynthesis of sterols essential for yeast viability. One way to circumvent this limitation is by engineering product pathways less connected to the host native metabolism. Here, we introduce a two-step isopentenol utilization pathway (IUP) in Saccharomyces cerevisiae to augment the native mevalonate pathway by providing a shortcut to the synthesis of the common terpenoid precursors, isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). As such, the IUP was capable of elevating the IPP/DMAPP pool by 147-fold compared with the native pathway. We further demonstrate that cofeeding isoprenol and prenol enhances geranyl diphosphate (GPP) content for monoterpene biosynthesis. More importantly, we established a synthetic three-step route for efficient synthesis of the di- and tetraterpene precursor geranylgeranyl diphosphate (GGPP), circumventing the competition with farnesyl diphosphate (FPP) for sterol biosynthesis and elevating the GGPP level by 374-fold. We combine these IUP-supported precursor-forming platforms with downstream terpene synthases to harness their potential and improve the production of industrially relevant terpenoids by severalfold. Our exploration provides a universal and effective platform for supporting terpenoid synthesis in yeast.
</description>
<pubDate>Wed, 28 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164539</guid>
<dc:date>2022-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>Oscillatory control of cortical space as a computational dimension</title>
<link>https://hdl.handle.net/1721.1/164538</link>
<description>Oscillatory control of cortical space as a computational dimension
Chen, Zhen; Brincat, Scott L.; Lundqvist, Mikael; Loonis, Roman F.; Warden, Melissa R.; Miller, Earl K.
Flexible cognition depends on the ability to represent and apply relevant information to the current task at hand. This allows the brain to interpret sensory input and guide behavior in a context-dependent manner. Recent work has proposed “spatial computing” as a mechanism for this flexibility, suggesting that task-related signals organize information processing through spatial patterns of oscillatory activity across the cortical surface. These patterns are proposed to act as “inhibitory stencils” that constrain where sensory-related information (the “content” of cognition) can be expressed in spiking activity. Here, we provide a comprehensive empirical test of spatial computing using multi-electrode recordings from the lateral prefrontal cortex in non-human primates performing a range of cognitive tasks (object working memory, sequence working memory, and categorization). We found that alpha/beta oscillations encoded task-related information, were organized into spatial patterns that changed with task conditions, and inversely correlated with the spatial expression of sensory-related spiking activity. Furthermore, we found that alpha/beta oscillations reflected misattributions of task conditions and correlated with subjects’ trial-by-trial decisions. These findings validate core predictions of spatial computing, suggesting that oscillatory dynamics not only gate information in time but also shape where in the cortex cognitive content is represented. This framework offers a unifying principle for understanding how the brain flexibly coordinates cognition through structured population dynamics.
</description>
<pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164538</guid>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Working memory readout varies with frontal theta rhythms</title>
<link>https://hdl.handle.net/1721.1/164537</link>
<description>Working memory readout varies with frontal theta rhythms
Han, Hio-Been; Brincat, Scott L.; Buschman, Timothy J.; Miller, Earl K.
Increasing evidence suggests that attention varies rhythmically, phase locked to ongoing cortical oscillations. Here, we report that the phase of theta oscillations (3–6 Hz) in the frontal eye field (FEF) is associated with the spatiotemporal variation of information readout from working memory (WM). Non-human primates were briefly shown a sample array of colored squares. A short time later, they viewed a test array and were rewarded for identifying which square changed color (the target). Behavioral performance varied systematically with theta phase at the time of test array onset, as well as with the target’s location. This is consistent with theta “scanning” across the FEF and thus visual space from top to bottom. Theta was coupled, on opposing phases, to both spiking and beta (12–20 Hz). These results could be explained by a wave of activity that moves across the FEF, modulating the readout of information from WM.
</description>
<pubDate>Wed, 07 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164537</guid>
<dc:date>2026-01-07T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Emotional Effects of Enhanced Interoception via Heartbeat-Synchronized Haptic Feedback</title>
<link>https://hdl.handle.net/1721.1/164536</link>
<description>Exploring the Emotional Effects of Enhanced Interoception via Heartbeat-Synchronized Haptic Feedback
Kim, Minsol; Whitmore, Nathan; Chua, Phoebe; Pei, Serena; Abdalla, Malak; Maes, Pattie
This study examines how amplifying real-time heartbeat feedback affects emotion regulation. Accurate heartbeat perception—a key facet of cardiac interoception—has been linked to emotional awareness and mental well-being, yet the causal role of interoceptive feedback in emotion regulation remains underexplored. We empirically tested whether making heart rate signals more perceptible through wearable haptic feedback could facilitate implicit emotion regulation during emotionally evocative experiences. Using a custom Fitbit-based system, thirty participants received real-time, sham, or no heartbeat-synchronized vibrations while viewing fear- and amusement-inducing film clips. Interoceptive accuracy, emotional disturbance, and the linguistic complexity of emotion descriptions were measured. Exploratory analyses showed that real-time feedback reduced emotional disturbance during fear stimuli, especially among individuals attentive to bodily sensations, though effects did not remain significant after multiple comparisons correction. Feedback primarily modulated arousal rather than valence and did not significantly affect heartbeat counting or linguistic complexity. As one of the first causal, empirical investigations of interoceptive feedback and emotion regulation, this work identifies boundary conditions for its effectiveness and offers insights for designing personalized, interoception-aware wearable technologies.
</description>
<pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164536</guid>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>EcoLearn: Optimizing the Carbon Footprint of Federated Learning</title>
<link>https://hdl.handle.net/1721.1/164535</link>
<description>EcoLearn: Optimizing the Carbon Footprint of Federated Learning
Mehboob, Talha; Bashir, Noman; Iglesias, Jesus Omaña; Zink, Michael; Irwin, David
Federated Learning (FL) distributes machine learning (ML) training across edge devices to reduce data transfer overhead and protect data privacy. Since FL model training may span hundreds of devices and is thus resource- and energy-intensive, it has a significant carbon footprint. Importantly, since energy's carbon-intensity differs substantially (by up to 60×) across locations, training on the same device using the same amount of energy, but at different locations, can incur widely different carbon emissions. While prior work has focused on improving FL's resource- and energy-efficiency by optimizing time-to-accuracy, it implicitly assumes all energy has the same carbon intensity and thus does not optimize carbon efficiency, i.e., work done per unit of carbon emitted.&#13;
To address the problem, we design EcoLearn, which minimizes FL's carbon footprint without significantly affecting model accuracy or training time. EcoLearn achieves a favorable tradeoff by integrating carbon awareness into multiple aspects of FL training, including i) selecting clients with high data utility and low carbon, ii) provisioning more clients during the initial training rounds, and iii) mitigating stragglers by dynamically adjusting client over-provisioning based on carbon. We implement EcoLearn and its carbon-aware FL training policies in the Flower framework and show that it reduces the carbon footprint of training (by up to 10.8×) while maintaining model accuracy and training time (within ~1%) compared to state-of-the-art approaches.
SEC ’25, Arlington, VA, USA
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164535</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Democratizing Multi-Granularity Spatio-Temporal Intelligence with Multi-Agent Systems</title>
<link>https://hdl.handle.net/1721.1/164534</link>
<description>Democratizing Multi-Granularity Spatio-Temporal Intelligence with Multi-Agent Systems
Wu, Che-Cheng; Huang, Syuan-Bo; Song, Yu-Lun; Lin, Po-Han; Lin, Michael; Lin, Yu-Ta
We propose a system that democratizes multi-granularity spatio-temporal analysis by integrating a Discrete Global Grid System (DGGS) data pipeline with a Multi-Agent System (MAS). Unlike existing single-agent spatial AI solutions that primarily target experts and lack support for heterogeneous data, persistent memory, and validation, our platform converts diverse datasets into standardized H3-indexed cells, enabling consistent analysis across scales. To enhance usability for non-experts, the system interactively guides users to refine queries, which are decomposed into sub-tasks managed by specialized agents for data retrieval, transformation, analysis, and visualization. Agents communicate through a decentralized framework with shared memory, supporting persistent reasoning and multi-turn dialogue. Reflection modules and human-in-the-loop validation further strengthen robustness. Demonstrated through real-world scenarios, such as analyzing the relationship between aging rate patterns and average income to inform social welfare policy in Taiwan, the system illustrates how natural language queries, combined with intuitive map- and chart-based visualizations, can support evidence-based decision-making.
GeoGenAgent ’25, Minneapolis, MN, USA
</description>
<pubDate>Sun, 02 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164534</guid>
<dc:date>2025-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>One String to Pull Them All: Fast Assembly of Curved Structures from Flat Auxetic Linkages</title>
<link>https://hdl.handle.net/1721.1/164533</link>
<description>One String to Pull Them All: Fast Assembly of Curved Structures from Flat Auxetic Linkages
Zaman, Akib; Aslarus, Jacqueline; Li, Jiaji; Mueller, Stefanie; Konakovic Lukovic, Mina
We present a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. The target structures are decomposed into rigid spatially varied quad tiles that are optimized to approximate the user-provided surface, forming a flat mechanical linkage. Our algorithm then uses a two-step method to find a physically realizable string path that controls only a subset of tiles to smoothly actuate the structure from flat to assembled configuration. We initially compute the minimal subset of tiles that are required to be controlled with the string considering the geometry of the structure and interaction among the tiles. We then find a valid string path through these tiles that minimizes friction, which will assemble the flat linkage into the target 3D structure upon tightening a single string. The resulting designs can be easily manufactured with computational fabrication techniques such as 3D printing, CNC milling, molding, etc. in flat configuration that, in addition to manufacturing, facilitates storage and transportation. We validate our approach by developing a series of physical prototypes and showcasing various application case studies, ranging from medical devices, space shelters, to architectural designs.
</description>
<pubDate>Thu, 04 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164533</guid>
<dc:date>2025-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Discovering Folding Lines for Surface Compression</title>
<link>https://hdl.handle.net/1721.1/164532</link>
<description>Discovering Folding Lines for Surface Compression
Aoki, Toshiki; Tachi, Tomohiro; Konakovic Lukovic, Mina
The miniaturization of shell structures presents a versatile and complex challenge, bridging geometry with diverse practical applications. In this paper, we introduce a novel approach for computing origami crease patterns to compress arbitrary 3D shell objects. First, we employ the adapted Material Point Method (MPM) to simulate the compression of a target surface and obtain an initial folded configuration. Since MPM produces overly smooth curved surfaces, their crease patterns are unsuitable for practical origami fabrication. We then propose a novel Folding Line Extraction (FLE) method that optimizes these smoothed surfaces to extract folding lines that achieve the target compression with minimal deformation and stretching outside the crease lines. This method produces smooth curved folding lines. Fabrication and experimental validation of the extracted patterns demonstrate their effectiveness and applicability in real-world scenarios.
SA Conference Papers ’25, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164532</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>3DPR: Single Image 3D Portrait Relighting with Generative Priors</title>
<link>https://hdl.handle.net/1721.1/164531</link>
<description>3DPR: Single Image 3D Portrait Relighting with Generative Priors
Rao, Pramod; Meka, Abhimitra; Zhou, Xilong; Fox, Gereon; B R, Mallikarjun; Zhan, Fangneng; Weyrich, Tim; Bickel, Bernd; Pfister, Hanspeter; Matusik, Wojciech; Beeler, Thabo; Elgharib, Mohamed; Habermann, Marc; Theobalt, Christian
Rendering novel, relit views of a human head, given a monocular portrait image as input, is an inherently underconstrained problem. The traditional graphics solution is to explicitly decompose the input image into geometry, material and lighting via differentiable rendering; but this is constrained by the multiple assumptions and approximations of the underlying models and parameterizations of these scene components. We propose 3DPR, an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images captured in a light stage. We introduce a new diverse and large-scale multi-view 4K OLAT dataset of 139 subjects to learn a high-quality prior over the distribution of high-frequency face reflectance. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. The input portrait is first embedded in the latent manifold of such a model through an encoder-based inversion process. Then a novel triplane-based reflectance network trained on our lightstage data is used to synthesize high-fidelity OLAT images to enable image-based relighting. Our reflectance network operates in the latent space of the generative head model, crucially enabling a relatively small number of lightstage images to train the reflectance model. Combining the generated OLATs according to a given HDRI environment map yields physically accurate environmental relighting results. Through quantitative and qualitative evaluations, we demonstrate that 3DPR outperforms previous methods, particularly in preserving identity and in capturing lighting effects such as specularities, self-shadows, and subsurface scattering.
SA Conference Papers ’25, December 15–18, 2025, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164531</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light</title>
<link>https://hdl.handle.net/1721.1/164530</link>
<description>Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light
Klinghoffer, Tzofi; Somasundaram, Siddharth; Xiang, Xiaoyu; Fan, Yuchen; Richardt, Christian; Dave, Akshat; Raskar, Ramesh; Ranjan, Rakesh
3D scene reconstruction from a single measurement is challenging, especially in the presence of occluded regions and specular materials, such as mirrors. We address these challenges by leveraging single-photon lidars. These lidars estimate depth from light that is emitted into the scene and reflected directly back to the sensor. However, they can also measure light that bounces multiple times in the scene before reaching the sensor. This multi-bounce light contains additional information that can be used to recover dense depth, occluded geometry, and material properties. Prior work with single-photon lidar, however, has only demonstrated these use cases when a laser sequentially illuminates one scene point at a time. We instead focus on the more practical – and challenging – scenario of illuminating multiple scene points simultaneously. The complexity of light transport due to the combined effects of multiplexed illumination, two-bounce light, shadows, and specular reflections is challenging to invert analytically. Instead, we propose a data-driven method to invert light transport in single-photon lidar. To enable this approach, we create the first large-scale simulated dataset of ~100k lidar transients for indoor scenes. We use this dataset to learn a prior on complex light transport, enabling measured two-bounce light to be decomposed into the constituent contributions from each laser spot. Finally, we experimentally demonstrate how this decomposed light can be used to infer 3D geometry in scenes with occlusions and mirrors from a single measurement. Our code and dataset are released on our project webpage.
SA Conference Papers ’25, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164530</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models</title>
<link>https://hdl.handle.net/1721.1/164529</link>
<description>PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models
Zhan, Xiao; Jambon, Clément; Thompson, Evan; Ng, Kenney; Konaković Luković, Mina
Generative models have recently demonstrated impressive capabilities in producing high-quality 3D shapes from a variety of user inputs (e.g., text or images). However, generated objects often lack physical integrity. We introduce PhysiOpt, a differentiable physics optimizer designed to improve the physical behavior of 3D generative outputs, enabling them to transition from virtual designs to physically plausible, real-world objects. While most generative models represent geometry as continuous implicit fields, physics-based approaches often rely on the finite element method (FEM), requiring ad hoc mesh extraction to perform shape optimization. In addition, these methods are typically slow, limiting their integration in fast, iterative generative design workflows. Instead, we bridge the representation gap and propose a fast and effective differentiable simulation pipeline that optimizes shapes directly in the latent space of generative models using an intuitive and easy-to-implement differentiable mapping. This approach enables fast optimization while preserving semantic structure, unlike traditional methods relying on local mesh-based adjustments. We demonstrate the versatility of our optimizer across a range of shape priors, from global and part-based latent models to a state-of-the-art large-scale 3D generator, and compare it to a traditional mesh-based shape optimizer. Our method preserves the native representation and capabilities of the underlying generative model while supporting user-specified materials, loads, and boundary conditions. The resulting designs exhibit improved physical behavior, remain faithful to the learned priors, and are suitable for fabrication. We demonstrate the effectiveness of our approach on both virtual and fabricated objects.
SA Conference Papers ’25, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164529</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Rank Adaptation of Neural Fields</title>
<link>https://hdl.handle.net/1721.1/164528</link>
<description>Low-Rank Adaptation of Neural Fields
Truong, Anh; Mahmoud, Ahmed; Konaković Luković, Mina; Solomon, Justin
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields—neural network parameterizations of visual or physical functions—has received less attention. We propose a parameter-efficient strategy for updating neural fields using low-rank adaptations (LoRA). LoRA, a method from the parameter-efficient fine-tuning LLM community, encodes small updates to pre-trained models with minimal computational overhead. We adapt LoRA for instance-specific neural fields, avoiding the need for large pre-trained models and yielding lightweight updates. We validate our approach with experiments in image filtering, geometry editing, video compression, and energy-based editing, demonstrating its effectiveness and versatility for representing neural field updates.
Anh Truong, Ahmed H. Mahmoud, Mina Konaković Luković, and Justin Solomon. 2025. Low-Rank Adaptation of Neural Fields. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers (SA Conference Papers '25). Association for Computing Machinery, New York, NY, USA, Article 86, 1–12.
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164528</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Participatory Evolution of Artificial Life Systems via Semantic Feedback</title>
<link>https://hdl.handle.net/1721.1/164527</link>
<description>Participatory Evolution of Artificial Life Systems via Semantic Feedback
Li, Shuowen; Wang, Kexin; Fang, Minglu; Huang, Danqi; Asadipour, Ali; Mi, Haipeng; Sun, Yitong
We present a semantic-feedback framework that treats natural language as a regulatory signal for evolving artificial-life systems. Instead of using prompts to select finished images, text in our system shapes the dynamics of an interactive ecosystem, allowing audiences to cultivate behaviors over time. The framework couples a learned mapping from prompts to simulation parameters with evolutionary search and vision–language evaluation, so user intent modulates both visible outcomes and the underlying generative rules. It supports iterative prompt refinement, multi-agent interaction, and the synthesis of new collective rules from community input. In a user study, participants achieved higher semantic alignment and reported a greater sense of control than with manual tuning, while behaviors remained diverse across generations. As an art-led contribution, the work reframes authoring as participatory cultivation and advances open-ended evolution as a socially distributed, not solely algorithmic, process; as a tool contribution, it offers a practical platform for co-creative generative design.
SA Art Papers ’25, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164527</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Physical Manifestation of Generative AI Music Systems for Live Performance</title>
<link>https://hdl.handle.net/1721.1/164526</link>
<description>Physical Manifestation of Generative AI Music Systems for Live Performance
Naseck, Perry; Blanchard, Lancelot; Lavakare, Madhav; Lecamwasam, Kimaya; Paradiso, Joseph
This paper explores the physical manifestation of generative AI music systems for live performance, focusing on bridging the expressive gap between AI-generated music and audience perception. Through a year-long collaboration with a human performer, we constructed a kinetic sculpture that visualizes the outputs of an AI jam_bot during concerts. The sculpture, powered by ML-based and pattern-driven mapping methodologies, interprets real-time AI musical decisions as expressive movements. Audience feedback indicates increased engagement and curiosity, although interpretability remains a challenge. Our work highlights the potential of embodied visualization to establish communicative presence for AI performers and suggests avenues for future research.
SA Art Papers ’25, Hong Kong, Hong Kong
</description>
<pubDate>Sun, 14 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164526</guid>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Performant Unified GPU Kernels for Portable Singular Value Computation Across Hardware and Precision</title>
<link>https://hdl.handle.net/1721.1/164525</link>
<description>Performant Unified GPU Kernels for Portable Singular Value Computation Across Hardware and Precision
Ringoot, Evelyne; Alomairy, Rabab; Churavy, Valentin; Edelman, Alan
This paper presents a portable, GPU-accelerated implementation of a QR-based singular value computation algorithm in Julia. The singular value decomposition (SVD) is a fundamental numerical tool in scientific computing and machine learning, providing optimal low-rank matrix approximations. Its importance has increased even more in large-scale machine learning pipelines, including large language models (LLMs), where it enables low-rank adaptation (LoRA). The implemented algorithm is based on the classic two-stage QR reduction, consisting of successive matrix reduction to band form and bidiagonal form. Our implementation leverages Julia’s multiple dispatch and metaprogramming capabilities, integrating with the GPUArrays and KernelAbstractions frameworks to provide a unified type and hardware-agnostic function. It supports diverse GPU architectures and data types, and is, to our knowledge, the first GPU-accelerated singular value implementation to support Apple Metal GPUs and half precision. Performance results on multiple GPU backends and data types demonstrate that portability does not require sacrificing performance: the unified function outperforms most linear algebra libraries (MAGMA, SLATE, rocSOLVER, oneMKL) for matrix sizes larger than 1024 × 1024, and achieves 80%-90% of the performance of cuSOLVER for large matrices.
ICPP ’25, San Diego, CA, USA
</description>
<pubDate>Sat, 20 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164525</guid>
<dc:date>2025-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>UQGNN: Uncertainty Quantification of Graph Neural Networks for Multivariate Spatiotemporal Prediction</title>
<link>https://hdl.handle.net/1721.1/164524</link>
<description>UQGNN: Uncertainty Quantification of Graph Neural Networks for Multivariate Spatiotemporal Prediction
Yu, Dahai; Zhuang, Dingyi; Jiang, Lin; Xu, Rongchao; Ye, Xinyue; Bu, Yuheng; Wang, Shenhao; Wang, Guang
Spatiotemporal prediction plays a critical role in numerous real-world applications such as urban planning, transportation optimization, disaster response, and pandemic control. In recent years, researchers have made significant progress by developing advanced deep learning models for spatiotemporal prediction. However, most existing models are deterministic, i.e., predicting only the expected mean values without quantifying uncertainty, leading to potentially unreliable and inaccurate outcomes. While recent studies have introduced probabilistic models to quantify uncertainty, they typically focus on a single phenomenon (e.g., taxi, bike, crime, or traffic crashes), thereby neglecting the inherent correlations among heterogeneous urban phenomena. To address the research gap, we propose a novel Graph Neural Network with Uncertainty Quantification, termed UQGNN for multivariate spatiotemporal prediction. UQGNN introduces two key innovations: (i) an Interaction-aware Spatiotemporal Embedding Module that integrates a multivariate diffusion graph convolutional network and an interaction-aware temporal convolutional network to effectively capture complex spatial and temporal interaction patterns, and (ii) a multivariate probabilistic prediction module designed to estimate both expected mean values and associated uncertainties. Extensive experiments on four real-world multivariate spatiotemporal datasets from Shenzhen, New York City, and Chicago demonstrate that UQGNN consistently outperforms state-of-the-art baselines in both prediction accuracy and uncertainty quantification. For example, on the Shenzhen dataset, UQGNN achieves a 5% improvement in both prediction accuracy and uncertainty quantification.
SIGSPATIAL ’25, Minneapolis, MN, USA
</description>
<pubDate>Fri, 12 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164524</guid>
<dc:date>2025-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>SONAR Web: A Platform-Agnostic Framework for Real-Time Decentralized Learning Across Heterogeneous Edge Clients</title>
<link>https://hdl.handle.net/1721.1/164523</link>
<description>SONAR Web: A Platform-Agnostic Framework for Real-Time Decentralized Learning Across Heterogeneous Edge Clients
Yuan, Joyce; Le, Brian; Le, Kathryn; Shi, Yichuan; Singh, Abhishek; Sharma, Rishi; Patricio, Angel; Raskar, Ramesh
Most federated learning (FL) frameworks assume reliable networks and homogeneous devices, limiting their applicability in mobile and edge environments where connectivity is intermittent and devices are highly heterogeneous. We introduce SONAR Web, an open-source framework for fully decentralized, cross-platform collaborative learning between browsers, servers, tablets, and smartphones. SONAR Web decouples the learning protocol from the underlying client platform through a platform-agnostic configuration interface—enabling Python, JavaScript, and mobile clients to seamlessly interoperate in real time. By combining peer-to-peer RTC protocols with communication-efficient techniques from FL, SONAR Web supports privacy-preserving training without centralized orchestration. We demonstrate SONAR Web's robustness through deployments on real-world devices and networks, showing resilience under heterogeneous network conditions and resource variability. SONAR Web provides a unified, language-agnostic interface for decentralized learning, enabling seamless collaboration across heterogeneous devices and runtimes—advancing scalable, inclusive, and real-time model training at the mobile and edge frontier.
FLEdge-AI ’25, November 4-8, 2025, Hong Kong, China
</description>
<pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164523</guid>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Ferrozuit: Ferromagnetic Electronic Textile System for Zero-Gravity Spatial Anchoring</title>
<link>https://hdl.handle.net/1721.1/164522</link>
<description>Ferrozuit: Ferromagnetic Electronic Textile System for Zero-Gravity Spatial Anchoring
Honnet, Cedric; Freire, Rachel; Cherston, Juliana; Guenther, Maximilian; Paradiso, Joseph; Wicaksono, Irmandy
Long-duration human space missions introduce persistent physical, physiological, and psychological challenges stemming from the absence of gravity. Beyond major concerns like bone deterioration, cardiovascular deconditioning, and muscle atrophy, astronauts frequently experience spatial disorientation, discomfort during routine tasks, and difficulty maintaining stable body positioning. These subtle yet pervasive issues impact daily functioning, underscoring the need for lightweight, unobtrusive solutions that support orientation, comfort, and stability in microgravity environments. Ferrozuit introduces a solution to address these challenges in microgravity. It is a prototype crafted from custom ferromagnetic thread, woven and tailored to interact with programmable (electro)permanent magnets embedded within the microgravity environment. This system aims to provide an anchoring force intended to improve stability during tasks, enhance comfort during rest, and create a sense of orientation. This paper details the design rationale, the fabrication of the ferromagnetic textile, the magnetic docking system, initial technical evaluations, and potential applications. Ferrozuit reimagines spatial anchoring as an embedded, textile-driven experience, blending textile craft with advanced materials for adaptive wearable anchoring in microgravity environments.
UbiComp Companion ’25, Espoo, Finland
</description>
<pubDate>Mon, 29 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164522</guid>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Soft Wearables</title>
<link>https://hdl.handle.net/1721.1/164521</link>
<description>Intelligent Soft Wearables
Yu, Tianhong; Honnet, Cedric; Cheng, Tingyu; Takahashi, Ryo; Zhou, Bo; Zhang, Cheng; Lukowicz, Paul; Kawahara, Yoshihiro; Hester, Josiah; Paradiso, Joseph; Luo, Yiyue; Wicaksono, Irmandy
Human bodies are almost always in contact with soft materials like clothing, for warmth, protection, self-expression, etc. Recent advancements in intelligent soft wearables have augmented these on-body soft objects with computational functions and intelligence with little compromise on the softness and comforts of wearables, allowing prolonged wear. These innovations, which combine advanced soft sensor design, fabrication, and computational power, offer unprecedented opportunities to improve our health, productivity, and overall well-being with monitoring and assistive capabilities. However, the inherent physical properties of soft materials present unique challenges in achieving practical interactions. The complexity of intelligent soft wearables, multiplexing intricate designs, soft materials, flexible electronics, advanced signal processing algorithms, and machine learning models, necessitates collaborative efforts from experts across diverse domains. This workshop aims to bring together interested researchers and practitioners across relevant domains to discuss the challenges and opportunities of intelligent soft wearables.
</description>
<pubDate>Mon, 29 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164521</guid>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum dots: A journey from fundamental discovery to technological impacts</title>
<link>https://hdl.handle.net/1721.1/164520</link>
<description>Quantum dots: A journey from fundamental discovery to technological impacts
Hassan, Abeera; Kaur, Jaspreet; Chen, Ou; Bawendi, Moungi G.
This article traces the evolution of quantum dots (QDs) from their initial discovery to growing technological impacts. We highlight the key breakthroughs in the development of colloidal QDs that have enabled precise control over their unique optical and optoelectronic properties. We also discuss a range of QD-based applications and address commercialization efforts. Finally, we examine ongoing challenges and emerging opportunities that are set to shape the future of QD research and technological advancement.
</description>
<pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164520</guid>
<dc:date>2025-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Proximity-labeling proteomics reveals remodeled interactomes and altered localization of pathogenic SHP2 variants</title>
<link>https://hdl.handle.net/1721.1/164519</link>
<description>Proximity-labeling proteomics reveals remodeled interactomes and altered localization of pathogenic SHP2 variants
van Vlimmeren, Anne E.; Tang, Lauren C.; Jiang, Ziyuan; Iyer, Abhishek; Voleti, Rashmi; Krismer, Konstantin; Gaublomme, Jellert T.; Jovanovic, Marko; Shah, Neel H.
Missense mutations in PTPN11, which encodes the protein tyrosine phosphatase SHP2, are common in several developmental disorders and cancers. While many mutations disrupt auto-inhibition and hyperactivate SHP2, several do not enhance catalytic activity. Both activating and non-activating mutations could potentially drive pathogenic signaling by altering SHP2 interactions or localization. We employed proximity-labeling proteomics to map the interaction networks of wild-type SHP2, ten clinically relevant mutants, and SHP2 bound to an inhibitor that stabilizes its auto-inhibited state. Our analyses reveal mutation- and inhibitor-dependent alterations in the SHP2 interactome, with several mutations also changing localization. Some mutants show increased mitochondrial localization and impact mitochondrial function. This study provides a resource for exploring SHP2 signaling and offers new insights into the molecular basis of SHP2-driven diseases. Furthermore, this work highlights the capacity for proximity-labeling proteomics to detect missense-mutation-dependent changes in protein interactions and localization.
</description>
<pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164519</guid>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots</title>
<link>https://hdl.handle.net/1721.1/164518</link>
<description>Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots
Jaiswal, Nikhil; Ma, Yuanchao; Lebouché, Bertrand; Poenaru, Dan; Pomey, Marie-Pascale; Achiche, Sofiane; Lessard, David; Engler, Kim; Montiel, Zully; Acevedo, Hector; Gameiro, Rodrigo R.; Celi, Leo A.; Osmanlliu, Esli
Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-based, accountable, accurate, and patient-centred. This conceptual, practice-informed Perspective reflects on engaging patients and non-academic partners for the responsible integration of LLMs, grounded in the co-construction of MARVIN (for people living with HIV) and in an emerging collaboration with MIT Critical Data. Organised by the Software Development Life Cycle, we describe: conception/needs assessment with patient partners to identify use cases, acceptable trade-offs, and privacy expectations; development that prioritises grounding via vetted sources, structured human feedback, and data-validation committees including patient partners; testing and evaluation using patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) chosen in collaboration with patients to capture usability, acceptability, trust, and perceived safety, alongside task performance and harmful-output monitoring; and implementation via diverse governance boards, knowledge-mobilisation materials to set expectations, and risk-management pathways for potentially unsafe outputs. Based on our experience with MARVIN, we recommend early and continuous engagement of patients and non-academic partners, fair compensation, shared decision-making power, transparent decision logging, and inclusive, adaptable governance that can evolve with changing models and standards. These lessons highlight how patient partnership can directly shape chatbot design and oversight, helping teams align LLM-enabled tools with patient-centred goals while building accountable, safe, and equitable systems.
</description>
<pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164518</guid>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Study of charm mixing and CP violation with D0 → K±π∓π±π∓ decays</title>
<link>https://hdl.handle.net/1721.1/164517</link>
<description>Study of charm mixing and CP violation with D0 → K±π∓π±π∓ decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A study of charm mixing and CP violation in D0 → K±π∓π±π∓ decays is performed using data collected by the LHCb experiment in proton-proton collisions from 2015 to 2018, corresponding to an integrated luminosity of 6 fb−1. The ratio of promptly produced D0 → K+π−π+π− to D0 → K−π+π−π+ decay rates is measured as a function of D0 decay time, both inclusive over phase space and in bins of phase space. Taking external inputs for the D0–D̄0 mixing parameters x and y allows constraints to be obtained on the hadronic parameters of the charm decay. When combined with previous measurements from charm-threshold experiments and at LHCb, improved knowledge is obtained for these parameters, which is valuable for studies of the angle γ of the Unitarity Triangle. An alternative analysis is also performed, in which external inputs are taken for the hadronic parameters, and the mixing parameters are determined, including Δx and Δy, which are nonzero in the presence of CP violation. It is found that x = (0.85 +0.15/−0.24)%, y = (0.21 +0.29/−0.27)%, Δx = (−0.02 ± 0.04)%, and Δy = (0.02 +0.04/−0.03)%. These results are consistent with previous measurements and the hypothesis of CP conservation.
</description>
<pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164517</guid>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Energy-energy correlator at hadron colliders: celestial blocks and singularities</title>
<link>https://hdl.handle.net/1721.1/164516</link>
<description>Energy-energy correlator at hadron colliders: celestial blocks and singularities
Chen, Hao; Ruan, Hongyi; Zhu, Hua X.
The energy-energy correlator (EEC) is an event-shape observable that characterizes the distribution of energy flux in collision events. We initiate the study of the full-range EEC at hadron colliders, generalizing the extensively studied EEC in e+e− collisions as well as the transverse EEC in hadron collisions. We derive celestial blocks from Lorentz symmetry to perform partial wave decomposition of the EEC at hadron colliders. These celestial blocks are essentially conformal blocks on the 2d celestial sphere, which have additional dependence on the collinear spin of the “light-ray transition matrix” along the collision axis. In this work, we perform the leading-order (LO) analytic calculation of this observable in pure Yang-Mills theory and use it as an example to illustrate the block decomposition. Numerically, the block expansion demonstrates superior accuracy in the collinear limit compared to a conventional power series expansion. Analytically, we observe in this example that the block coefficients exhibit analyticity in both collinear and transverse spin. In addition, we analyze several kinematic limits at LO — collinear, back-to-back, opposite coplanar and Regge limit. While the first three limits naturally generalize their e+e− collision counterparts or transverse EEC and are governed by soft-collinear dynamics, the Regge limit requires complete angular dependence and reveals BFKL physics. Phenomenologically, we propose a realistic experimental setup and briefly discuss how the convolution with the parton distribution functions modifies the perturbative EEC result. Our work suggests that the full-range EEC at hadron colliders is an elegant observable which probes a broader kinematic space and connects various regimes of different QCD dynamics through a single measurement.
</description>
<pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164516</guid>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the B0 → ρ(770)0γ branching fraction</title>
<link>https://hdl.handle.net/1721.1/164473</link>
<description>Measurement of the B0 → ρ(770)0γ branching fraction
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The ratio between the branching fractions of the B0 → ρ(770)0γ and B0 → K*(892)0γ decays is measured with proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The measured value is B(B0 → ρ(770)0γ)/B(B0 → K*(892)0γ) = 0.0189 ± 0.0007 ± 0.0005, where the first uncertainty is statistical and the second systematic. The branching fraction for B0 → ρ(770)0γ decays is hence obtained as B(B0 → ρ(770)0γ) = (7.9 ± 0.3 ± 0.2 ± 0.2) × 10−7, where the last uncertainty is due to the branching fraction of the normalisation mode. This result assumes that both the ρ(770)0 and K*(892)0 decays saturate the dihadron mass spectra considered in the analysis. It is consistent with the current world-average value and is by far the most precise measurement to date.
</description>
<pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164473</guid>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Beamdump facility at Jefferson Lab</title>
<link>https://hdl.handle.net/1721.1/164472</link>
<description>A Beamdump facility at Jefferson Lab
Achenbach, Patrick; Afanasev, Andrei; Ambrozewicz, Pawel; Ashkenazi, Adi; Banerjee, Dipanwita; Battaglieri, Marco; Benesch, Jay; Bondí, Mariangela; Brindza, Paul; Camsonne, Alexandre; Christy, Eric M.; Cline, Ethan W.; Cuevas, Chris; Dilling, Jens; Doria, Luca; Fegan, Stuart; Filippini, Marco; Fulci, Antonino; Giovannella, Simona; Grazzi, Stefano
The potential of the intense secondary muon, neutrino, and (hypothetical) light dark matter beams at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) is explored. These beams are produced in the high-power dumps by high-current electron beams. Light dark matter searches with the approved Beam Dump eXperiment (BDX) are driving the realization of a new underground vault behind Hall A that could be extended into a Beamdump Facility with little additional installation. Uniquely, the high-energy muons are created via the Bethe–Heitler process rather than through the more common pion production and decay channels. Several possible muon physics applications are highlighted. Neutrino detector technologies and experiments suitable for a beamdump facility are outlined.
</description>
<pubDate>Wed, 24 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164472</guid>
<dc:date>2025-12-24T00:00:00Z</dc:date>
</item>
<item>
<title>Linking Chemical Phase and Mechanical Properties to Evaluate the Use of Millimeter-Wave Induced Vitrified Basalt in Enhanced Geothermal Systems</title>
<link>https://hdl.handle.net/1721.1/164471</link>
<description>Linking Chemical Phase and Mechanical Properties to Evaluate the Use of Millimeter-Wave Induced Vitrified Basalt in Enhanced Geothermal Systems
Meltzer, Eve R.; Stefaniuk, Damian; Einstein, Herbert H.
Extraction of geothermal energy from Earth’s heat could contribute significantly to long-term energy needs, yet the current geothermal drilling process faces major technical limitations. A promising advancement in enhanced geothermal systems is the use of a millimeter-wave (MMW) gyrotron, which enables faster and more efficient drilling. The MMW drilling process offers two key advantages over traditional methods: (1) rock is melted rather than mechanically drilled, leading to faster well hole advancement, and (2) the molten rock solidifies into a vitrified wall, eliminating the need for additional casing materials. This integrated drilling and casing method has the potential to save costs, time, and materials. This paper examines the strength, structural integrity, and microscale mechanical and chemical properties of the vitrified material formed during the MMW process, focusing on basalt as the test material. By employing a suite of experimental and analytical characterization techniques, this study aims to provide a comprehensive comparison of the structural, mechanical, and chemical changes in the rock before and after melting, offering insights into the effectiveness and implications of MMW drilling for enhanced geothermal systems.
Highlights: There is a clear change of phase between the basalt, the transition zone, and the melt due to MMW exposure. The region exposed to MMWs is completely vitrified, while minerals in the zone just outside the MMW beam are only partially melted. The transition zone created by MMWs poses a high risk to wellbore stability due to its variable mechanical strength and chemical composition. A better understanding of this new material can be achieved by overlaying a series of chemical and mechanical characterization data.
</description>
<pubDate>Mon, 22 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164471</guid>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</item>
<item>
<title>Reimagining Commercial Health Insurance in India: A System-Dynamics Approach to Complex Stakeholder Incentives and Policy Outcomes</title>
<link>https://hdl.handle.net/1721.1/164470</link>
<description>Reimagining Commercial Health Insurance in India: A System-Dynamics Approach to Complex Stakeholder Incentives and Policy Outcomes
Mor, Nachiket; Gupta, Aakriti; Roy, Rahul
Most low- and middle-income governments are unwilling and unable to adequately fund their health systems using tax resources. Despite this route’s popularity in public discourse, it is neither a feasible nor a desirable route for financing Universal Health Coverage (UHC), given competing public finance priorities and limited citizen demand, among other challenges. It thus becomes essential to study the underlying mechanisms behind commercial health insurance and offer citizens the best possible product, which ensures that they not only receive a high degree of protection from health and financial risk on a sustained basis but also find reasonable access and support to improve their health outcomes. In this paper, we build a system-dynamics model that simulates the aggregate behavior of the Indian health-insurance industry, with interacting feedbacks between decisions by stakeholders such as the insurer, healthy and chronically ill populations, and the regulator to outcomes like insurance penetration among segments, overall coverage, health status over the long run, a mechanism of market-discovered premium, and financial viability of the private insurer. We then investigate policy choices and scenarios to explore contrast between design choices and ideal or targeted states of this market, such as a market with 100% enrollment, risk selection by insurers, group insurance models, and managed care, and study the impact on our outcomes of interest, i.e., insurance penetration and pricing, the financial sustainability of the insurers, and the population’s health outcomes. The simulations show that even while insurers and the different population segments optimize for their respective near-term objectives, the best outcomes for all come from the managed-care policy option, which has greater insurance penetration, lower premiums, higher profitability for insurers, and better long-term health outcomes. 
All other choices and scenarios yield suboptimal, imbalanced systemic outcomes. We thus recommend managed care as a desirable policy alternative for low-income countries intending to improve UHC by leveraging commercial health insurance.
</description>
<pubDate>Mon, 08 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164470</guid>
<dc:date>2025-12-08T00:00:00Z</dc:date>
</item>
<item>
<title>A Symbiotic Digital Environment Framework for Industry 4.0 and 5.0: Enhancing Lifecycle Circularity</title>
<link>https://hdl.handle.net/1721.1/164469</link>
<description>A Symbiotic Digital Environment Framework for Industry 4.0 and 5.0: Enhancing Lifecycle Circularity
Ponce, Pedro; Maldonado-Romo, Javier; Anthony, Brian W.; Bradley, Russel; Montesinos, Luis
This paper introduces a Symbiotic Digital Environment Framework (SDEF) that integrates Human Digital Twins (HDTs) and Machine Digital Twins (MDTs) to advance lifecycle circularity across all stages of the CADMID model (i.e., Concept, Assessment, Design, Manufacture, In-Service, and Disposal). Unlike existing frameworks that address either digital twins or sustainability in isolation, SDEF establishes a bidirectional adaptive system where human, machine, and environmental digital entities continuously interact to co-optimize performance, resource efficiency, and well-being. The framework’s novelty lies in unifying human-centric adaptability (via HDTs) with circular economy principles to enable real-time symbiosis between industrial processes and their operators. Predictive analytics, immersive simulation, and continuous feedback loops dynamically adjust production parameters based on operator states and environmental conditions, extending asset lifespan while minimizing waste. Two simulation-based scenarios in VR using synthetic data demonstrate the framework’s capacity to integrate circularity metrics (material throughput, energy efficiency, remanufacturability index) with human-machine interaction variables in virtual manufacturing environments. SDEF bridges Industry 4.0’s automation capabilities and Industry 5.0’s human-centric vision, offering a scalable pathway toward sustainable and resilient industrial ecosystems by closing the loop between physical and digital realms.
</description>
<pubDate>Sat, 06 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164469</guid>
<dc:date>2025-12-06T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Test Bed to Investigate Wetting Behaviours of High-Temperature Heavy Liquid Metals for Advanced Nuclear Applications</title>
<link>https://hdl.handle.net/1721.1/164468</link>
<description>Development of a Test Bed to Investigate Wetting Behaviours of High-Temperature Heavy Liquid Metals for Advanced Nuclear Applications
Saraswat, Abhishek; Bhattacharyay, Rajendraprasad; Chaudhuri, Paritosh; Gedupudi, Sateesh
Specifically engineered heavy liquid metals are proposed as candidate coolants and tritium breeders for advanced nuclear applications. Understanding the wetting behaviours of these liquids on relevant substrate configurations is crucial to tackling the challenges associated with corrosion protection and flow-diagnostics development. However, detailed investigations are scarce in the literature. In this experimental study, an apparatus is designed to measure contact angles of different liquid metals over a mirror-polished horizontal SS-304 substrate. This paper presents design aspects of the developed test facility, as well as initial results obtained using direct imaging and the Low-Bond Axisymmetric Drop Shape Analysis algorithm-based image-processing technique. Methodological validation is achieved with surrogate liquids/liquid metals (H2O, Hg, Ga, GaInSn) prior to taking measurements from molten lead (Pb) droplets at 425 °C. Contact angles estimated using the two techniques agree to within ±10%. Finally, the paper lays out plans for future upgrades to study the wetting behaviours of molten Pb/Pb alloys on substrates with relevant surface properties, including bare P-91 and reduced-activation ferritic–martensitic steels, along with Al2O3/Er2O3-coated versions of these materials, to generate a database for Gen-IV fission reactors and fusion power plants.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164468</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Future Circular Collider Feasibility Study Report</title>
<link>https://hdl.handle.net/1721.1/164467</link>
<description>Future Circular Collider Feasibility Study Report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, André; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
Volume 1 of the FCC Feasibility Report presents an overview of the physics case, experimental programme, and detector concepts for the Future Circular Collider (FCC). This volume outlines how FCC would address some of the most profound open questions in particle physics, from precision studies of the Higgs and EW bosons and of the top quark, to the exploration of physics beyond the Standard Model. The report reviews the experimental opportunities offered by the staged implementation of FCC, beginning with an electron-positron collider (FCC-ee), operating at several centre-of-mass energies, followed by a hadron collider (FCC-hh). Benchmark examples are given of the expected physics performance, in terms of precision and sensitivity to new phenomena, of each collider stage. Detector requirements and conceptual designs for FCC-ee experiments are discussed, as are the specific demands that the physics programme imposes on the accelerator in the domains of the calibration of the collision energy, and the interface region between the accelerator and the detector. The report also highlights advances in detector, software and computing technologies, as well as the theoretical tools/reconstruction techniques that will enable the precision measurements and discovery potential of the FCC experimental programme. The content and structure of this report are guided by the scope and priorities defined in the mandate of the FCC Feasibility Study. It is therefore not intended to serve as an exhaustive review of the full physics potential of FCC. Several topics, already covered in earlier reports such as the FCC CDR, are not reiterated here or are addressed only briefly, in alignment with the study’s focus. 
This volume reflects the outcome of a global collaborative effort involving hundreds of scientists and institutions, aided by a dedicated community-building coordination, and provides a targeted assessment of the scientific opportunities and experimental foundations of the FCC programme.
</description>
<pubDate>Wed, 24 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164467</guid>
<dc:date>2025-12-24T00:00:00Z</dc:date>
</item>
<item>
<title>Unregulated Vertical Urban Growth Alters Microclimate: Coupling Building-Scale Digital Surface Models with High-Resolution Microclimate Simulations</title>
<link>https://hdl.handle.net/1721.1/164466</link>
<description>Unregulated Vertical Urban Growth Alters Microclimate: Coupling Building-Scale Digital Surface Models with High-Resolution Microclimate Simulations
Falcão, Jonatas Goulart Marinho; Furtado, Luiz Felipe de Almeida; Barbosa, Gisele Silva; Teixeira Coelho, Luiz Carlos
Rio de Janeiro’s favelas house over 20% of the city’s population in just 5% of its territory, with Rio das Pedras emerging as a critical case study: ranking as Brazil’s fifth most populous favela and its most vertically intensified. This study quantifies how uncontrolled vertical growth in informal settlements disrupts microclimate dynamics, directly impacting thermal comfort. Using high-resolution geospatial analytics, we integrated digital surface models (DSMs) derived from LiDAR and photogrammetric data (2013, 2019, and 2024) with microclimatic simulations to assess urban morphology changes and their thermal effects. A spatiotemporal cadastral analysis tracked vertical expansion (new floors) and demolition patterns, while ENVI-met simulations mapped air temperature anomalies across decadal scenarios. Results reveal two key findings: (1) rapid, unregulated construction has significantly altered local airflow and surface energy balance, exacerbating the urban heat island (UHI) effect; (2) microclimatic simulations consistently recorded elevated temperatures, with the most pronounced impacts in densely built zones. These findings underscore the need for public policies to mitigate such negative effects observed in informal settlement areas.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164466</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Striking a Pose: DIY Computer Vision Sensor Kit to Measure Public Life Using Pose Estimation Enhanced Action Recognition Model</title>
<link>https://hdl.handle.net/1721.1/164463</link>
<description>Striking a Pose: DIY Computer Vision Sensor Kit to Measure Public Life Using Pose Estimation Enhanced Action Recognition Model
Williams, Sarah; Kang, Minwook
Observing and measuring public life is essential for designing inclusive, vibrant, and climate-resilient public spaces. While urban planners have traditionally relied on manual observation, recent advances in open-source Computer Vision (CV) now enable automated analysis. However, most CV sensors in urban studies focus on transportation analysis, offering limited insight into nuanced human behaviors such as sitting or socializing. This limitation stems in part from the challenges CV algorithms face in detecting subtle activities within public spaces. This study introduces the Public Life Sensor Kit (PLSK), an open-source, do-it-yourself system that integrates a GoPro camera with an NVIDIA Jetson edge device, and evaluates whether pose estimation-enhanced CV models can improve the detection of fine-grained public life behaviors, such as sitting and social interaction. The PLSK was deployed during a public space intervention project in Sydney, Australia. The resulting data were compared against data collected from the Vivacity sensor, a commercial transportation-focused CV system, and against traditional human observation. The results show that the PLSK outperforms the commercial sensor in detecting and classifying key public life activities, including pedestrian traffic, sitting, and socializing. These findings highlight the potential of the PLSK to support ethically collected, behavior-rich public space analysis and advocate for its adoption in next-generation urban sensing technologies.
</description>
<pubDate>Sat, 01 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164463</guid>
<dc:date>2025-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pretreatment of Mice with 830 nm Light Enhances Endurance During Acute Exercise</title>
<link>https://hdl.handle.net/1721.1/164462</link>
<description>Pretreatment of Mice with 830 nm Light Enhances Endurance During Acute Exercise
Cheema, Nashwa; Ghag, Namrata; Pham, Linh; Wise, Emma; Fuchs, Christiane; Anderson, Rox; Tam, Joshua
Light therapy has been shown to produce several beneficial physiological effects in a wide range of tissues. The musculoskeletal system can be irradiated with deeply penetrating wavelengths in the near-infrared (NIR) region. Photobiomodulation therapy (PBMT) reduces pain and inflammation and enhances physical performance. However, the mechanism(s) of cellular responses to PBMT in muscle is not clearly understood. Therefore, the goal of this study is to improve our understanding of the mechanism(s) of action of PBMT in exercised and sedentary muscle. In sedentary mice, PBMT using a wavelength of 830 nm increased the expression of genes for muscle tissue development, including cFos, which is critical for activating the interstitial and satellite cells that repair muscle. Immunostaining for cFOS expression confirmed an increase in the number of activated cells in PBMT-treated muscle. We observed that PBMT-treated mice showed increased performance on the treadmill, reduced muscle fiber damage, and altered mitochondrial structure. RNA sequencing of fatigued tibialis anterior (TA) tissue indicated that PBMT treatment increased the expression of tissue-regeneration and remodeling genes, suggesting tissue adaptation and muscle repair after exercise with PBMT. In conclusion, our study suggests that the 830 nm wavelength may have altered the muscle by activating regenerative genes that protect the tissue from exercise-induced cellular stress.
</description>
<pubDate>Thu, 23 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164462</guid>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Physiologic Assessment into Virtual Reality-Based Pediatric Pain Intervention: A Feasibility Study</title>
<link>https://hdl.handle.net/1721.1/164461</link>
<description>Integrating Physiologic Assessment into Virtual Reality-Based Pediatric Pain Intervention: A Feasibility Study
Marwah, Harsheen; Moldovanu, Stefania R.; Reks, Talis; Anthony, Brian; Logan, Deirdre E.
This feasibility study explored the integration of physiological monitoring into a virtual reality (VR) intervention for pediatric pain management. The goal of this study is to identify a feasible strategy for collecting physiologic data in the context of a VR intervention currently being developed for youth with chronic pain. We assess the potential of Cognitive Load (CL)—derived from heart rate and pupillometry/eye-tracking data—as a marker of arousal and user engagement in a VR simulation to promote school functioning in youth with chronic pain. The HP Reverb G2 Omnicept headset and Polar H10 heart-rate sensor were utilized. The Child Presence Questionnaire (CPQ) assessed participants’ self-reported immersion and engagement. Data collection focused on the feasibility and utility of physiologic data in assessing arousal and on correlations with self-reported experience. Nine participants engaged in the simulation, with eight yielding complete data. The simulation and headset were well tolerated. The CPQ Transportation subscale showed a trend-level correlation with mean CL. Due to the small sample and the feasibility focus, results were examined at the individual level. Combining multiple physiologic markers into a construct like CL is intriguing, but data interpretability was limited. Pupillometry and related metrics show promise as feasible markers of engagement and arousal for VR-based interventions but require appropriate expertise to interpret fully. The study found that integration of physiologic monitoring is feasible, but further work is needed to standardize metrics and identify the most useful and user-friendly markers.
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164461</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Two Decades of CARICOMP Mangrove Monitoring (1992–2013) Reveal Variability in Tree Structure and Productivity of Rhizophora mangle Across the Wider Caribbean</title>
<link>https://hdl.handle.net/1721.1/164460</link>
<description>Two Decades of CARICOMP Mangrove Monitoring (1992–2013) Reveal Variability in Tree Structure and Productivity of Rhizophora mangle Across the Wider Caribbean
Kjerfve, Björn; Oxenford, Hazel A.; Collin, Rachel; Pestana, Inácio Abreu; Samper-Villarreal, Jimena; Medina-Gómez, Israel; Cortés, Jorge; Smith, Struan R.; Koltes, Karen; Feller, Ilka C.; Bastidas, Carolina; Juman, Rahanna; Geraldes, Francisco X.; Filippo, Alessandro; Varela, Ramon; McCoy, Croy; Garzón-Ferreira, Jaime; Polanía, Jaime; Capelo, Juan C.; Ogden, John
The Caribbean Coastal Marine Productivity (CARICOMP) program was conceptualized in 1985 to monitor coral reefs, seagrass beds, and mangrove forests at multiple sites across the wider Caribbean. Mangrove monitoring was focused on the dominant Caribbean species, red mangrove (Rhizophora mangle). Forest structure and productivity were monitored at 21 sites (18 countries) across different geomorphological settings, from tropical to subtropical mainland and island systems. Here, we provide the key findings from the CARICOMP mangrove data collected, mostly from 1992 to 2013, to assess spatial and temporal variability across the region. Red mangrove above-ground biomass averaged 190 t ha−1 (far higher than previously reported) but ranged widely across sites from 33 to 590 t ha−1, equating to an average above-ground ‘blue carbon’ of 84 t ha−1 (range 15–260 t ha−1). Tree density averaged 3237 trees ha−1, tree basal area averaged 19.7 m2 ha−1, tree height averaged 6.1 ± 2.8 m, and seedling density varied from 1.2 to 74 seedlings m−2 across the sites. Among the environmental factors that influence mangroves, local temperature and rainfall explained 48% of the variability in measured tree structure parameters. Annual litterfall, as a proxy for productivity, averaged 1.24 ± 0.70 kg m−2 yr−1, with 60% of the total litterfall composed of leaves. Litterfall varied seasonally by 42%. No relationship was apparent between litterfall and seasonal ocean–atmosphere climate indices (ONI and AMM). With the exception of the three most southwesterly CARICOMP sites, hurricanes and tropical storms impacted the mangrove sites repeatedly, resulting in considerable damage. 
A direct strike by a category-4 hurricane in 1998 in the Dominican Republic killed 67% of the red mangrove trees, lowered above-ground biomass by 91%, basal area by 89%, and litterfall by 63%, and resulted in the subsequent growth of many tall and thin saplings, totally changing the structure of the forest ecosystem in the first few years after the hurricane. In comparing mangrove systems, major differences may be explained by the time elapsed since the last destructive event (hurricane) affecting each site. This highlights the fact that despite an increasing focus on preserving these valuable ecosystems, they are still highly vulnerable to natural hazards and likely to face a poor outcome under ongoing climate change.
</description>
<pubDate>Mon, 01 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164460</guid>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Thallium(I) Uptake and Accumulation by Wheat and Rice Plants</title>
<link>https://hdl.handle.net/1721.1/164459</link>
<description>Thallium(I) Uptake and Accumulation by Wheat and Rice Plants
Yang, Puu-Tai; Chang, Hsin-Fang; Huang, Liang-Sin; Chuang, Tsung-Ju; Wang, Shan-Li
Thallium (Tl) is a highly toxic trace metal of increasing concern in agricultural soils. This study investigated the uptake, accumulation, and tissue-level distribution of Tl(I) in rice (Oryza sativa L.) and wheat (Triticum aestivum L.) grown in three agricultural soils differing in soil pH and texture. In the seedling pot experiment (0–100 mg kg−1 soil Tl), plant Tl concentrations increased dose-dependently, and were at least an order of magnitude lower in the alkaline soil than in the acidic soils. Bioaccumulation factors of roots and shoots generally exceeded unity and declined with increasing Tl dose in acidic soils, consistent with uptake saturation and physiological stress at high exposure. To elucidate how soil Tl speciation and pH regulate Tl availability, X-ray absorption spectroscopy (XAS) was used; it showed that Tl(I) sorbed on illite was the predominant species in all soils (89–95%), with a minor fraction (5–11%) associated with non-specific adsorption. In maturity pots (5 mg kg−1 soil Tl), both crops grown in the moderately acidic, coarse-textured soil translocated a small fraction of absorbed Tl to grains, with wheat and rice containing 0.24 and 0.10 mg kg−1 Tl, respectively. Comparatively, plants in the more acidic soil failed to reach maturity, and grain Tl was not detected in the alkaline soil. LA-ICP-MS mapping revealed Tl enrichment in the bran and embryo of rice and in the crease, bran, and embryo of wheat, indicating that unpolished grains may pose higher dietary exposure risks than polished products. Overall, these findings demonstrate the key roles of soil pH and mineral composition in governing soil Tl availability and plant Tl uptake, whereas plant transport processes regulate grain Tl loading. In the absence of food-safety standards for Tl, the results of this study underscore the need to better understand and mitigate Tl transfer from contaminated soils into human food chains via cereal crops.
</description>
<pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164459</guid>
<dc:date>2025-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Proximity Loses: Real-Time Resolution of Ambiguous Wh-Questions in Japanese</title>
<link>https://hdl.handle.net/1721.1/164441</link>
<description>Proximity Loses: Real-Time Resolution of Ambiguous Wh-Questions in Japanese
Nakamura, Chie; Flynn, Suzanne; Miyamoto, Yoichi; Yusa, Noriaki
This study investigated how Japanese speakers interpret structurally ambiguous wh-questions, testing whether filler–gap resolution is guided by syntactic resolution based on hierarchical structure or linear locality based on surface word order. We combined behavioral key-press responses with fine-grained eye-tracking data and applied cluster-based permutation analysis to capture the moment-by-moment time course of syntactic interpretation as sentences were processed in real time. Key-press responses revealed a preference for resolving the dependency at the main clause (MC) gap position. Eye-tracking data showed early predictive fixations to the MC picture, followed by shifts to the embedded clause (EC) picture as the embedded event was described. These shifts occurred prior to the appearance of syntactic cues that signal the presence of an EC structure, such as the complementizer -to, and were therefore most likely guided by referential alignment with the linguistic input rather than by syntactic reanalysis. A subsequent return of the gaze to the MC picture occurred when the clause-final question particle -ka became available, confirming the interrogative use of the wh-phrase. Both key-press and eye-tracking data showed that participants did not commit to the first grammatically available EC interpretation but instead waited until clause-final particle information confirmed the interrogative use of the wh-phrase, ultimately favoring the MC interpretation. This pattern supports the view that filler–gap resolution is guided by structural locality rather than linear locality. By using high-resolution temporal data and statistically robust analytic techniques, this study demonstrates that Japanese comprehenders engage in predictive yet structurally cautious parsing. 
These findings challenge earlier claims that filler–gap resolution in Japanese is primarily driven by linear locality and instead showed a preference for resolving dependencies at the structurally higher MC position, consistent with parsing biases previously observed in English, despite typological differences in word order between the two languages. This preference also reflects sensitivity to language-specific morpho-syntactic cues in Japanese, such as clause-final particles.
</description>
<pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164441</guid>
<dc:date>2025-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering peroxisomal biosynthetic pathways for maximization of triterpene production in Yarrowia lipolytica</title>
<link>https://hdl.handle.net/1721.1/164440</link>
<description>Engineering peroxisomal biosynthetic pathways for maximization of triterpene production in Yarrowia lipolytica
Ma, Yongshuo; Shang, Yi; Stephanopoulos, Gregory
Constructing efficient cell factories for product synthesis is frequently hampered by competing pathways and/or insufficient precursor supply. This is particularly evident in the case of triterpenoid biosynthesis in Yarrowia lipolytica, where squalene biosynthesis is tightly coupled to cytosolic biosynthesis of sterols essential for cell viability. Here, we addressed this problem by reconstructing the complete squalene biosynthetic pathway, starting from acetyl-CoA, in the peroxisome, thus harnessing the peroxisomal acetyl-CoA pool and sequestering squalene synthesis in this organelle away from competing cytosolic reactions. This strategy increased squalene levels 1,300-fold relative to native cytosolic synthesis. Subsequent enhancement of the peroxisomal acetyl-CoA supply by two independent approaches, 1) converting the cellular lipid pool to peroxisomal acetyl-CoA and 2) establishing an orthogonal acetyl-CoA shortcut from CO2-derived acetate in the peroxisome, further improved local squalene accumulation significantly. Using these approaches, we constructed squalene-producing strains capable of yielding 32.8 g/L from glucose, and 31.6 g/L from acetate by employing a cofeeding strategy, in bioreactor fermentations. Our findings provide a feasible strategy for protecting intermediate metabolites that can be claimed by multiple reactions by engineering peroxisomes in Y. lipolytica as microfactories for the production of such intermediates, in particular acetyl-CoA-derived metabolites.
</description>
<pubDate>Tue, 23 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164440</guid>
<dc:date>2024-01-23T00:00:00Z</dc:date>
</item>
<item>
<title>Drug screening in human physiologic medium identifies uric acid as an inhibitor of rigosertib efficacy</title>
<link>https://hdl.handle.net/1721.1/164439</link>
<description>Drug screening in human physiologic medium identifies uric acid as an inhibitor of rigosertib efficacy
Rawat, Vipin; DeLear, Patrick; Prashanth, Prarthana; Ozgurses, Mete Emir; Tebeje, Anteneh; Burns, Philippa A; Conger, Kelly O; Solís, Christopher; Hasnain, Yasir; Novikova, Anna; Endress, Jennifer E; González-Sánchez, Paloma; Dong, Wentao; Stephanopoulos, Greg; DeNicola, Gina M; Harris, Isaac S; Sept, David; Mason, Frank M; Coloff, Jonathan L
The nonphysiological nutrient levels found in traditional culture media have been shown to affect numerous aspects of cancer cell physiology, including how cells respond to certain therapeutic agents. Here, we comprehensively evaluated how physiological nutrient levels affect therapeutic response by performing drug screening in human plasma-like medium. We observed dramatic nutrient-dependent changes in sensitivity to a variety of FDA-approved and clinically trialed compounds, including rigosertib, an experimental cancer therapeutic that recently failed in phase III clinical trials. Mechanistically, we found that the ability of rigosertib to destabilize microtubules is strongly inhibited by the purine metabolism end product uric acid, which is uniquely abundant in humans relative to traditional in vitro and in vivo cancer models. These results demonstrate the broad and dramatic effects nutrient levels can have on drug response and how incorporation of human-specific physiological nutrient medium might help identify compounds whose efficacy could be influenced in humans.
</description>
<pubDate>Thu, 30 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164439</guid>
<dc:date>2024-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Metabolic Engineering of E. coli for Enhanced Diols Production from Acetate</title>
<link>https://hdl.handle.net/1721.1/164438</link>
<description>Metabolic Engineering of E. coli for Enhanced Diols Production from Acetate
Ricci, Luca; Cen, Xuecong; Zu, Yuexuan; Antonicelli, Giacomo; Chen, Zhen; Fino, Debora; Pirri, Fabrizio C; Stephanopoulos, Gregory; Woolston, Benjamin M; Re, Angela
Effective use of renewable carbon sources is in high demand for the development of sustainable biobased manufacturing. Here, we developed Escherichia coli strains to produce 2,3-butanediol and acetoin (collectively referred to as diols) using acetate as the sole carbon source by stepwise metabolic engineering. When tested in fed-batch experiments, the strain overexpressing the entire acetate utilization pathway consumed acetate at a 15% faster rate (0.78 ± 0.05 g/g/h) and produced a 35% higher diol titer (1.16 ± 0.01 g/L) than the baseline diol-producing strain. Moreover, singly overexpressing the genes encoding alternative acetate uptake pathways as well as alternative isoforms of genes in the malate-to-pyruvate pathway revealed that leveraging ackA-pta and maeA is more effective in enhancing acetate consumption and diol production than acs and maeB. Finally, the increased substrate consumption rate and diol production obtained in flask-based experiments were confirmed in bench-scale bioreactors operated in fed-batch mode. The highest titer achieved in this configuration, 1.56 g/L, is over 30% higher than that of the only other comparable effort reported to date.
</description>
<pubDate>Fri, 18 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164438</guid>
<dc:date>2025-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Constant Degree Networks for Almost-Everywhere Reliable Transmission</title>
<link>https://hdl.handle.net/1721.1/164437</link>
<description>Constant Degree Networks for Almost-Everywhere Reliable Transmission
Bafna, Mitali; Minzer, Dor
In the almost-everywhere reliable message transmission problem, introduced by [Dwork, Pippenger, Peleg, Upfal’86], the goal is to design a sparse communication network G that supports efficient, fault-tolerant protocols for interactions between all node pairs. By fault-tolerant, we mean that even if an adversary corrupts a small fraction of vertices in G, then all but a small fraction of vertices can still communicate perfectly via the constructed protocols. Success here allows one to simulate, on a sparse graph, any fault-tolerant distributed computing task and secure multi-party computation protocols built for a complete network, with only minimal overhead in efficiency. Previous works on this problem achieved either constant-degree networks tolerating o(1) faults, constant-degree networks tolerating a constant fraction of faults via inefficient protocols (exponential work complexity), or poly-logarithmic degree networks tolerating a constant fraction of faults. We show a construction of constant-degree networks with efficient protocols (i.e., with polylogarithmic work complexity) that can tolerate a constant fraction of adversarial faults, thus solving the main open problem of Dwork et al. Our main contribution is a composition technique for communication networks, based on graph products. Our technique combines two networks tolerant to adversarial edge faults to construct a network with a smaller degree while maintaining efficiency and fault-tolerance. We apply this composition result multiple times, first composing the polylogarithmic-degree edge-fault-tolerant networks constructed in a recent work of [Bafna, Minzer, Vyas’24] (which are based on high-dimensional expanders) with themselves, and then with the constant-degree networks (albeit with inefficient protocols) of [Upfal’92].
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164437</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Quasi-Linear Size PCPs with Small Soundness from HDX</title>
<link>https://hdl.handle.net/1721.1/164436</link>
<description>Quasi-Linear Size PCPs with Small Soundness from HDX
Bafna, Mitali; Minzer, Dor; Vyas, Nikhil; Yun, Zhiwei
We construct 2-query, quasi-linear size probabilistically checkable proofs (PCPs) with arbitrarily small constant soundness, improving upon Dinur’s 2-query quasi-linear size PCPs with soundness 1 − Ω(1). As an immediate corollary, we get that under the exponential time hypothesis, for all ε &gt; 0 no approximation algorithm for 3-SAT can obtain an approximation ratio of 7/8 + ε in time 2^{n/log^C n}, where C is a constant depending on ε. Our result builds on a recent line of independent works by Bafna, Lifshitz and Minzer, and Dikstein, Dinur and Lubotzky, that showed the existence of linear size direct product testers with small soundness.
The main new ingredient in our proof is a technique that embeds a given 2-CSP into a 2-CSP on a prescribed graph, provided that the latter is a graph underlying a sufficiently good high-dimensional expander (HDX). We achieve this by establishing a novel connection between PCPs and fault-tolerant distributed computing, more precisely, to the almost-everywhere reliable transmission problem introduced by Dwork, Peleg, Pippenger and Upfal (1986). We instantiate this connection by showing that graphs underlying HDXs admit routing protocols that are tolerant to adversarial edge corruptions, also improving upon the state-of-the-art constructions of sparse edge-fault-tolerant networks in the process.
Our PCP construction requires variants of the aforementioned direct product testers with poly-logarithmic degree. The existence and constructability of these variants is shown in the full version.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164436</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Approximately Counting and Sampling Hamiltonian Motifs in Sublinear Time</title>
<link>https://hdl.handle.net/1721.1/164435</link>
<description>Approximately Counting and Sampling Hamiltonian Motifs in Sublinear Time
Eden, Talya; Levi, Reut; Ron, Dana; Rubinfeld, Ronitt
Counting small subgraphs, referred to as motifs, in large graphs is a fundamental task in graph analysis, extensively studied across various contexts and computational models. In the sublinear-time regime, the relaxed problem of approximate counting has been explored within two prominent query frameworks: the standard model, which permits degree, neighbor, and pair queries, and the strictly more powerful augmented model, which additionally allows for uniform edge sampling. Currently, in the standard model, (optimal) results have been established only for approximately counting edges, stars, and cliques, all of which have a radius of one. This contrasts sharply with the state of affairs in the augmented model, where algorithmic results (some of which are optimal) are known for any input motif, leading to a disparity which we term the “scope gap” between the two models.
In this work, we make significant progress in bridging this gap. Our approach draws inspiration from recent advancements in the augmented model and utilizes a framework centered on counting by uniform sampling, allowing us to establish new results in the standard model and to simplify previous results.
In particular, our first, and main, contribution is a new algorithm in the standard model for approximately counting any Hamiltonian motif in sublinear time, where the complexity of the algorithm is the sum of two terms. One term equals the complexity of the known algorithms by Assadi, Kapralov, and Khanna (ITCS 2019) and Fichtenberger and Peng (ICALP 2020) in the (strictly stronger) augmented model, and the other is an additional, necessary, additive overhead.
Our second contribution is a variant of our algorithm that enables nearly uniform sampling of these motifs, a capability previously limited in the standard model to edges and cliques. Our third contribution is a set of even simpler algorithms for stars and cliques that exploit their radius-one property. As a result, we simplify all previously known algorithms in the standard model for stars (Gonen, Ron, Shavitt (SODA 2010)), triangles (Eden, Levi, Ron, Seshadhri (FOCS 2015)), and cliques (Eden, Ron, Seshadhri (STOC 2018)).
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164435</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Sandwiching Random Geometric Graphs and Erdos-Renyi with Applications: Sharp Thresholds, Robust Testing, and Enumeration</title>
<link>https://hdl.handle.net/1721.1/164434</link>
<description>Sandwiching Random Geometric Graphs and Erdos-Renyi with Applications: Sharp Thresholds, Robust Testing, and Enumeration
Bangachev, Kiril; Bresler, Guy
The distribution RGG(n, S^{d−1}, p) is formed by sampling independent vectors {V_i}_{i=1}^n uniformly on S^{d−1} and placing an edge between pairs of vertices i and j for which ⟨V_i, V_j⟩ ≥ τ_p^d, where τ_p^d is such that the expected density is p. Our main result is a poly-time implementable coupling between Erdős-Rényi and RGG such that G(n, p(1 − O(√(np/d)))) ⊆ RGG(n, S^{d−1}, p) ⊆ G(n, p(1 + O(√(np/d)))) edgewise with high probability when d ≫ np. We apply the result to: 1) Sharp Thresholds: We show that for any monotone property having a sharp threshold with respect to the Erdős-Rényi distribution and critical probability p_n^c, random geometric graphs also exhibit a sharp threshold when d ≫ np_n^c, thus partially answering a question of Perkins. 2) Robust Testing: The coupling shows that testing between G(n, p) and RGG(n, S^{d−1}, p) with εn^2 p adversarially corrupted edges for any constant ε &gt; 0 is information-theoretically impossible when d ≫ np. We match this lower bound with an efficient (constant degree SoS) spectral refutation algorithm when d ≪ np. 3) Enumeration: We show that the number of geometric graphs in dimension d is at least exp(dn log^{−7} n), recovering (up to the log factors) the sharp result of Sauermann.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164434</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Sample-Optimal Private Regression in Polynomial Time</title>
<link>https://hdl.handle.net/1721.1/164433</link>
<description>Sample-Optimal Private Regression in Polynomial Time
Anderson, Prashanti; Bakshi, Ainesh; Majid, Mahbod; Tiegel, Stefan
We consider the task of privately obtaining prediction error guarantees in ordinary least-squares regression problems with Gaussian covariates (with unknown covariance structure). We provide the first sample-optimal polynomial time algorithm for this task under both pure and approximate differential privacy. We show that any improvement to the sample complexity of our algorithm would violate either statistical-query or information-theoretic lower bounds. Additionally, our algorithm is robust to a small fraction of arbitrary outliers and achieves optimal error rates as a function of the fraction of outliers. In contrast, all prior efficient algorithms either incurred sample complexities with sub-optimal dimension dependence, scaling with the condition number of the covariates, or obtained a polynomially worse dependence on the privacy parameters.
Our technical contributions are two-fold: first, we leverage resilience guarantees of Gaussians within the sum-of-squares framework. As a consequence, we obtain efficient sum-of-squares algorithms for regression with optimal robustness rates and sample complexity. Second, we generalize the recent robustness-to-privacy framework of Hopkins, Kamath, Majid, and Narayanan to account for the geometry induced by the covariance of the input samples. This framework crucially relies on the robust estimators to be sum-of-squares algorithms, and combining the two steps yields a sample-optimal private regression algorithm. We believe our techniques are of independent interest, and we demonstrate this by obtaining an efficient algorithm for covariance-aware mean estimation, with an optimal dependence on the privacy parameters.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164433</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Faster Weighted and Unweighted Tree Edit Distance and APSP Equivalence</title>
<link>https://hdl.handle.net/1721.1/164432</link>
<description>Faster Weighted and Unweighted Tree Edit Distance and APSP Equivalence
Nogler, Jakob; Polak, Adam; Saha, Barna; Vassilevska Williams, Virginia; Xu, Yinzhan; Ye, Christopher
The tree edit distance (TED) between two rooted ordered trees with n nodes labeled from an alphabet Σ is the minimum cost of transforming one tree into the other by a sequence of valid operations consisting of insertions, deletions and relabeling of nodes. The tree edit distance is a well-known generalization of string edit distance and has been studied since the 1970s. Its running time has seen steady improvements, starting with an O(n^6) algorithm [Tai, J.ACM 1979], improved to O(n^4) [Shasha, Zhang, SICOMP 1989] and to O(n^3 log n) [Klein, ESA 1998], and culminating in an O(n^3) algorithm [Demaine, Mozes, Rossman, Weimann, ACM TALG 2010]. The latter is known to be optimal for any dynamic programming based algorithm that falls under a certain decomposition framework that captures all known sub-n^4 time algorithms. Fine-grained complexity casts further light onto this hardness, showing that a truly subcubic time algorithm for TED implies a truly subcubic time algorithm for All-Pairs Shortest Paths (APSP) [Bringmann, Gawrychowski, Mozes, Weimann, ACM TALG 2020]. Therefore, under the popular APSP hypothesis, a truly subcubic time algorithm for TED cannot exist. However, unlike many problems in fine-grained complexity for which conditional hardness based on APSP also comes with equivalence to APSP, whether TED can be reduced to APSP has remained unknown.
In this paper, we resolve this question. Not only do we show that TED is fine-grained equivalent to APSP; our reduction is tight enough that, combined with the fastest APSP algorithm to date [Williams, SICOMP 2018], it gives the first ever subcubic time algorithm for TED, running in n^3/2^{Ω(√(log n))} time.
We also consider the unweighted tree edit distance problem, in which the cost of each edit (insertion, deletion, and relabeling) is one. For unweighted TED, a truly subcubic algorithm is known due to Mao [Mao, FOCS 2022], later improved slightly by Dürr [Dürr, IPL 2023] to run in O(n^{2.9148}) time. Since their algorithm uses bounded monotone min-plus product as a crucial subroutine, and the best running time for this product is Õ(n^{(3+ω)/2}) ≤ O(n^{2.6857}) (where ω is the exponent of fast matrix multiplication), the much higher running time of unweighted TED remained unsatisfactory. In this work, we close this gap and give an algorithm for unweighted TED that runs in Õ(n^{(3+ω)/2}) time.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164432</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure of Catalytic Space: Capturing Randomness and Time via Compression</title>
<link>https://hdl.handle.net/1721.1/164431</link>
<description>The Structure of Catalytic Space: Capturing Randomness and Time via Compression
Cook, James; Li, Jiatu; Mertz, Ian; Pyne, Edward
In the catalytic logspace (CL) model of Buhrman et al. (STOC 2013), we are given a small work tape and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. This model is of interest from a complexity-theoretic perspective as it gains surprising power over traditional space. However, many fundamental structural questions remain open.
We substantially advance the understanding of the structure of CL, addressing several questions raised in prior work. Our main results are as follows.
1: We unconditionally derandomize catalytic logspace: CBPL = CL.
2: We show time and catalytic space bounds can be achieved separately if and only if they can be achieved simultaneously: any problem in both CL and P can be solved in polynomial time-bounded CL.
3: We characterize deterministic catalytic space by the intersection of randomness and time: CL is equivalent to polytime-bounded, zero-error randomized CL.
Our results center around the compress-or-random framework. For the second result, we introduce a simple yet novel compress-or-compute algorithm which, for any catalytic tape, either compresses the tape or quickly and successfully computes the function at hand. For our first result, we further introduce a compress-or-compress-or-random algorithm that combines runtime compression with a second compress-or-random algorithm, building on recent work on distinguish-to-predict transformations and pseudorandom generators with small-space deterministic reconstruction.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164431</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Rounding Large Independent Sets on Expanders</title>
<link>https://hdl.handle.net/1721.1/164430</link>
<description>Rounding Large Independent Sets on Expanders
Bafna, Mitali; Hsieh, Jun-Ting; Kothari, Pravesh K.
We develop a new approach for approximating large independent sets when the input graph is a one-sided spectral expander - that is, the uniform random walk matrix of the graph has its second eigenvalue bounded away from 1. Consequently, we obtain a polynomial time algorithm to find linear-sized independent sets in one-sided expanders that are almost 3-colorable or are promised to contain an independent set of size (1/2−є)n. Our second result above can be refined to require only a weaker vertex expansion property with an efficient certificate. In a surprising contrast to our algorithmic result, we observe that the analogous task of finding a linear-sized independent set in almost 4-colorable one-sided expanders (even when the second eigenvalue is on(1)) is NP-hard, assuming the Unique Games Conjecture.&#13;
All prior algorithms that beat the worst-case guarantees for this problem rely on bottom eigenspace enumeration techniques (following the classical spectral methods of Alon and Kahale) and require two-sided expansion, meaning a bounded number of negative eigenvalues of magnitude Ω(1). Such techniques naturally extend to almost k-colorable graphs for any constant k, in contrast to analogous guarantees on one-sided expanders, which are Unique Games-hard to achieve for k ≥ 4.&#13;
Our rounding scheme builds on the method of simulating multiple samples from a pseudo-distribution introduced in Bafna et al. for rounding Unique Games instances. The key to our analysis is a new clustering property of large independent sets in expanding graphs, namely that every large independent set has a larger-than-expected intersection with some member of a small list, and its formalization in the low-degree sum-of-squares proof system.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164430</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Universal SNARGs for NP from Proofs of Correctness</title>
<link>https://hdl.handle.net/1721.1/164429</link>
<description>Universal SNARGs for NP from Proofs of Correctness
Jin, Zhengzhong; Kalai, Yael Tauman; Lombardi, Alex; Mathialagan, Surya
We give new constructions of succinct non-interactive arguments (SNARGs) for NP in the settings of both non-adaptive and adaptive soundness.&#13;
Our construction of non-adaptive SNARG is universal assuming the security of a (leveled or unleveled) fully homomorphic encryption (FHE) scheme as well as a batch argument (BARG) scheme. Specifically, for any choice of parameters ℓ and L, we construct a candidate SNARG scheme for any NP language L with the following properties: (i) the proof length is ℓ· poly(λ), (ii) the common reference string crs has length L· poly(λ), and (iii) the setup is transparent (no private randomness).&#13;
We prove that this SNARG has non-adaptive soundness assuming the existence of any SNARG where the proof size is ℓ, the crs size is L, and there is a size L Extended Frege (EF) proof of completeness for the SNARG.&#13;
Moreover, we can relax the underlying SNARG to be any 2-message privately verifiable argument where the first message is of length L and the second message is of length ℓ. This yields new SNARG constructions based on any “EF-friendly” designated-verifier SNARG or witness encryption scheme. We emphasize that our SNARG is universal in the sense that it does not depend on the argument system.&#13;
We show several new implications of this construction that do not reference proof complexity: (1) a non-adaptive SNARG for NP with transparent crs from LWE under the evasive LWE heuristic. This gives a candidate lattice-based SNARG for NP. (2) a non-adaptive SNARG for NP with transparent crs assuming the (non-explicit) existence of any iO and LWE. (3) a non-adaptive SNARG for NP with a short and transparent (i.e., uniform) crs assuming LWE, FHE and the (non-explicit) existence of any hash function that makes Micali’s SNARG construction sound. (4) a non-adaptive SNARG for languages such as QR and DCR assuming only LWE.&#13;
In the setting of adaptive soundness, we show how to convert any designated verifier SNARG into a publicly verifiable SNARG, assuming the underlying designated verifier SNARG has an EF proof of completeness. As a corollary, we construct an adaptive SNARG for UP with a transparent crs assuming subexponential LWE under the evasive LWE heuristic.&#13;
We prove our results by extending the encrypt-hash-and-BARG paradigm of [Jin-Kalai-Lombardi-Vaikuntanathan, STOC ’24].
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164429</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs</title>
<link>https://hdl.handle.net/1721.1/164428</link>
<description>The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs
Gourabathina, Abinitha; Gerych, Walter; Pan, Eileen; Ghassemi, Marzyeh
The integration of large language models (LLMs) into clinical diagnostics necessitates a careful understanding of how clinically irrelevant aspects of user inputs directly influence generated treatment recommendations and, consequently, clinical outcomes for end-users. Building on prior research that examines the impact of demographic attributes on clinical LLM reasoning, this study explores how non-clinically relevant attributes shape clinical decision-making by LLMs. Through the perturbation of patient messages, we evaluate whether LLM behavior remains consistent, accurate, and unbiased when non-clinical information is altered. These perturbations assess the brittleness of clinical LLM reasoning by replicating structural errors that may occur during electronic processing of patient questions and by simulating patient-AI interactions in diverse, vulnerable patient groups. Our findings reveal notable inconsistencies in LLM treatment recommendations and significant degradation of clinical accuracy in ways that reduce care allocation to patients. Additionally, there are significant disparities in treatment recommendations between gender subgroups as well as between model-inferred gender subgroups. We also apply our perturbation framework to a conversational clinical dataset to find that even in conversation, LLM clinical accuracy decreases post-perturbation, and disparities exist in how perturbations impact gender subgroups. By analyzing LLM outputs in response to realistic yet modified clinical contexts, our work deepens understanding of the sensitivity, inaccuracy, and biases inherent in medical LLMs, offering critical insights for the deployment of patient-AI systems.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164428</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>High-Performance Mixed-Precision Matrix Multiplication via Tile-Centric Design on Modern Architectures</title>
<link>https://hdl.handle.net/1721.1/164427</link>
<description>High-Performance Mixed-Precision Matrix Multiplication via Tile-Centric Design on Modern Architectures
Zhang, Qiao; Alomairy, Rabab; Wang, Dali; Gu, Zhuowei; Cao, Qinglei
General Matrix Multiplication (GEMM) is a critical operation underpinning a wide range of applications in high-performance computing (HPC) and artificial intelligence (AI). The emergence of hardware optimized for low-precision arithmetic necessitates a reevaluation of numerical algorithms to leverage mixed-precision computations, achieving improved performance and energy efficiency. This research presents an adaptive mixed-precision GEMM framework that enables support for various precision formats at fine-grained tile and block levels, offering a reliable foundation for trustworthy mixed-precision computations. Furthermore, we leverage the PaRSEC runtime system to effectively balance workloads across diverse architectures. The performance exhibits strong scalability across both homogeneous platforms (Intel CPU-based systems and the ARM CPU-based Fugaku supercomputer) and heterogeneous systems (Nvidia V100, A100, and H100 GPU-based platforms, as well as the AMD GPU-based Frontier supercomputer). This work aims to improve computational efficiency and accuracy by bridging algorithmic innovations with hardware capabilities, fostering transformative advancements across a wide range of applications.
</description>
<pubDate>Sat, 20 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164427</guid>
<dc:date>2025-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>Search for t-channel scalar and vector leptoquark exchange in the high-mass dimuon and dielectron spectra in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/164426</link>
<description>Search for t-channel scalar and vector leptoquark exchange in the high-mass dimuon and dielectron spectra in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
A search for t-channel exchange of leptoquarks (LQs) is performed in dimuon and dielectron spectra using proton-proton collision data collected at √s = 13 TeV with the CMS detector at the CERN LHC. The data correspond to an integrated luminosity of 138 fb⁻¹. Eight scenarios are considered, in which up or down quarks couple to muons or electrons via a scalar or vector LQ exchange, for dilepton invariant masses above 500 GeV. The LQ masses are probed up to 5 TeV, beyond the regime probed by previous pair-production and single-production searches. The differential distributions of dilepton events are fit to templates that model the nonresonant LQ exchange and various standard model background processes. Limits are set on LQ-fermion coupling strengths for scalar and vector LQ masses in the 1–5 TeV range at 95% confidence level, establishing stringent limits on first- and second-generation LQs.
</description>
<pubDate>Tue, 09 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164426</guid>
<dc:date>2025-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Search for charged-lepton flavour violation in top quark interactions with an up-type quark, a muon, and a τ lepton in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/164425</link>
<description>Search for charged-lepton flavour violation in top quark interactions with an up-type quark, a muon, and a τ lepton in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
A search for charged-lepton flavour violation (CLFV) in top quark (t) production and decay is presented. The search uses proton-proton collision data corresponding to 138 fb⁻¹ collected with the CMS experiment at √s = 13 TeV. The signal consists of the production of a single top quark via a CLFV interaction or top quark pair production followed by a CLFV decay. The analysis selects events containing a hadronically decaying τ lepton and a muon of opposite electric charge, as well as at least three jets, one of which is identified as originating from the fragmentation of a bottom quark. Machine learning classification techniques are used to distinguish signal from standard model background events. The results of this search are consistent with the standard model expectations. The upper limits at 95% confidence level on the branching fraction B for CLFV top quark decays to a muon, a τ lepton, and an up or a charm quark are set at B(t → µτu) &lt; (0.04, 0.08, and 0.12) × 10⁻⁶ and B(t → µτc) &lt; (0.81, 1.71, and 2.05) × 10⁻⁶ for scalar, vector, and tensor-like operators, respectively.
</description>
<pubDate>Wed, 10 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164425</guid>
<dc:date>2025-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Anyon delocalization transitions out of a disordered fractional quantum anomalous Hall insulator</title>
<link>https://hdl.handle.net/1721.1/164424</link>
<description>Anyon delocalization transitions out of a disordered fractional quantum anomalous Hall insulator
Shi, Zhengyan Darius; Todadri, Senthil
Motivated by the experimental discovery of the fractional quantum anomalous Hall effect, we develop a theory of doping-induced transitions out of the ν = 2/3 lattice Jain state in the presence of quenched disorder. We show that disorder strongly affects the evolution into the conducting phases described in our previous work. The delocalization of charge 2/3 anyons leads to a chiral superconductor through a direct second-order transition for a smooth random potential with long-wavelength modulations. The longitudinal resistance has a universal peak at the associated quantum critical point. Close to the transition, we show that the superconducting ground state is an “Anomalous Vortex Glass” stabilized in the absence of an external magnetic field. For short-wavelength disorder, this transition generically splits into three distinct ones with intermediate insulating topological phases. If instead the charge 1/3 anyon delocalizes, then at low doping the resulting phase is a Reentrant Integer Quantum Hall state with σxy = h/e². At higher doping this undergoes a second transition to a Fermi liquid metal. We show that this framework provides a plausible explanation for the complex phase diagram recently observed in twisted MoTe2 near ν = 2/3 and discuss future experiments that can test our theory in more detail.
</description>
<pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164424</guid>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>From Data to Transformative Change: Designing Interactive Systems for Citizen Science Empowerment</title>
<link>https://hdl.handle.net/1721.1/164423</link>
<description>From Data to Transformative Change: Designing Interactive Systems for Citizen Science Empowerment
Prandi, Catia; Herodotou, Christothea; Dionisio, Mara; Reeves, Neal; Reitsma, Lizette; Mora, Simone
Citizen Science (CS) is a research approach in which scientists and everyday people collaborate to address a research problem. Advancements in digital technologies have significantly expanded the reach of Citizen Science by enabling large-scale data collection and collaboration. In addition to its scientific benefits, citizen science enhances participants’ science literacy, fosters public engagement, and promotes collaborative problem-solving. Nevertheless, we believe that the full potential of CS as a collaborative practice for transformative change has not yet been explored. With this in mind, we planned a one-day workshop as a forum for critical discussions and reflections on the role of HCI researchers, designers, and practitioners in designing CS-empowered interactive systems for increasing awareness about social good and societal issues and promoting concrete actions and behavioural change, from data to sustainable futures. Participants will have the opportunity to reflect on and discuss the main open challenges still affecting the design of CS-empowered interactive systems, and to prototype, using data physicalization and co-design, solutions that address a specific real-world challenge presented by experts from Madeira Island, which offers a unique ecosystem to spark reflections on the interplay between sustainability, technology, and CS.
DIS ’25 Companion, Funchal, Portugal
</description>
<pubDate>Sat, 05 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164423</guid>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance</title>
<link>https://hdl.handle.net/1721.1/164422</link>
<description>Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance
Adam, Hammaad; Bermea, Rene; Yang, Ming Ying; Celi, Leo Anthony; Ghassemi, Marzyeh
There are known racial disparities in the organ transplant allocation system in the United States. While recent research has focused on designing scores and matching algorithms for organ allocation, prior work has yet to study how transplant center physician decisions on offer acceptance—the final step in the allocation process—contribute to the observed disparities. In this paper, we use data from the Scientific Registry of Transplant Recipients to examine the role of candidate race in the acceptance of heart, liver, and lung transplant offers. We find that Black race was associated with significantly lower odds of offer acceptance for livers and lungs. Further, existing allocation scores such as MELD and LAS did not account for clinical factors that made Black patients harder to match. Our analysis also revealed that donor candidate race-match was associated with significantly higher odds of offer acceptance for hearts, livers, and lungs. Finally, we found that rejecting an offer was associated with lower survival times for all three organs. Our findings demonstrate the additional barriers that Black patients face in accessing organ transplants and the consequences of these barriers on patient survival. Overall, our work highlights the limitations of technical solutions to socio-technical problems; new allocation scores and other algorithmic updates will not improve equity if they do not explicitly account for gaps in the ensuing human decisions.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164422</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Coboundary Expansion of Coset Complexes</title>
<link>https://hdl.handle.net/1721.1/164421</link>
<description>Coboundary Expansion of Coset Complexes
Kaufman, Tali; Oppenheim, Izhar; Weinberger, Shmuel
Coboundary expansion is a high dimensional generalization of the Cheeger constant to simplicial complexes. Originally, this notion was motivated by the fact that it implies topological expansion, but nowadays a significant part of the motivation stems from its deep connection to problems in theoretical computer science such as list agreement expansion and agreement expansion in the low soundness regime. In this paper, we prove coboundary expansion with non-Abelian coefficients for the coset complex construction of Kaufman and Oppenheim. Our proof uses a novel global argument, as opposed to the local-to-global arguments that are used to prove cosystolic expansion.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164421</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT</title>
<link>https://hdl.handle.net/1721.1/164420</link>
<description>Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT
Schroeder, Hope; Pareek, Akshansh; Barocas, Solon
Positionality statements have become more common in engineering fields in recent years, despite ongoing debates across many fields about the merits of the practice. In 2024, the Program Chairs of FAccT recommended that authors include positionality statements with their paper submissions, dramatically increasing their use at the conference. In this paper, we analyze all positionality statements at FAccT from 2018 to 2024, highlighting the different aspects of identity commonly disclosed by authors and the degree to which authors explore the potential impact of these aspects of their positionality on their research. While we encountered and highlight a number of thoughtful positionality statements, we also identified and describe several concerning trends, including patterns of identity disclosure without discussion of corresponding impacts, a notable lack of reflection on the potential impacts of industry affiliation, and cases where identity is invoked to excuse what are really methodological choices, among others. We raise particular concerns about the possibility that disclosure without engagement may cause readers to rely on stereotypes to make guesses about the perspectives that individuals from certain groups bring to their work. We conclude by considering potential mechanisms for encouraging reflexivity in the FAccT community, with a focus on setting policies that protect researchers from risks, supporting researchers from backgrounds without existing traditions of reflexive practice, and empirically evaluating the efficacy of interventions designed to foster reflexivity.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164420</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms</title>
<link>https://hdl.handle.net/1721.1/164419</link>
<description>LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms
Sahebdel, Mahsa; Zeynali, Ali; Bashir, Noman; Shenoy, Prashant; Hajiesmaili, Mohammad
Ridesharing platforms such as Uber, Lyft, and DiDi have grown in popularity due to their on-demand availability, ease of use, and commute cost reductions, among other benefits. However, not all ridesharing promises have panned out. Recent studies demonstrate that the expected drop in traffic congestion and reduction in greenhouse gas (GHG) emissions have not materialized. This is primarily due to the substantial distances traveled by the ridesharing vehicles without passengers between rides, known as deadhead miles. Recent work has focused on reducing the impact of deadhead miles while considering additional metrics such as rider waiting time, GHG emissions from deadhead miles, or driver earnings. However, most prior studies consider these environmental and equity-based metrics individually despite them being interrelated. In this paper, we propose a Learning-based Equity-Aware Decarbonization approach, LEAD, for ridesharing platforms. LEAD targets minimizing emissions while ensuring that the driver’s utility, defined as the difference between the trip distance and the deadhead miles, is fairly distributed. LEAD uses reinforcement learning to match riders with drivers based on the expected future utility of drivers and the expected carbon emissions of the platform without increasing the rider waiting times. Extensive experiments based on a real-world ridesharing dataset show that LEAD improves the defined notion of fairness by 150% when compared to emission-aware ride-assignment and reduces emissions by 14.6% while ensuring fairness within 28–52% of the fairness-focused baseline. It also reduces the rider wait time by at least 32.1% compared to a fairness-focused baseline.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164419</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>LuciEntry: Towards Understanding the Design of Lucid Dream Induction</title>
<link>https://hdl.handle.net/1721.1/164418</link>
<description>LuciEntry: Towards Understanding the Design of Lucid Dream Induction
Wang, Po-Yao (Cosmos); Fang, Xiao Zoe; Ducos, Gabriel; Lee, Nathaniel Yung Xiang; Loose, Antony; Rajesh, Rohit; Botheju, Nethmini; Chen, Eric; Montoya, Maria; Kitson, Alexandra; Konkoly, Karen; Sagi, Rohan; Patibanda, Rakesh; Whitmore, Nathan; Jafarzadeh Esfahani, Mahdad; Deng, Jialin; Bu, Jiajun; Dresler, Martin; Elvitigala, Don Samitha; Semertzidis, Nathan; Mueller, Florian
Lucid dreaming, a state in which people become aware that they are dreaming, is known for its many mental and physical health benefits. However, most lucid dream induction techniques, such as reality testing, require significant time and effort to master, creating a barrier for people seeking these experiences. We designed LuciEntry, a portable interactive prototype aimed at helping people induce lucid dreaming through well-timed visual and auditory cues. We conducted a lab and a field study to understand LuciEntry’s user experience. The interview data allowed us to identify three themes. Building on these findings and our design practice, we derived seven considerations to guide the design of future lucid dream systems. Ultimately, this work aims to inspire further research into interactive technologies for altered states of consciousness.
DIS ’25, Funchal, Portugal
</description>
<pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164418</guid>
<dc:date>2025-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Reality of AI and Biorisk</title>
<link>https://hdl.handle.net/1721.1/164417</link>
<description>The Reality of AI and Biorisk
Peppin, Aidan; Reuel, Anka; Casper, Stephen; Jones, Elliot; Strait, Andrew; Anwar, Usman; Agrawal, Anurag; Kapoor, Sayash; Koyejo, Sanmi; Pellat, Marie; Bommasani, Rishi; Frosst, Nick; Hooker, Sara
To accurately and confidently answer the question “could an AI model or system increase biorisk”, it is necessary to have both a sound theoretical threat model for how AI models or systems could increase biorisk and a robust method for testing that threat model. This paper provides an analysis of existing available research surrounding two AI and biorisk threat models: 1) access to information and planning via large language models (LLMs), and 2) the use of AI-enabled biological tools (BTs) in synthesizing novel biological artifacts. We find that existing studies around AI-related biorisk are nascent, often speculative in nature, or limited in terms of their methodological maturity and transparency. The available literature suggests that current LLMs and BTs do not pose an immediate risk, and more work is needed to develop rigorous approaches to understanding how future models could increase biorisks. We end with recommendations about how empirical work can be expanded to more precisely target biorisk and ensure rigor and validity of findings.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164417</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>SoS Certifiability of Subgaussian Distributions and Its Algorithmic Applications</title>
<link>https://hdl.handle.net/1721.1/164416</link>
<description>SoS Certifiability of Subgaussian Distributions and Its Algorithmic Applications
Diakonikolas, Ilias; Hopkins, Samuel; Pensia, Ankit; Tiegel, Stefan
We prove that there is a universal constant C&gt;0 so that for every d ∈ ℕ, every centered subgaussian distribution D on ℝ^d, and every even p ∈ ℕ, the d-variate polynomial (Cp)^{p/2} · ‖v‖₂^p − E_{X∼D} ⟨v,X⟩^p is a sum of squares of polynomials. This establishes that every subgaussian distribution is SoS-certifiably subgaussian—a condition that yields efficient learning algorithms for a wide variety of high-dimensional statistical tasks. As a direct corollary, we obtain computationally efficient algorithms with near-optimal guarantees for the following tasks, when given samples from an arbitrary subgaussian distribution: robust mean estimation, list-decodable mean estimation, clustering mean-separated mixture models, robust covariance-aware mean estimation, robust covariance estimation, and robust linear regression. Our proof makes essential use of Talagrand’s generic chaining/majorizing measures theorem.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164416</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders</title>
<link>https://hdl.handle.net/1721.1/164415</link>
<description>Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders
Konya, Andrew; Thorburn, Luke; Almasri, Wasim; Leshem, Oded Adomi; Procaccia, Ariel; Schirch, Lisa; Bakker, Michiel
A growing body of work has shown that AI-assisted methods — leveraging large language models, social choice methods, and collective dialogues — can help navigate polarization and surface common ground in controlled lab settings. But what can these approaches contribute in real-world contexts? We present a case study applying these techniques to find common ground between Israeli and Palestinian peacebuilders in the period following October 7th, 2023. From April to July 2024 an iterative deliberative process combining LLMs, bridging-based ranking, and collective dialogues was conducted in partnership with the Alliance for Middle East Peace. Around 138 civil society peacebuilders participated including Israeli Jews, Palestinian citizens of Israel, and Palestinians from the West Bank and Gaza. The process resulted in a set of collective statements, including demands to world leaders, with at least 84% agreement from participants on each side. In this paper, we document the process, results, challenges, and important open questions.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164415</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Recourse, Repair, Reparation, &amp; Prevention: A Stakeholder Analysis of AI Supply Chains</title>
<link>https://hdl.handle.net/1721.1/164414</link>
<description>Recourse, Repair, Reparation, &amp; Prevention: A Stakeholder Analysis of AI Supply Chains
Hopkins, Aspen; Struckman, Isabella; Klyman, Kevin; Silbey, Susan S.
The AI industry is exploding in popularity, with increasing attention to potential harms and unwanted consequences. In the current digital ecosystem, AI deployments are often the product of AI supply chains (AISC): networks of outsourced models, data, and tooling through which multiple entities contribute to AI development and distribution. AI supply chains lack the modularity, redundancies, or conventional supply chain practices that enable identification, isolation, and easy correction of failures, exacerbating the already difficult processes of responding to ML-generated harms. As the stakeholders participating in and impacted by AISCs have scaled and diversified, so too have the risks they face. In this stakeholder analysis of AI supply chains, we consider who participates in AISCs, what harms they face, where sources of harm lie, and how market dynamics and power differentials inform the type and probability of remedies. Because AI supply chains are purposely invented and implemented, they may be designed to account for, rather than ignore, the complexities, consequences, and risks of deploying AI systems. To enable responsible design and management of AISCs, we offer a typology of responses to AISC-induced harms: recourse, repair, reparation or prevention. We apply this typology to stakeholders participating in a health-care AISC across three stylized markets—vertical integration, horizontal integration, free market—to illustrate how stakeholder positioning and power within an AISC may shape responses to an experienced harm.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164414</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>When to Ask a Question: Understanding Communication Strategies in Generative AI Tools</title>
<link>https://hdl.handle.net/1721.1/164413</link>
<description>When to Ask a Question: Understanding Communication Strategies in Generative AI Tools
Park, Charlotte; Donahue, Kate; Raghavan, Manish
Generative AI tools (GAITs) fundamentally differ from traditional machine learning tools in that they allow users to provide as much or as little information as they choose in their inputs. This flexibility often leads users to omit certain details, relying on the GAIT to infer and fill in less critical information based on distributional knowledge of user preferences. Inferences about preferences lead to natural questions about fairness, since a GAIT’s “best guess” may skew towards the preferences of larger groups at the expense of smaller ones. Unlike more traditional recommender systems, GAITs can acquire additional information about a user’s preferences through feedback or by explicitly soliciting it. This creates an interesting communication challenge: the user is aware of their specific preference, while the GAIT has knowledge of the overall distribution of preferences, and both parties can only exchange a limited amount of information. In this work, we present a mathematical model to describe human-AI co-creation of content under information asymmetry. Our results suggest that GAITs can use distributional information about overall preferences to determine the “right” questions to ask to maximize both welfare and fairness, opening up a rich design space in human-AI collaboration.
UMAP Adjunct ’25, New York City, NY, USA
</description>
<pubDate>Thu, 12 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164413</guid>
<dc:date>2025-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>The Cloud Next Door: Investigating the Environmental and Socioeconomic Strain of Datacenters on Local Communities</title>
<link>https://hdl.handle.net/1721.1/164412</link>
<description>The Cloud Next Door: Investigating the Environmental and Socioeconomic Strain of Datacenters on Local Communities
Ngata, Wacuka M; Bashir, Noman; Westerlaken, Michelle; Liote, Laurent; Chandio, Yasra; Olivetti, Elsa
Datacenters have become the backbone of modern digital infrastructure, powering the rapid rise of artificial intelligence and promising economic growth and technological progress. However, this expansion has brought growing tensions in the local communities where datacenters are already situated or being proposed. While the mainstream discourse often focuses on energy usage and carbon footprint of the computing sector at a global scale, the local socio-environmental consequences—such as health impacts, water usage, noise pollution, infrastructural strain, and economic burden—remain largely underexplored and poorly addressed. In this work, we surface these community-level consequences through a mixed-methods study that combines quantitative data with qualitative insights. Focusing on Northern Virginia’s “Data Center Alley,” we highlight how datacenter growth reshapes local environments and everyday life, and examine the power dynamics that determine who benefits and who bears the costs. Our goal is to bring visibility to these impacts and prompt more equitable and informed decisions about the future of digital infrastructure.
COMPASS ’25, Toronto, ON, Canada
</description>
<pubDate>Mon, 21 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164412</guid>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>SuperSONIC: Cloud-Native Infrastructure for ML Inferencing</title>
<link>https://hdl.handle.net/1721.1/164411</link>
<description>SuperSONIC: Cloud-Native Infrastructure for ML Inferencing
Kondratyev, Dmitry; Riedel, Benedikt; Chou, Yuan-Tang; Cochran-Branson, Miles; Paladino, Noah; Schultz, David; Liu, Mia; Duarte, Javier; Harris, Philip; Hsu, Shih-Chieh
The increasing computational demand from growing data rates and complex machine learning (ML) algorithms in large-scale scientific experiments has driven the adoption of the Services for Optimized Network Inference on Coprocessors (SONIC) approach. SONIC accelerates ML inference by offloading it to local or remote coprocessors to optimize resource utilization. Leveraging its portability to different types of coprocessors, SONIC enhances data processing and model deployment efficiency for cutting-edge research in high energy physics (HEP) and multi-messenger astrophysics (MMA). We developed the SuperSONIC project, a scalable server infrastructure for SONIC, enabling the deployment of computationally intensive tasks to Kubernetes clusters equipped with graphics processing units (GPUs). Using NVIDIA Triton Inference Server, SuperSONIC decouples client workflows from server infrastructure, standardizing communication, optimizing throughput, load balancing, and monitoring. SuperSONIC has been successfully deployed for the CMS and ATLAS experiments at the CERN Large Hadron Collider (LHC), the IceCube Neutrino Observatory (IceCube), and the Laser Interferometer Gravitational-Wave Observatory (LIGO) and tested on Kubernetes clusters at Purdue University, the National Research Platform (NRP), and the University of Chicago. SuperSONIC addresses the challenges of the Cloud-native era by providing a reusable, configurable framework that enhances the efficiency of accelerator-based inference deployment across diverse scientific domains and industries.
PEARC ’25, Columbus, OH, USA
</description>
<pubDate>Fri, 18 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164411</guid>
<dc:date>2025-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>What I Don’t Get About AI . . .</title>
<link>https://hdl.handle.net/1721.1/164409</link>
<description>What I Don’t Get About AI . . .
Wright, Randall S.
In a recent MIT News article titled “Explained: Generative AI,” Adam Zewe (2023) writes&#13;
&#13;
But what do people really mean when they say ‘generative AI?’&#13;
&#13;
Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.&#13;
&#13;
Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.
</description>
<pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164409</guid>
<dc:date>2024-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Bringing a Working-Class Archive Online: Multimodal Storytelling in a Post-Industrial City</title>
<link>https://hdl.handle.net/1721.1/164408</link>
<description>Bringing a Working-Class Archive Online: Multimodal Storytelling in a Post-Industrial City
Walley, Christine
We find it familiar to consider objects as useful or aesthetic, as necessities or vain indulgences. We are on less familiar ground when we consider objects as companions to our emotional lives or as provocations to thought. The notion of evocative objects brings together these two less familiar ideas, underscoring the inseparability of thought and feeling in our relationship to things. We think with the objects we love; we love the objects we think with.
</description>
<pubDate>Fri, 03 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164408</guid>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>What is a Right?</title>
<link>https://hdl.handle.net/1721.1/164407</link>
<description>What is a Right?
Setiya, Kieran
This paper argues for a theory of natural rights on which they are explained in terms of reasons supplied by rational consent. When B has a claim-right against A that A φ, A’s non-consent is not a reason for B not to simply make A φ. This theory solves a puzzle that defeats alternative views, including standard will and interest theories, the demand theory of rights, and the view that rights are irreducible or primitive.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164407</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Afterlife of Energy: Post-carbon and Feminist Post-work Politics</title>
<link>https://hdl.handle.net/1721.1/164406</link>
<description>The Afterlife of Energy: Post-carbon and Feminist Post-work Politics
Ghosn, Rania; Vronskaya, Alla; Jia, Ruo; Pohl, Ethel Baraona; Dharia, Namita Vijay; Aidoo, Fallon Samuels; Wolff, Ilze
In the conclusion to her book The Birth of Energy: Fossil Fuels, Thermodynamics, and the Politics of Work, political scientist Cara Daggett considers “A Post-Work Energy Politics” in which she examines the historical coupling of energy and work—meaning human, waged work—in an invitation to disassociate their values and futures. The exponential power of fossil fuels animated the pipedream that powerful, inorganic slaves could substitute unfree human labor, ideas that have driven European imperialism. Fossil fuel systems did not lead, however, to a world beyond work. Rather, today’s “patriarchal slave states” continue to manage the project of putting the world to work through the maximization of productivity, and the subordination of racialized, immigrant, and gendered bodies—who would work for lower, or for no, wages. “The project of work,” Daggett argues, “is in tension with the project of life.” And the rise of “work–life balance” is a mere tactic of governance in which the enemy is fatigue, exhaustion, and burn-out. She suggests, in turn, an alliance between post-carbon and feminist post-work politics and asks: what might it mean for energy politics to refer to the politics of ensuring public vitality? In order to advance a feminist revaluation of work, Daggett draws on Kathi Weeks’s The Problem with Work to outline a project that makes two utopian demands. One demand articulates a paradoxical relationship between the pragmatism of (present) demands and the speculative seeds of possibility; a second demand outlines a utopian form for such politics: partial, fragmented kin to the genre of the manifesto. Daggett concludes with an invitation that “a radical planet politics, if it seeks to contest ecomodernist claims, needs its own politics of pleasure.” In an echo to Daggett’s invitation, the authors of this Educators’ Roundtable were invited to contribute a short text that picks up on the possibilities of a post-carbon, post-work politics.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164406</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>‘May Our Egos Die So That the World May Live’</title>
<link>https://hdl.handle.net/1721.1/164405</link>
<description>‘May Our Egos Die So That the World May Live’
Gupta, Huma
This image-based essay reflects upon the author’s experience of running an experimental filmmaking workshop titled Climate Futures, Cities Past in the spring of 2023 at MIT’s School of Architecture featuring stills from four student films set in Greece, Italy, Pakistan, and Syria. It explores how architectural pedagogy can intersect with filmmaking to offer a critical space outside the studio or seminar paper. Engaging eco-critical and narrative approaches of Stefanie K. Dunning, Jennifer Fay, Ursula K. Le Guin, Donna Haraway, Saidiya Hartman, Adrian J. Ivakhiv, and Ousmane Sembène, it explores how ‘cinema might teach us to die’ or rather, embrace a different eschatological paradigm that moves beyond individual authorship, accomplishment, and post-mortem legacy towards more mutualist, collectivist, and anarchic models of existence. It argues that filmmaking as inquiry can offer a way to collect different kinds of stories that help facilitate the messy, uncomfortable, and wildly creative processes of unworlding and reworlding.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164405</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>From Oil to Information: Caudill, Rowlett, Scott and Architectures of the Energy Crisis</title>
<link>https://hdl.handle.net/1721.1/164404</link>
<description>From Oil to Information: Caudill, Rowlett, Scott and Architectures of the Energy Crisis
Hanly, B. Jack
This paper traces the history of architecture-engineering firm Caudill Rowlett Scott (CRS), roughly 1948–1983, in the context of the postwar oil economy and the 1973 energy crisis. The paper examines CRS’s transformation from a design firm into an energy conglomerate over the course of three decades, as it both concretized the fossil economy between Houston and Saudi Arabia and modeled its own corporate structure after its oil clientele. Analyzing numerous CRS projects designed and built for the oil industry, from corporate office towers to industrial training colleges, the paper looks at a moment in which energy systems and the architectural profession were coproduced through the discourses, practices, and institutions of oil at its most vulnerable historical inflection points. CRS thereby epitomized an energy transition from oil as a substance to oil as information, where a growing postindustrial society would leverage the immaterial dimensions of energy as a foundation for building.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164404</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstrating Xstrings: 3D Printing Cable-driven Mechanism for Actuation, Deformation, and Manipulation</title>
<link>https://hdl.handle.net/1721.1/164403</link>
<description>Demonstrating Xstrings: 3D Printing Cable-driven Mechanism for Actuation, Deformation, and Manipulation
Li, Jiaji; Feng, Shuyue; Perroni-Scharf, Maxine; Liu, Yujia; Guan, Emily; Mueller, Stefanie
In this Demo, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we developed a design tool that allows users to embed cable-driven mechanisms into the object geometry based on the desired interaction by automatically placing joints and cables at the respective locations. The application potential of Xstrings is demonstrated through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164403</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>My CXL Pool Obviates Your PCIe Switch</title>
<link>https://hdl.handle.net/1721.1/164402</link>
<description>My CXL Pool Obviates Your PCIe Switch
Zhong, Yuhong; Berger, Daniel; Zardoshti, Pantea; Saurez, Enrique; Nelson, Jacob; Psistakis, Antonis; Fried, Joshua; Cidon, Asaf
Pooling PCIe devices across multiple hosts offers a promising solution to mitigate stranded I/O resources, enhance device utilization, address device failures, and reduce total cost of ownership. The only viable option today is PCIe switches, which decouple PCIe devices from hosts by connecting them through a hardware switch. However, the high cost and limited flexibility of PCIe switches hinder their widespread adoption beyond specialized datacenter use cases.&#13;
This paper argues that PCIe device pooling can be effectively implemented in software using CXL memory pools. CXL memory pools improve memory utilization and already have positive return on investment. We find that, once CXL pools are in place, they can serve as a building block for pooling any kind of PCIe device. We demonstrate that PCIe devices can directly use CXL memory as I/O buffers without device modifications, which enables routing PCIe traffic through CXL pool memory. This software-based approach is deployable on today's hardware and is more flexible than hardware PCIe switches. In particular, we explore how disaggregating devices such as NICs can transform datacenter infrastructure.
HotOS ’25, May 14–16, 2025, Banff, AB, Canada
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164402</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>PolyMOF nanoparticles constructed from intrinsically microporous polymer ligand towards scalable composite membranes for CO2 separation</title>
<link>https://hdl.handle.net/1721.1/164401</link>
<description>PolyMOF nanoparticles constructed from intrinsically microporous polymer ligand towards scalable composite membranes for CO2 separation
Lee, Tae Hoon; Lee, Byung Kwan; Yoo, Seung Yeon; Lee, Hyunhee; Wu, Wan-Ni; Smith, Zachary P; Park, Ho Bum
Integrating different modification strategies into a single step to achieve the desired properties of metal–organic frameworks (MOFs) has been very synthetically challenging, especially in developing advanced MOF/polymer mixed matrix membranes (MMMs). Herein, we report a polymer–MOF (polyMOF) system constructed from a carboxylated polymer with intrinsic microporosity (cPIM-1) ligand. This intrinsically microporous ligand could coordinate with metals, leading to ~100 nm-sized polyMOF nanoparticles. Compared to control MOFs, these polyMOFs exhibit enhanced ultramicroporosity for efficient molecular sieving, and they have better dispersion properties in casting solutions to prepare MMMs. Ultimately, integrating coordination chemistries through the cPIM-1 and polymer-based functionality into porous materials results in polyMOF/PIM-1 MMMs that display excellent CO2 separation performance (surpassing the CO2/N2 and CO2/CH4 upper bounds). In addition to exploring the physicochemical and transport properties of this polyMOF system, scalability has been demonstrated by converting the developed MMM material into large-area (400 cm2) thin-film nanocomposite (TFN) membranes.
</description>
<pubDate>Thu, 14 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164401</guid>
<dc:date>2023-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Single Layer Silk and Cotton Woven Fabrics for Acoustic Emission and Active Sound Suppression</title>
<link>https://hdl.handle.net/1721.1/164400</link>
<description>Single Layer Silk and Cotton Woven Fabrics for Acoustic Emission and Active Sound Suppression
Yang, Grace H; Lin, Jinuan; Cheung, Henry; Rui, Guanchun; Zhao, Yongyi; Balachander, Latika; Joo, Taigyu; Lee, Hyunhee; Smith, Zachary P; Zhu, Lei; Ma, Chu; Fink, Yoel
Whether intentionally generating acoustic waves or attempting to mitigate unwanted noise, sound control is an area of challenge and opportunity. This study investigates traditional fabrics as emitters and suppressors of sound. When attached to a single strand of a piezoelectric fiber actuator, a silk fabric emits up to 70 dB of sound. Despite the complex fabric structure, vibrometer measurements reveal behavior reminiscent of a classical thin plate. Fabric pore size relative to the viscous boundary layer thickness is found—through comparative fabric analysis—to influence acoustic‐emission efficiency. Sound suppression is demonstrated using two distinct mechanisms. In the first, direct acoustic interference is shown to reduce sound by up to 37 dB. The second relies on pacifying the fabric vibrations by the piezoelectric fiber, reducing the amplitude of vibration waves by 95% and attenuating the transmitted sound by up to 75%. Interestingly, this vibration‐mediated suppression in principle reduces sound in an unlimited volume. It also allows the acoustic reflectivity of the fabric to be dynamically controlled, increasing by up to 68%. The sound emission and suppression efficiency of a 130 µm silk fabric presents opportunities for sound control in a variety of applications ranging from apparel to transportation to architecture.
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164400</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implications of changing the base raw material – the case of license plate manufacturing</title>
<link>https://hdl.handle.net/1721.1/164399</link>
<description>Implications of changing the base raw material – the case of license plate manufacturing
Uygun, Yilmaz; Mohammadian, Noushin; Un Nisa, Mehr
License plates, used to uniquely identify vehicles, mainly use aluminum as the base material. Currently, no distinction is made between different use cases of license plates, such as short-term usage for test drives and transportation, which does not require such long-lasting materials from either a cost or a sustainability perspective. This paper presents a methodology for selecting the best material for different use cases under the holistic consideration of specifically defined criteria concerning material properties, sustainability aspects, and supply chain implications. We show that several candidate materials stand out for different use cases as the importance assigned to these criteria is varied. In addition, the paper delves deeper into the sustainability aspect by means of a comprehensive System Dynamics model. We show that a scenario in which the company picks up used license plates and relies on a logistics service provider to deliver them to an external recycling service provider yields the best results.
</description>
<pubDate>Tue, 31 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164399</guid>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Metalite, a new class of composite laminates with unique properties</title>
<link>https://hdl.handle.net/1721.1/164398</link>
<description>Metalite, a new class of composite laminates with unique properties
Miravete, Antonio
Metalite is a new class of antisymmetric composite laminates composed of angle-plies, 0-degree, and 90-degree plies, presenting unique properties. These include extremely thin laminates suitable for minimum gauge applications, remarkable weight savings compared to conventional quads, adjustable zero and negative coefficients of thermal expansion (CTE), ease of manufacturing, excellent ability to adjust mode frequency, change sound radiation characteristics, and high tunability. In this study, Metalite laminates ranging from 3 to 8 plies are described using their feasible spaces and compared with quads, detailing the weight savings achieved for hard, soft, and neutral laminates. Through an experimental study, the CTE value of a hybrid Metalite is correlated with theory, demonstrating how to tune zero and negative CTE values. The proposed work offers significant benefits through practical solutions for designing and manufacturing lightweight composite laminates with unique properties.
</description>
<pubDate>Sun, 21 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164398</guid>
<dc:date>2024-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Tertiary-Amine-Functional Poly(arylene ether)s for Acid-Gas Separations</title>
<link>https://hdl.handle.net/1721.1/164397</link>
<description>Tertiary-Amine-Functional Poly(arylene ether)s for Acid-Gas Separations
Dean, Pablo A; Wu, Yifan; Guo, Sheng; Swager, Timothy M; Smith, Zachary P
Competitive sorption enables the emergent phenomenon of enhanced CO&lt;sub&gt;2&lt;/sub&gt;-based selectivities for gas separation membranes when using microporous polymers with primary amines. However, strong secondary forces in these polymers through hydrogen bonding result in low solvent solubility, precluding standard solution processing approaches to form these polymers into membrane films. Herein, we circumvent these manufacturing constraints while maintaining competitive-sorption enhancements by synthesizing eight representative microporous poly(arylene ether)s (PAEs) with tertiary amines. High-pressure H&lt;sub&gt;2&lt;/sub&gt;S, CO&lt;sub&gt;2&lt;/sub&gt;, and CH&lt;sub&gt;4&lt;/sub&gt; sorption isotherms were collected for these samples to demonstrate enhanced affinity for acid gases relative to the unfunctionalized control polymer. Although competitive sorption was observed for all samples, improvements were less pronounced than for primary-amine-functional analogs. For H&lt;sub&gt;2&lt;/sub&gt;S-based separations, the benefits of competitive sorption offset decreases in selectivity due to plasticization. This detailed study helps to elucidate the role of tertiary amines for acid gas separations in solution-processable microporous PAEs.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164397</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Weathering the storm: examining how organisations navigate the sea of cybersecurity regulations</title>
<link>https://hdl.handle.net/1721.1/164396</link>
<description>Weathering the storm: examining how organisations navigate the sea of cybersecurity regulations
Proudfoot, Jeffrey G; Cram, W Alec; Madnick, Stuart
Governments around the world routinely regulate the activities of private enterprises to guide the behaviour of individuals and organisations towards acceptable norms. This holds true in a cybersecurity context. However, practitioners report that cybersecurity regulations are often out of date and compliance is confusing, expensive, and time consuming. As a result, organisational leaders are often uncertain about the practicalities of adopting and implementing the various rules, which can lead to trickle-down effects on the robustness of lower-level cybersecurity controls and compliance activities. In this research, we aim to clarify how cybersecurity regulations are operationalised in organisations, as well as reveal the compliance and performance consequences of cybersecurity regulations. To do so, we interviewed 22 senior leaders with expertise in cybersecurity regulations. Our analysis reveals seven distinct themes (i.e., concept groupings) that are ordered within four phases (i.e., temporal stages), which we use to create the Institutional Cybersecurity Regulations Model (ICRM). The results provide a holistic view of the cybersecurity regulations process in organisations that can serve to clarify current theory relationships and inform future research. Moreover, the ICRM can provide a practical roadmap for managers to navigate regulatory cybersecurity challenges in their own companies.
</description>
<pubDate>Sun, 04 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164396</guid>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of Technoeconomic Opportunities in Automation for Nuclear Microreactors</title>
<link>https://hdl.handle.net/1721.1/164395</link>
<description>Assessment of Technoeconomic Opportunities in Automation for Nuclear Microreactors
Naranjo de Candido, Isabel; Al Rashdan, Ahmad; Abou Jaoude, Abdalla; Buongiorno, Jacopo
Achieving full decarbonization of all economic sectors remains a challenge, especially in niche markets. For example, remote communities and industrial or mining activities detached from the main electric grid heavily rely on fossil fuels, similar to urban and industrial microgrids with combined heat and power needs. A combination of renewables and energy storage is often not suitable due to cost, reliability, intermittency, and large storage requirements. Small nuclear reactors with a flexible purpose could serve these applications. Microreactors (MR) are a class of reactors that are compact, factory manufactured, transportable, and self-regulating. Typically, they generate much less power than their large reactor counterparts. The main advantages of microreactors include the versatile nature of the energy produced, the reliability of supply, and freedom from having to transport and store large quantities of fuels on-site, coupled with the absence of dependence on an electrical grid. A strong business case is needed to move from the microreactor prototype to the commercialization phase. In fact, fossil fuels are still relatively inexpensive, and in the near term, carbon credits will be available to virtually compensate for emissions. For microreactors, one of the main costs in operation and maintenance (O&amp;M) is their staffing levels. In this study, we investigate how to optimize the number (and thus the cost) of workers, moving from a traditional, fully manned, on-site personnel approach to an unmanned, remote personnel approach. We examine four different staffing models that can be implemented as the technology matures and evolves. We estimate the staffing needs of each model and build a business case to justify the substitution of on-site personnel with adequate technologies. To do so, we propose a cost model to quantify potential cost reductions from automating O&amp;M activities. 
The model accounts for both the reduction in cost derived from the reduced number of full-time-equivalent (FTE) employees and the increase in cost derived from the need to buy new control hardware as needed. Applying the cost model that we created to different scenarios, an on-site O&amp;M cost reduction exceeding 80% can be expected. Additionally, we found that it is more impactful to focus on automating routine O&amp;M tasks rather than attempting to automate transient management (shutdowns, restarts, monitoring condition deviations). In fact, transients typically account for less than 1% of the total FTE time spent on the reactors.
</description>
<pubDate>Wed, 24 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164395</guid>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing acid–gas separations using free volume manipulation for microporous poly(arylene ether)s</title>
<link>https://hdl.handle.net/1721.1/164394</link>
<description>Enhancing acid–gas separations using free volume manipulation for microporous poly(arylene ether)s
Joo, Taigyu; Wu, Yifan; Lee, Tae Hoon; Dean, Pablo A; Wu, Wan-Ni; Swager, Timothy M; Smith, Zachary P
To address global energy needs, traditional and renewable natural gas will likely be key energy sources for years to come. However, raw feeds require removal of impurities like hydrogen sulfide (H2S) and carbon dioxide (CO2) before use. In this study, we illustrate the key challenges of using traditional post-synthetic modification approaches to simultaneously enhance H2S/CH4 and CO2/CH4 selectivities in microporous polymer membranes, while also demonstrating how free volume manipulation (FVM) can overcome some of these challenges. By integrating tert-butoxycarbonyl-protected piperazinyl (PIP-tBOC) groups into a microporous poly(arylene ether) (PAE-1) and applying thermal treatment with oxygen to degrade the incorporated units in solid-state films, we successfully increased sorption capacity and diffusion selectivity. This modification enhanced the mixed-gas selectivity of H2S/CH4 and CO2/CH4 by 88% and 114%, respectively, compared to the original PAE-1 films. Consequently, the films achieved a combined acid gas (CAG) selectivity of 48, which approached the CAG upper bound for glassy polymers. The FVM process not only improved the selectivity of these membrane films but also markedly increased their resistance to plasticization, making them more suitable for industrial applications in acid–gas separation. This post-synthetic modification strategy, applicable to any glassy polymer containing a nucleophilic aromatic unit, provides a means to leverage the competitive sorption of H2S molecules and the molecular sieving properties of the polymer.
</description>
<pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164394</guid>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Knowledge and ignorance in forensic identification: the origins of a contested human rights fact</title>
<link>https://hdl.handle.net/1721.1/164393</link>
<description>Knowledge and ignorance in forensic identification: the origins of a contested human rights fact
Medina, Eden
In 2006, DNA testing revealed that the Chilean Medical Legal Service had misidentified at least half of the 96 human rights victims whose remains had been exhumed in 1991 from a lot in the Santiago General Cemetery known as Patio 29. Years earlier the government had returned those remains to the victims' families. This examination of the history of that forensic misidentification uncovers the role played by the shifting relations of knowledge and ignorance in establishing the legal facts of those identities. Building on the growing literature in agnotology, the article demonstrates the ways in which the context of dictatorship created varied and overlapping forms of ignorance that continued to shape the outcome of the forensic work even after Chile returned to democracy. By detailing different examples of ignorance production by the state, a human rights organization, and a university department under military surveillance, the article illuminates the diverse ways that the civil–military dictatorship worked against knowledge production in the domains of science and human rights.
</description>
<pubDate>Tue, 31 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164393</guid>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Tweeting during the Pandemic in New York City: Unveiling the Evolving Sentiment Landscape of NYC through a Spatiotemporal Analysis of Geolocated Tweets</title>
<link>https://hdl.handle.net/1721.1/164392</link>
<description>Tweeting during the Pandemic in New York City: Unveiling the Evolving Sentiment Landscape of NYC through a Spatiotemporal Analysis of Geolocated Tweets
Ignaccolo, Carmelo; Wibisono, Kevin; Sutto, Maria Paola; Plunz, Richard A.
This article explores the relationship between spatial factors, socioeconomic conditions, and Twitter (now called X) sentiment in New York City (NYC) during the COVID-19 pandemic. Using Twitter data, the study investigates how sentiment varied across different geographies. It examines whether sentiment scores, unemployment rates, and COVID-19 hospitalization rates in NYC zip codes revealed spatial associations. The research employs sentiment analysis, a natural language processing technique used to algorithmically determine the emotional tone of a text, on a database of geo-located tweets spanning January to December 2020. The findings reveal a shift towards more negative sentiment during the initial year of the pandemic. Moreover, the study uncovers variations in sentiment trends across boroughs and zip codes. Additionally, a zip code-level fixed-effects model demonstrates a statistically significant relationship between sentiment scores and unemployment rates. In summary, this article makes a two-fold contribution: firstly, it adds a spatial lens to the scholarly debate regarding the use of Twitter data as an indicator of publicly expressed sentiment; secondly, it provides empirical evidence on the spatial interconnectedness of sentiment, health (hospitalization), and socioeconomic factors (unemployment). Overall, this research sheds light on the nuanced relationship between sentiment and space during the COVID-19 pandemic in NYC.
</description>
<pubDate>Sun, 26 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164392</guid>
<dc:date>2024-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Solution‐Processable, Ladder‐Branched Polyimides of Intrinsic Microporosity by [4+4] Cycloaddition for Membrane Gas Separation</title>
<link>https://hdl.handle.net/1721.1/164391</link>
<description>Solution‐Processable, Ladder‐Branched Polyimides of Intrinsic Microporosity by [4+4] Cycloaddition for Membrane Gas Separation
Lee, Tae Hoon; Dean, Pablo A; Yeo, Jing Ying; Smith, Zachary P
Advancements in membrane-based gas separation have the potential to address global challenges related to energy and the environment. However, new membrane materials must have excellent separation performance, stability, and processability, and simultaneously achieving all three metrics is extremely challenging. To circumvent these issues, a post-synthetic modification of polyimides of intrinsic microporosity (PIM-PIs) synthesized with a UV light (UV)-reactive anthracene co-monomer is reported. UV irradiation on the PIM-PI solution converts the anthracene units into dianthracene linkages by [4+4] cycloaddition, while the resultant PIM-PI is still solution-processable due to the branched structure. The ladder-like dianthracene moieties significantly increased both microporosity (&lt;20 Å) and ultramicroporosity (&lt;7 Å) of the precursor PIM-PI. Notably, the UV-treated PIM-PI membrane exhibits a large boost in pure-gas CO2 permeability by up to 260%, reaching 376 barrer, while maintaining CO2/CH4 ideal selectivity of 35 at 1 bar. Moreover, the developed membrane material has enhanced stability against physical aging and plasticization and showcases excellent CO2/CH4 mixed-gas selectivity (&gt;30 up to 31 bar feed pressure), which surpasses the 2018 mixed-gas upper bound.
</description>
<pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164391</guid>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive infill topology optimisation guided by user drawn patterns</title>
<link>https://hdl.handle.net/1721.1/164390</link>
<description>Interactive infill topology optimisation guided by user drawn patterns
Schiffer, Gillian; Schmidt, Martin-Pierre; Pedersen, Claus BW; Carstensen, Josephine V
Widespread use of topology optimisation as a design tool for additive manufacturing faces major inhibiting obstacles, such as high computational costs and complexity, concern for other failure modes, and manufacturability. Interactive infill topology optimisation presents an alternative approach to circumvent some of these barriers. The novel contribution of the present work prompts the user to draw a tailored infill pattern, specify regions of interest to locate the infill, and control how strictly the pattern is replicated in the material layout of the design using appearance constraints. This approach improves engineering metrics not directly included in the optimisation formulation by incorporating the user’s engineering experience, thereby avoiding increased computational costs, parameter tuning, and numerical artifacts associated with complex objective functions and constraints. Two 2D benchmark examples increase the linear buckling resistance and energy absorption, respectively, and a 2.5D example minimises compliance while reducing the quantity of overhang supports for additive manufacturing.
</description>
<pubDate>Tue, 31 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164390</guid>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Minimum Bucket and Car Battery Problems</title>
<link>https://hdl.handle.net/1721.1/164389</link>
<description>Minimum Bucket and Car Battery Problems
Feng, Raymond
A solar car needs 5 fully charged batteries to run, and it depletes those batteries in 5 hours. The batteries are rechargeable, and solar panels on the car are able to charge 3 batteries simultaneously. It takes 3 hours for the solar panels to finish charging 3 batteries. Furthermore, batteries cannot be charging and in use at the same time. If the car always starts running as soon as 5 full batteries are available, and the solar panels can only operate if 3 empty batteries are available, how many batteries are needed so that the car can eventually run without stopping? We investigate this resource optimization problem and its different variations.
</description>
<pubDate>Tue, 11 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164389</guid>
<dc:date>2024-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Developers Grappling with Flood Risks: Evaluating Boston’s Climate Resiliency Checklist</title>
<link>https://hdl.handle.net/1721.1/164388</link>
<description>Developers Grappling with Flood Risks: Evaluating Boston’s Climate Resiliency Checklist
Loescher-Montal, Angela; Mazereeuw, Miho; Shen, Kairos
Ongoing waterfront development in risky areas across the globe raises a continuing paradox between resilience initiatives and broader market mechanisms. Even as flood risk increases, existing development patterns often do not adequately account for future flood risk. This research examines resiliency checklists as a growing regulatory tool to improve predevelopment flood resilience standards. The research employs mixed quantitative and qualitative methods to evaluate how four large-scale developments interacted with Boston’s Climate Resiliency Checklist in the last decade and how its current design criteria influenced design decisions. The checklist’s format, design, and time-horizon considerations are evaluated, and improvements to the checklist’s format and smaller-scale tools are considered.
</description>
<pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164388</guid>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>The “content” of intergroup contact: lessons from the Denton Women’s Interracial Fellowship</title>
<link>https://hdl.handle.net/1721.1/164387</link>
<description>The “content” of intergroup contact: lessons from the Denton Women’s Interracial Fellowship
English, Jasmine
Does the content of intergroup contact matter? Despite extensive research on the benefits of contact for intergroup relations, we know little about what happens during contact-based programs and interventions. This article addresses this gap by inductively building theory about the desired content of contact. My analysis draws on oral history interviews and archival data from the Denton Women’s Interracial Fellowship: a real-world case of intergroup contact that emerged to ease the process of school desegregation in Denton, Texas. My analysis of these data moves beyond the scope conditions suggested by Allport (The Nature of Prejudice, 25th ed., Cambridge, MA: Perseus Books, 1954) to highlight the role of conversations about outgroup experiences. I illuminate how these conversations produce positive impacts on intergroup relations and draw out the implications for research on intergroup contact: namely, that forms of intergroup contact that incorporate these conversations are more likely to improve intergroup relations, and that intergroup contact interventions should explicitly encourage or incorporate these kinds of conversations.
</description>
<pubDate>Sun, 14 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164387</guid>
<dc:date>2024-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Roadmap of Graphite Moderator and Graphite-Matrix TRISO Fuel Management Options</title>
<link>https://hdl.handle.net/1721.1/164386</link>
<description>Roadmap of Graphite Moderator and Graphite-Matrix TRISO Fuel Management Options
Forsberg, CW
Most high-temperature reactors use graphite as a moderator and structural material. This includes high-temperature gas-cooled reactors with helium cooling and TRi-structural ISOtropic (TRISO) fuel particles embedded in graphite, as well as fluoride salt–cooled high-temperature reactors with clean salt coolant and TRISO fuel particles embedded in graphite and thermal spectrum molten salt reactors with a graphite moderator and fuel dissolved in the salt. The largest volume radioactive waste stream from these reactors is the irradiated graphite. We describe herein a roadmap for management of these graphite wastes that contain radioactive 14C, tritium, and other radionuclides. There may be some graphite wastes with sufficiently low radioactivity levels that can be treated as nonradioactive waste and managed like other graphite waste. Management options for the graphite include (1) direct disposal, (2) recycled back to the reactor or other nuclear applications, and (3) oxidizing the graphite with release as an effluent or underground sequestration of the carbon dioxide. Cosequestration of this carbon dioxide with carbon dioxide from industrial, biological, and cement production processes can isotopically dilute the 14C before sequestration to eliminate the possibility of exceeding individual radiation exposure limits. We also describe options for processing graphite-matrix TRISO fuel, including separating the bulk graphite to reduce the volumes of used fuel for disposal or processing to recover fissile materials. The inventories of radioactive isotopes in different carbon wastes vary by many orders of magnitude; thus, there is no single economic option for the management of all graphite waste.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164386</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive Model Reduction of High-Order Solutions of Compressible Flows via Optimal Transport</title>
<link>https://hdl.handle.net/1721.1/164385</link>
<description>Adaptive Model Reduction of High-Order Solutions of Compressible Flows via Optimal Transport
Van Heyningen, Robert Loek; Nguyen, Ngoc Cuong; Blonigan, Patrick; Peraire, Jaime
The solution of conservation laws with parametrised shock waves presents challenges for both high-order numerical methods and model reduction techniques. We introduce an r-adaptivity scheme based on optimal transport and apply it to develop reduced order models for compressible flows. The optimal transport theory allows us to compute high-order r-adaptive meshes from a starting reference mesh by solving the Monge–Ampère equation. A high-order discretization of the conservation laws enables high-order solutions to be computed on the resulting r-adaptive meshes. Furthermore, the Monge–Ampère solutions contain mappings that are used to reduce the spatial locality of the resulting solutions and make them more amenable to model reduction. We use a non-intrusive model reduction method to construct reduced order models of both the mesh and the solution. The procedure is demonstrated on three supersonic and hypersonic test cases, with the hybridisable discontinuous Galerkin method being used as the full order model.
</description>
<pubDate>Sun, 28 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164385</guid>
<dc:date>2024-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstrating Thermochromorph: Dynamic Relief Printing with Thermochromic Inks</title>
<link>https://hdl.handle.net/1721.1/164384</link>
<description>Demonstrating Thermochromorph: Dynamic Relief Printing with Thermochromic Inks
Sethapakdi, Ticha; Myers, Paris; Yu, Tianyu; Covarrubias, Juliana; Leake, Mackenzie; Mueller, Stefanie
We demonstrate Thermochromorph, a novel relief printing technique that produces multicolored images that transition into each other through changes in temperature. Our process utilizes two sets of CMYK thermochromic inks that exhibit complementary color-changing behaviors: one shifting from color to transparency, the other from transparency to color at the same activation temperature. We describe our printmaking workflow, provide an open-source software toolkit, showcase prints made with our system, and explore how our system can be used in creative practice through an artist workshop. By incorporating new materials and technology with the rich history of printmaking, our work extends the expressive capabilities of relief printing as the medium continues to evolve.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164384</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform</title>
<link>https://hdl.handle.net/1721.1/164383</link>
<description>Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform
Röddiger, Tobias; Zitz, Valeria; Hummel, Jonas; Küttner, Michael; Lepold, Philipp; King, Tobias; Paradiso, Joseph; Clarke, Christopher; Beigl, Michael
In this demo, we present OpenEarable 2.0, an open-source earphone platform designed to provide an interactive exploration of physiological ear sensing and the development of AI applications. Attendees will have the opportunity to explore real-time sensor data and understand the capabilities of OpenEarable 2.0’s sensing components. OpenEarable 2.0 integrates a rich set of sensors, including two ultrasound-capable microphones (inward/outward), a 3-axis ear canal accelerometer/bone conduction microphone, a 9-axis head inertial measurement unit, a pulse oximeter, an optical temperature sensor, an ear canal pressure sensor, a microSD slot, and a microcontroller. Participants will be able to try out the web-based dashboard and mobile app for real-time control and data visualization. Furthermore, the demo will show different applications and real-time data based on OpenEarable 2.0 across physiological sensing and health monitoring, movement and activity tracking, and human-computer interaction.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164383</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Conductive Ceramics: Embedding Electronics in Everyday Ceramic Objects</title>
<link>https://hdl.handle.net/1721.1/164382</link>
<description>Conductive Ceramics: Embedding Electronics in Everyday Ceramic Objects
Chin, Sam; Kim, Keunwook; An, Audrey; Kuang, Quincy; Zhang, Kai
We present a method for integrating conductive traces into ceramic objects using a silver-based glaze compatible with traditional firing processes. Our glaze combines silver powder with a glass former and xanthan gum, enabling application through standard ceramic techniques while maintaining the durability of conventional ceramics. Through a material-driven experimentation approach, we characterized how glaze composition and post-processing methods affect conductivity and surface quality. We demonstrate this technique through functional prototypes including a temperature-responsive heating vessel, a touch-sensitive musical controller utilizing kintsugi repair, and an interactive marble machine. This work bridges traditional ceramic craft with interactive technology, offering ceramicists a way to incorporate electronic functionality while preserving traditional methods.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164382</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Resource-Efficient Compound AI Systems</title>
<link>https://hdl.handle.net/1721.1/164381</link>
<description>Towards Resource-Efficient Compound AI Systems
Chaudhry, Gohar Irfan; Choukse, Esha; Goiri, Íñigo; Fonseca, Rodrigo; Belay, Adam; Bianchini, Ricardo
Compound AI Systems, integrating multiple interacting components like models, retrievers, and external tools, have emerged as essential for addressing complex AI tasks. However, current implementations suffer from inefficient resource utilization due to tight coupling between application logic and execution details, a disconnect between orchestration and resource management layers, and the perceived exclusiveness between efficiency and quality.
We propose a vision for resource-efficient Compound AI Systems through a declarative workflow programming model and an adaptive runtime system for dynamic scheduling and resource-aware decision-making. Decoupling application logic from low-level details exposes levers for the runtime to flexibly configure the execution environment and resources, without compromising on quality. Enabling collaboration between the workflow orchestration and cluster manager enables higher efficiency through better scheduling and resource management.
We are building a prototype system, called Murakkab, to realize this vision. Our preliminary evaluation demonstrates speedups up to ~3.4× in workflow completion times while delivering ~4.5× higher energy efficiency, showing promise in optimizing resources and advancing AI system design.
HOTOS 25, May 14–16, 2025, Banff, AB, Canada
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164381</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Fits like a Flex-Glove: Automatic Design of Personalized FPCB-Based Tactile Sensing Gloves</title>
<link>https://hdl.handle.net/1721.1/164380</link>
<description>Fits like a Flex-Glove: Automatic Design of Personalized FPCB-Based Tactile Sensing Gloves
Murphy, Devin; Li, Yichen; Owens, Crystal; Stanton, Layla; Liang, Paul Pu; Luo, Yiyue; Torralba, Antonio; Matusik, Wojciech
Resistive tactile sensing gloves have captured the interest of researchers spanning diverse domains, such as robotics, healthcare, and human-computer interaction. However, existing fabrication methods often require labor-intensive assembly or costly equipment, limiting accessibility. Leveraging flexible printed circuit board (FPCB) technology, we present an automated pipeline for generating resistive tactile sensing glove design files solely from a simple hand photo on legal-size paper, which can be readily supplied to commercial board houses for manufacturing. Our method enables cost-effective, accessible production at under $130 per glove with sensor assembly times under 15 minutes. Sensor performance was characterized under varying pressure loads, and a preliminary user evaluation showcases four unique automatically manufactured designs, evaluated for their reliability and comfort.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164380</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs</title>
<link>https://hdl.handle.net/1721.1/164379</link>
<description>Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs
Khan, Ariba; Casper, Stephen; Hadfield-Menell, Dylan
Research on the ‘cultural alignment’ of Large Language Models (LLMs) has emerged in response to growing interest in understanding representation across diverse stakeholders. Current approaches to evaluating cultural alignment through survey-based assessments that borrow from social science methodologies often overlook systematic robustness checks. We identify and test three assumptions behind current survey-based evaluation methods: (1) Stability: that cultural alignment is a property of LLMs rather than an artifact of evaluation design, (2) Extrapolability: that alignment with one culture on a narrow set of issues predicts alignment with that culture on others, and (3) Steerability: that LLMs can be reliably prompted to represent specific cultural perspectives. Through experiments examining both explicit and implicit preferences of leading LLMs, we find a high level of instability across presentation formats, incoherence between evaluated versus held-out cultural dimensions, and erratic behavior under prompt steering. We show that these inconsistencies can cause the results of an evaluation to be very sensitive to minor variations in methodology. Finally, we demonstrate in a case study on evaluation design that narrow experiments and a selective assessment of evidence can be used to paint an incomplete picture of LLMs’ cultural alignment properties. Overall, these results highlight significant limitations of current survey-based approaches to evaluating the cultural alignment of LLMs and highlight a need for systematic robustness checks and red-teaming for evaluation results. Data and code are available at https://doi.org/akhan02/cultural-dimension-cover-letters and https://doi.org/ariba-k/llm-cultural-alignment-evaluation, respectively.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164379</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Aptly: Making Mobile Apps from Natural Language</title>
<link>https://hdl.handle.net/1721.1/164378</link>
<description>Aptly: Making Mobile Apps from Natural Language
Patton, Evan; Kim, David; Granquist, Ashley; Liu, Robin; Scott, Arianna; Zamanova, Jennet; Abelson, Harold
This paper introduces Aptly, a platform designed to democratize mobile app development, particularly for young learners. Aptly integrates a Large Language Model (LLM) with App Inventor, enabling users to create apps using natural language. A user’s description is translated into a programming language that corresponds to App Inventor’s visual blocks. A preliminary study with high school students demonstrated the usability and potential of the platform. Prior programming experience influenced how users interacted with Aptly. Participants identified areas for improvement and expressed a shift in perspective regarding programming accessibility and AI’s role in creative endeavors.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164378</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>AcceloPrint: Fabricating Customizable Accelerometers with Multi-Material 3D Printing</title>
<link>https://hdl.handle.net/1721.1/164377</link>
<description>AcceloPrint: Fabricating Customizable Accelerometers with Multi-Material 3D Printing
Ozbek, Doga; AlAlawi, Marwa; Wessely, Michael
We introduce AcceloPrint, 3D-printed acceleration sensors that can be fabricated in one pass alongside a 3D object and report on its angular orientation or acceleration. AcceloPrint utilizes capacitive sensing to track the deflection of a 3D printed cantilever beam relative to a sensor patch. Our AcceloPrint tool integrated into a 3D editor generates a sensor with a user-defined sensing range generated by our computational model. We also propose a novel sensor design with an adjustable sensing range post-fabrication. Our technical evaluation shows our sensor can detect acceleration up to 50 m/s², with a root mean squared error of 0.35 m/s² (3.57%) in the range up to 10 m/s². We demonstrate AcceloPrint with three application examples on sports performance tracking and tangible tools.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164377</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>"How can we learn and use AI at the same time?": Participatory Design of GenAI with High School Students</title>
<link>https://hdl.handle.net/1721.1/164376</link>
<description>"How can we learn and use AI at the same time?": Participatory Design of GenAI with High School Students
Pu, Isabella; Ravi, Prerna; Dinh, Linh; Joe, Chelsea; Ogoe, Caitlin; Li, Zixuan; Breazeal, Cynthia; Ostrowski, Anastasia
As generative AI (GenAI) emerges as a transformative force, clear understanding of high school students’ perspectives is essential for GenAI’s meaningful integration in high school environments. In this work, we draw insights from a participatory design workshop where we engaged 17 high school students—a group rarely involved in prior research in this area—through the design of novel GenAI tools and school policies addressing their key concerns. Students identified challenges and developed solutions outlining their ideal features in GenAI tools, appropriate school use, and regulations. These centered around the problem spaces of combating bias &amp; misinformation, tackling crime &amp; plagiarism, preventing over-reliance on AI, and handling false accusations of academic dishonesty. Building on our participants’ underrepresented perspectives, we propose new guidelines targeted at educational technology designers for development of GenAI technologies in high schools. We also argue for further incorporation of student voices in development of AI policies in their schools.
IDC ’25, Reykjavik, Iceland
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164376</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Bird: A Point Cursor for Virtual Immersive Environments</title>
<link>https://hdl.handle.net/1721.1/164375</link>
<description>Bird: A Point Cursor for Virtual Immersive Environments
Simonson, Aubrey; Gretton, Dana; Harteveld, Casper
This paper introduces the Bird, a novel point cursor for immersive virtual environments (IVEs) that enables precise, one-handed control over a point in 3D space beyond arm’s reach. Interaction techniques commonly used in VR today lack this functionality. While direct manipulation allows for control of the position of an object in 3D space, it is limited to arm’s reach. Ray-casting enables interaction at a distance but specifies a line rather than a point, making it impossible to move objects closer or farther without additional mechanics. The Bird overcomes these limitations by allowing users to select any visible object and place it anywhere within view, with one hand and without requiring a controller. We explore a range of use cases that highlight the Bird’s potential to expand the design space for spatial computing.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164375</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>On Approximability of Satisfiable &#119896;-CSPs: V</title>
<link>https://hdl.handle.net/1721.1/164374</link>
<description>On Approximability of Satisfiable &#119896;-CSPs: V
Bhangale, Amey; Khot, Subhash; Minzer, Dor
We propose a framework of algorithm vs. hardness for all Max-CSPs and demonstrate it for a large class of predicates. This framework extends the work of Raghavendra [STOC, 2008], who showed a similar result for almost satisfiable Max-CSPs. Our framework is based on a new hybrid approximation algorithm, which uses a combination of the Gaussian elimination technique (i.e., solving a system of linear equations over an Abelian group) and the semidefinite programming relaxation. We complement our algorithm with a matching dictator vs. quasirandom test that has perfect completeness. The analysis of our dictator vs. quasirandom test is based on a novel invariance principle, which we call the mixed invariance principle. Our mixed invariance principle is an extension of the invariance principle of Mossel, O’Donnell and Oleszkiewicz [Annals of Mathematics, 2010] which plays a crucial role in Raghavendra’s work. The mixed invariance principle allows one to relate 3-wise correlations over discrete probability spaces with expectations over spaces that are a mixture of Gaussian spaces and Abelian groups, and may be of independent interest.
STOC ’25, Prague, Czechia
</description>
<pubDate>Sun, 15 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164374</guid>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</item>
<item>
<title>Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans</title>
<link>https://hdl.handle.net/1721.1/164373</link>
<description>Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans
Suh, Ashley; Hurley, Isabelle; Smith, Nora; Siu, Ho Chit
This late-breaking work presents a large-scale analysis of explainable AI (XAI) literature to evaluate claims of human explainability. We collaborated with a professional librarian to identify 18,254 papers containing keywords related to explainability and interpretability. Of these, we find that only 253 papers included terms suggesting human involvement in evaluating an XAI technique, and just 128 of those conducted some form of a human study. In other words, fewer than 1% of XAI papers (0.7%) provide empirical evidence of human explainability when compared to the broader body of XAI literature. Our findings underscore a critical gap between claims of human explainability and evidence-based validation, raising concerns about the rigor of XAI research. We call for increased emphasis on human evaluations in XAI studies and provide our literature search methodology to enable both reproducibility and further investigation into this widespread issue.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164373</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Characteristics of Driver Peripheral Vision: How Drivers Respond to Ubiquitous Information on Wide-Area In-Vehicle Displays</title>
<link>https://hdl.handle.net/1721.1/164372</link>
<description>Characteristics of Driver Peripheral Vision: How Drivers Respond to Ubiquitous Information on Wide-Area In-Vehicle Displays
Huang, Hongwei; Li, Jiateng; Feng, Xuejing; Ma, Jun; Mehler, Bruce
Despite advancements in In-Vehicle Information Systems (IVIS) and extensive research on screen layouts, the influence of drivers’ peripheral vision on interactions with evolving multi-screen and large display technologies remains poorly understood. This study examines drivers’ responses to in-vehicle interactive information through peripheral vision, aiming to optimize visual interaction efficiency and enhance driving safety. Analyzing data from 216 participants in a driving simulator, we explored how horizontal eccentricity, screen type, cognitive load, visual crowding, and stimulus type affect perception rates and reaction times. Our findings highlight the significance of these factors and the need for driver-centered design. The results suggest designing IVIS that align with natural visual tendencies to improve interaction efficiency and driving safety.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164372</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators</title>
<link>https://hdl.handle.net/1721.1/164371</link>
<description>Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators
Jorgensen, Steven; Hemberg, Erik; Toutouh, Jamal; O'Reilly, Una-May
This study explores a novel approach to neural network pruning using evolutionary computation, focusing on simultaneously pruning the encoder and decoder of an autoencoder. We introduce two new mutation operators that use layer activations to guide weight pruning. Our findings reveal that one of these activation-informed operators outperforms random pruning, resulting in more efficient autoencoders with comparable performance to canonically trained models. Prior work has established that autoencoder training is effective and scalable with a spatial coevolutionary algorithm that cooperatively coevolves a population of encoders with a population of decoders, rather than one autoencoder. We evaluate how the same activity-guided mutation operators transfer to this context. We find that, in the coevolutionary setting, random pruning outperforms guided pruning. This suggests activation-based guidance proves more effective in low-dimensional pruning environments, where constrained sample spaces can lead to deviations from true uniformity in randomization. Conversely, population-driven strategies enhance robustness by expanding the total pruning dimensionality, achieving statistically uniform randomness that better preserves system dynamics. We experiment with pruning according to different schedules and present the best combinations of operator and schedule for the canonical and coevolving-populations cases.
GECCO ’25, July 14–18, 2025, Malaga, Spain
</description>
<pubDate>Sun, 13 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164371</guid>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>Mirai: A Wearable Proactive AI "Inner-Voice" for Contextual Nudging</title>
<link>https://hdl.handle.net/1721.1/164370</link>
<description>Mirai: A Wearable Proactive AI "Inner-Voice" for Contextual Nudging
Fang, Cathy Mengying; Samaradivakara, Yasith; Maes, Pattie; Nanayakkara, Suranga
People often find it difficult to turn their intentions into real actions—a challenge that affects both personal growth and mental well-being. While established methods like cognitive-behavioral therapy and mindfulness training help people become more aware of their behaviors and set clear goals, these approaches cannot provide immediate guidance when people fall into automatic reactions or habits. We introduce Mirai, a novel wearable AI system with an integrated camera, real-time speech processing, and personalized voice-cloning to provide proactive and contextual nudges for positive behavior change. Mirai continuously monitors and analyzes the user’s environment to anticipate their intentions, generating contextually-appropriate responses delivered in the user’s own cloned voice. We demonstrate the application of Mirai through three scenarios focusing on dietary choices, work productivity, and communication skills. We also discuss future work on improving the proactive agent via human feedback and the need for a longitudinal study in naturalistic settings.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164370</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Tuning of Self‐Conductive Polymer as Gas Diffusion Layer for Electrocatalytic Reactions at High Current</title>
<link>https://hdl.handle.net/1721.1/164357</link>
<description>Structural Tuning of Self‐Conductive Polymer as Gas Diffusion Layer for Electrocatalytic Reactions at High Current
Noh, Hwiyoon; Lee, Tae Hoon; Ahn, Sang Hyun; Davis, Jonathan T; Jeong, Daecheol; Gounder, Rajamani; Smith, Zachary P; Boudouris, Bryan W; Tackett, Brian M
Electrocatalytic conversions offer a promising route for sustainable chemical production using renewable energy. Gas diffusion layers (GDLs) enable selective product formation at high current densities but suffer from electrolyte flooding, and polytetrafluoroethylene (PTFE)-based GDLs typically require metal conductive layers, which constrain catalyst development. A recently developed GDL configuration, electropolymerized poly(3,4-ethylenedioxythiophene) (PEDOT)-coated PTFE, demonstrates notable flooding resistance, but suffers from gas diffusion limitations at elevated currents due to limited gas diffusion through the PEDOT layer. Here, different dopants in PEDOT are exploited to modify the physical properties and enhance gas transport. ClO4−-doped PEDOT exhibits superior performance due to optimized physical structure, leading to increased gas permeance and faradaic efficiency (FE) for CO production during electrocatalytic CO2 reduction. Further optimization of coverage and thickness achieved by adjusting charge density led to an optimal configuration at 33 mC cm−2. This GDL supports various metal electrocatalysts and demonstrates an FE for CO of &gt;90% for over 150 h at −200 mA cm−2 using a commercial silver electrocatalyst. This work highlights the importance of GDL engineering in enhancing performance and durability for long-term electrocatalytic processes.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164357</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Navigating Emotions Through Art</title>
<link>https://hdl.handle.net/1721.1/164356</link>
<description>Navigating Emotions Through Art
Wu, Christine; Kumar, Ila; Picard, Rosalind
In this study, we design and deploy a novel system to examine the safety and efficacy of using a chatbot to conduct aspects of art therapy with youth who have experienced developmental trauma, focusing on supporting emotion identification, processing, and expression. This publication describes phase one, gathering feedback on the system from practicing art therapists (n = 17) and making recommendations for how to evolve such work in beneficial ways to meet the needs of trauma-impacted youth. Our findings highlight the potential value of chatbots for trauma-impacted youth as well as important reflection questions these chatbots should ask. Additionally, the study discusses the risk of harm associated with chatbot interventions, particularly if the conversation brings up negative emotions that the chatbot fails to help process. Finally, we end by presenting a set of practitioner-driven recommendations for chatbot designers who are interested in helping trauma-impacted youth understand and cope with their emotions, leveraging art therapy techniques.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164356</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Tradition and Technology: Human-AI Interface for Exploration and Co-Creation of Classical Dance Heritage</title>
<link>https://hdl.handle.net/1721.1/164355</link>
<description>Bridging Tradition and Technology: Human-AI Interface for Exploration and Co-Creation of Classical Dance Heritage
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Bhongse-tong, Piyaporn; Maes, Pattie; Klunchun, Pichet
This paper introduces Text2Tradition, a system designed to bridge the epistemological gap between modern language processing and traditional dance knowledge by translating user-generated prompts into Thai classical dance repertoire. Our system interprets user prompts through the lens of Mae Bot Yai—the 59 foundational movements constituting the vocabulary of traditional Thai dance—and incorporates six choreographic elements that encode centuries of cultural knowledge. This research explores the fertile tension between two knowledge systems: the embodied, culturally-specific wisdom of traditional dance and the data-driven, statistically-derived, and often Western-centric intelligence of LLMs. By mediating between these epistemologies, we highlight the potential of AI-mediated systems not only to preserve traditional forms but also to foster new cultural co-creations, suggesting that these tensions can be harnessed to stimulate cultural dialogue and innovation.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164355</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>A Write-Optimized Distributed B+Tree Index on Disaggregated Memory</title>
<link>https://hdl.handle.net/1721.1/164354</link>
<description>A Write-Optimized Distributed B+Tree Index on Disaggregated Memory
Kraska, Tim
If it were possible to scale memory independently from compute, the amount of memory could be adjusted dynamically based on the workload, enabling better resource utilization. Consider a workload that is dynamic in its number of queries but has very strict response-time requirements that can only be met if data is kept in memory. In this case, separating compute and memory would make it possible to scale compute with the number of queries while keeping all the data in memory at all times. This design principle is already used by services such as Google, which keeps the entire web index in memory.
</description>
<pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164354</guid>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven AI Avatars for Valuation in Dating Scenarios</title>
<link>https://hdl.handle.net/1721.1/164353</link>
<description>Data-Driven AI Avatars for Valuation in Dating Scenarios
Baradari, Dünya; Polimetla, Tejaswi; Maes, Pattie
Dating applications facilitate partner selection by presenting curated information about potential matches. However, traditional dating profiles often fail to convey the depth of a person’s personality, communication style, and lived experience, leading to inefficiencies in the match-finding process. This work-in-progress study introduces and evaluates two novel, data-driven dating interfaces: (1) a Data Dashboard, which aggregates and visualizes insights from a user’s digital footprint, and (2) an AI Avatar, an interactive, voice-enabled model using personal data to simulate real-world interactions. A user study with nine participants comparing these interfaces against traditional dating profiles reveals that the Data Dashboard enables more accurate personality assessments but imposes a high cognitive load. Meanwhile, the AI Avatar enhances engagement and enjoyability but raises concerns about trust and emotional investment. Our findings highlight the challenge of maintaining authenticity in AI-mediated interactions and bridging the gap between digital and real-life personas.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164353</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Cultivating a Supportive Sphere: Designing Technology to Increase Social Support for Foster-Involved Youth</title>
<link>https://hdl.handle.net/1721.1/164352</link>
<description>Cultivating a Supportive Sphere: Designing Technology to Increase Social Support for Foster-Involved Youth
Kumar, Ila; Ferguson, Craig; Wu, Jiayi; Picard, Rosalind
Approximately 400,000 youth in the US are living in foster care due to experiences with abuse or neglect at home [17]. For multiple reasons, these youth often don’t receive adequate social support from those around them. Despite technology’s potential, very little work has explored how these tools can provide more support to foster-involved youth. To begin to fill this gap, we worked with current and former foster-involved youth to develop the first digital tool that aims to increase social support for this population, creating a novel system in which users complete reflective check-ins in an online community setting. We then conducted a pilot study with 15 current and former foster-involved youth, comparing the effect of using the app for two weeks to two weeks of no intervention. We collected qualitative and quantitative data, which demonstrated that this type of interface can provide youth with types of social support that are often not provided by foster care services and other digital interventions. The paper details the motivation behind the app, the trauma-informed design process, and insights gained from this initial evaluation study. Finally, the paper concludes with recommendations for designing digital tools that effectively provide social support to foster-involved youth.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164352</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Allocation Multiplicity: Evaluating the Promises of the Rashomon Set</title>
<link>https://hdl.handle.net/1721.1/164351</link>
<description>Allocation Multiplicity: Evaluating the Promises of the Rashomon Set
Jain, Shomik; Wang, Margaret; Creel, Kathleen; Wilson, Ashia
The Rashomon set of equally-good models promises less discriminatory algorithms, reduced outcome homogenization, and fairer decisions through model ensembles or reconciliation. However, we argue from the perspective of allocation multiplicity that these promises may remain unfulfilled. When there are more qualified candidates than resources available, many different allocations of scarce resources can achieve the same utility. This space of equal-utility allocations may not be faithfully reflected by the Rashomon set, as we show in a case study of healthcare allocations. We attribute these unfulfilled promises to several factors: limitations in empirical methods for sampling from the Rashomon set, the standard practice of deterministically selecting individuals with the lowest risk, and structural biases that cause all equally-good models to view some qualified individuals as inherently risky.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164351</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Interactive Sketchpad: A Multimodal Tutoring System for Collaborative, Visual Problem-Solving</title>
<link>https://hdl.handle.net/1721.1/164350</link>
<description>Interactive Sketchpad: A Multimodal Tutoring System for Collaborative, Visual Problem-Solving
Lee, Jimin; Chen, Steven-Shine; Liang, Paul Pu
Humans have long relied on visual aids like sketches and diagrams to support reasoning and problem-solving. Visual tools, like auxiliary lines in geometry or graphs in calculus, are essential for understanding complex ideas. However, many tutoring systems remain text-based, providing feedback only through natural language. Leveraging recent advances in Large Multimodal Models (LMMs), this paper introduces Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. Built on a pre-trained LMM, Interactive Sketchpad is fine-tuned to provide step-by-step guidance in both text and visuals, enabling natural multimodal interaction with the student. Accurate and robust diagrams are generated by incorporating code execution into the reasoning process. User studies conducted on math problems in geometry, calculus, and trigonometry demonstrate that Interactive Sketchpad leads to improved task comprehension, problem-solving accuracy, and engagement levels, highlighting its potential for transforming educational technologies. All code is available at: https://stevenshinechen.github.io/interactivesketchpad/.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164350</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Meeting at Crossroads: An exploration of playful listening through a co-creative AI game</title>
<link>https://hdl.handle.net/1721.1/164349</link>
<description>Meeting at Crossroads: An exploration of playful listening through a co-creative AI game
Lee, Cassandra; Dimitrakopoulou, Dimitra; Roy, Deb
Active listening is a well-established cornerstone of empathetic communication and a hallmark of “civic competence”, but it is a challenging and energy-consuming skill. Games offer a provocative lens to consider how active listening could be explored playfully. In this paper, we present Crossroads, an interactive social game which makes active listening fun by inviting players to co-create images about one another’s personal experiences. Deployed through a tablet-mobile web app, players take turns acting in ‘listener roles’ to generate AI images, and eventually uncover a collective picture along a “crossroad” shaped map. An initial mixed-method evaluation with 36 users demonstrates that players find the experience highly engaging and feel especially heard during in-game conversations. This work contributes a novel game which uses AI to mediate empathetic dialogue, and surfaces questions about the trade-offs of gamifying listening.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164349</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>OpenMC Interpretation of FNS SINBAD Shielding Benchmark Experiments</title>
<link>https://hdl.handle.net/1721.1/164343</link>
<description>OpenMC Interpretation of FNS SINBAD Shielding Benchmark Experiments
Ebiwonjumi, Bamidele; Segantin, Stefano; Peterson, Ethan
The Fusion Neutron Source (FNS) clean benchmark experiments on tungsten, vanadium, and beryllium assemblies from the SINBAD (Shielding Integral Benchmark Archive and Database) are analyzed to experimentally validate OpenMC (version 0.14.1-dev) fusion neutronics capabilities. The assemblies were irradiated with a 14-MeV deuterium-tritium neutron source. Neutron spectra, photon spectra, reaction rates, gamma heating rates (GHRs), and tritium production rates (TPRs) are compared to measured data in the experimental assemblies and MCNP-6.2 results. In the tungsten case, slight overestimations of the experimental data were observed in the neutron spectra, and the photon spectra agreed well with the experiments. Most of the GHRs agreed with the measured data within the range of experimental uncertainty in the tungsten and vanadium assemblies. In the vanadium assembly, the calculated neutron spectra underestimated the experiments in the low energy region while the photon spectra were well calculated when compared to experiments. The most noticeable discrepancies with experimental data in the gamma heating were observed at detector positions closest to the source. For the reaction rates, notable discrepancies with experimental data were seen at the front and rear of the assemblies. Compared to experiments, the OpenMC neutron spectra were well predicted in the beryllium assembly, whereas the calculated fission reaction rate and TPRs overestimated the experiments, an observation similar to that which has been reported by other authors. The average overall calculation-to-experiment ratios (C/E) over nine TPR and seven GHR measurements were 1.03 ± 0.20 and 0.95 ± 0.14, respectively. As a verification exercise, the OpenMC benchmark results indicated accuracy comparable to MCNP-6.2. In general, the validation exercise showed that OpenMC can be used to analyze fusion neutronics shielding benchmark problems.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164343</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>SoK: Acoustic Side Channels</title>
<link>https://hdl.handle.net/1721.1/164342</link>
<description>SoK: Acoustic Side Channels
Wang, Ping; Nagaraja, Shishir; Bourquard, Aurélien; Gao, Haichang; Yan, Jeff
Acoustic side channels (ASCs) have been known for several decades, highlighting the tangible security risks posed by unintended sound emissions from computing and electronic systems. Their existence has drawn considerable attention from researchers, driving rapid progress in both attack methodologies and defense mechanisms across a wide range of scenarios. In this paper, we provide a state-of-the-art analysis of ASCs, covering all the significant academic research in the area. First, we clarify existing ambiguities and conceptual confusion, proposing a clear definition of ASC. Second, we analyse the characteristics of known ASCs, discuss their security implications, and propose the first taxonomy. Next, we summarize attack techniques, discuss countermeasures, and identify areas for future research. We also link side channels and inverse problems, two fields that appear to be completely isolated from each other but have deep connections.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164342</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Umm Kamel’s Affair: How Infidelity Liberated the Night Sky in Jabal ‘Amil</title>
<link>https://hdl.handle.net/1721.1/164341</link>
<description>Umm Kamel’s Affair: How Infidelity Liberated the Night Sky in Jabal ‘Amil
Nahleh, Mohamad
Weakened by the expansion of several imperial and colonial projects, the inhabitants of Jabal ‘Amil survived as second-class citizens, severed from the urban expression of Lebanese nationalism, and having to formulate their identity amid countless transgressions on their scholarship and literary production. It is thus in the spectacles of the universe and the mysteries of the cosmos that they inscribed fragments of their oral legacy, turning the night sky into an archive that no empire could burn or colonize. And yet it is light pollution, leaking from the same cities they were once forced to nourish, that quickly established itself as the main transgressor, clearing the faintest stories in their celestial library. Although distant manifestations of Islamic cosmology could no longer animate their rural nights, new alterations in the sky after dark, no matter how violent, have proven worthy carriers of their modern myths and legends. And it is onto the loudest object in their polluted sky, the Israeli reconnaissance drone IAI Searcher MK, that they grafted the tale of their legendary matriarch Umm Kamel. I argue that Umm Kamel’s physical and symbolic ascent into the sky was orchestrated by a modern generation of ‘Amilis whose infidelity to the celestial stories authored by their ancestors fortified their ability to transform the combined pressures of pollution and colonization. United by their efforts to forge new imaginaries around a starless night, they invite reflection on the possibility (and responsibility) of confronting the sky we have together inherited rather than lamenting the one we have lost. In tracing Umm Kamel’s transformation from figure to constellation, I contend that their cosmic interventions set the stage for new alliances between design and darkness, and ultimately, for a more expanded imagination of night design, particularly within the context of the climate crisis.
</description>
<pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164341</guid>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Ophthalmology Optical Coherence Tomography Databases for Artificial Intelligence Algorithm: A Review</title>
<link>https://hdl.handle.net/1721.1/164340</link>
<description>Ophthalmology Optical Coherence Tomography Databases for Artificial Intelligence Algorithm: A Review
Restrepo, David; Quion, Justin Michael; Do Carmo Novaes, Frederico; Azevedo Costa, Iago Diogenes; Vasquez, Constanza; Bautista, Alyssa Nicole; Quiminiano, Ellaine; Lim, Patricia Abigail; Mwavu, Roger; Celi, Leo Anthony; Nakayama, Luis Filipe
BACKGROUND: Imaging plays a pivotal role in eye assessment. With the introduction of advanced machine learning and artificial intelligence (AI), the focus has shifted to imaging datasets in ophthalmology. While disparities and health inequalities hidden within data are well-documented, the ophthalmology field faces specific challenges to the creation and maintenance of datasets. Optical Coherence Tomography (OCT) is useful for the diagnosis and monitoring of retinal pathologies, making it valuable for AI applications. This review aims to identify and compare the landscape of publicly available optical coherence tomography databases for AI applications.
METHODS: We conducted a literature review on OCT and AI articles with publicly accessible datasets, using PubMed, Scopus, and Web of Science databases. The review retrieved 183 articles, and after full-text analysis, 50 articles were included. From the included articles, 8 publicly available OCT datasets were identified, with a focus on patient demographics and clinical details for thorough assessment and comparison.
RESULTS: The resulting datasets encompass 154,313 images collected from Spectralis, Cirrus HD, Topcon 3D, and Bioptigen devices. These datasets included normal exams, age-related macular degeneration, and diabetic maculopathy, among others. Comprehensive demographic information is available in one dataset, and the USA is the most represented population.
DISCUSSION: Current publicly available OCT databases for AI applications exhibit limitations, stemming from their non-representative nature and the lack of comprehensive demographic information. Limited datasets hamper research and equitable AI development. To promote equitable AI algorithmic development in ophthalmology, there is a need for the creation and dissemination of more representative datasets.
</description>
<pubDate>Tue, 02 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164340</guid>
<dc:date>2024-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>Market Design for Capacity Sharing in Networks</title>
<link>https://hdl.handle.net/1721.1/164339</link>
<description>Market Design for Capacity Sharing in Networks
Amin, Saurabh; Jaillet, Patrick; Pulyassary, Haripriya; Wu, Manxi
We study a market mechanism that sets edge prices to incentivize strategic agents to efficiently share limited network capacity. In this market, agents form coalitions, with each coalition sharing a unit capacity of a selected route and making payments to cover edge prices. Our focus is on the existence and computation of market equilibrium, where challenges arise from the interdependence between coalition formation among strategic agents with heterogeneous preferences and route selection that induces a network flow under integral capacity constraints. To address this interplay between coalition formation and network capacity utilization, we introduce a novel approach based on combinatorial auction theory and network flow theory. We establish sufficient conditions on the network topology and agents' preferences that guarantee both the existence and polynomial-time computation of a market equilibrium. Additionally, we identify a particular market equilibrium that maximizes utilities for all agents and is equivalent to the classical Vickrey-Clarke-Groves mechanism. Furthermore, we extend our results to multi-period settings and general networks, showing that when the sufficient conditions are not met, an equilibrium may still exist but requires more complex, path-based pricing mechanisms that set differentiated prices based on agents' preference parameters.
</description>
<pubDate>Fri, 21 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164339</guid>
<dc:date>2025-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Forage: Understanding RAG-based Sensemaking for Community Conversations</title>
<link>https://hdl.handle.net/1721.1/164338</link>
<description>Forage: Understanding RAG-based Sensemaking for Community Conversations
Schroeder, Hope; Beeferman, Doug; Detwiller, Maya; Dimitrakopoulou, Dimitra; Roy, Deb
We introduce Forage, a RAG-based and LLM-augmented search engine, which we apply to the problem of sensemaking for community conversation data. We report on formative user studies introducing Forage to two distinct user groups: NPR journalists and municipal staff in the city of Durham, North Carolina. We taxonomize the query types users pose with the tool, with use cases that include synthesizing insights across conversations and finding content about a particular subject. We find that users tend to gravitate towards using the system for synthesis more than for pure search. We report on challenges and opportunities surfaced by performing sensemaking with an open-ended interface like Forage, such as the benefits of finding content quickly, but also the challenges users face interacting with a system in natural language. Insights from this formative study confirm the usefulness of Forage for sensemaking, but also point to urgent follow-up work, such as systematically evaluating system performance and developing appropriate designs.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164338</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Generative artificial intelligence in supply chain and operations management: a capability-based framework for analysis and implementation</title>
<link>https://hdl.handle.net/1721.1/164337</link>
<description>Generative artificial intelligence in supply chain and operations management: a capability-based framework for analysis and implementation
Jackson, Ilya; Ivanov, Dmitry; Dolgui, Alexandre; Namdar, Jafar
This research examines the transformative potential of artificial intelligence (AI) in general and Generative AI (GAI) in particular in supply chain and operations management (SCOM). Through the lens of the resource-based view and based on key AI capabilities such as learning, perception, prediction, interaction, adaptation, and reasoning, we explore how AI and GAI can impact 13 distinct SCOM decision-making areas. These areas include but are not limited to demand forecasting, inventory management, supply chain design, and risk management. With its outcomes, this study provides a comprehensive understanding of AI and GAI's functionality and applications in the SCOM context, offering a practical framework for both practitioners and researchers. The proposed framework systematically identifies where and how AI and GAI can be applied in SCOM, focussing on decision-making enhancement, process optimisation, investment prioritisation, and skills development. Managers can use it as a guidance to evaluate their operational processes and identify areas where AI and GAI can deliver improved efficiency, accuracy, resilience, and overall effectiveness. The research underscores that AI and GAI, with their multifaceted capabilities and applications, open a revolutionary potential and substantial implications for future SCOM practices, innovations, and research.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164337</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Remote Direct Code Execution</title>
<link>https://hdl.handle.net/1721.1/164336</link>
<description>Remote Direct Code Execution
Huang, Yibo; Qiu, Yiming; Ding, Daqian; Kon, Patrick Tser Jern; Zhang, Yiwen; Mao, Yuzhou; Bhatnagar, Archit; Chowdhury, Mosharaf; Devadas, Srinivas; Xing, Jiarong; Chen, Ang
We propose remote direct code execution (RDX), which elevates the power of RDMA from memory access to code execution. We target runtime extension frameworks such as Wasm filters, BPF programs, and UDF functions, where RDX enables an agentless architecture that unlocks capabilities such as fast extension injection, update consistency guarantees, and minimal resource contention. We outline the roadmap for RDX around a new CodeFlow abstraction, encompassing programming remote extensions, exposing management stubs, remotely validating and JIT compiling code, seamlessly linking code to local context, managing remote extension state, and synchronizing code to targets. The case studies and initial results demonstrate the feasibility of RDX and its potential to spark the next wave of RDMA innovations.
HotNets ’25, College Park, MD, USA
</description>
<pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164336</guid>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>User Adoption of Intelligent Environments: A Review of Technology Adoption Models, Challenges, and Prospects</title>
<link>https://hdl.handle.net/1721.1/164335</link>
<description>User Adoption of Intelligent Environments: A Review of Technology Adoption Models, Challenges, and Prospects
FakhrHosseini, Shabnam; Chan, Kathryn; Lee, Chaiwoo; Jeon, Myounghoon; Son, Heesuk; Rudnik, John; Coughlin, Joseph
Recent technological advancements have enabled the development of smarter (more automated) and more intelligent (adaptable) environments. To understand what factors lead users to reject or adopt Intelligent Environments (IEs), we reviewed nine prominent technology adoption theories. We conducted a literature review to investigate the acceptance and adoption of different types of IEs. We found that perceived usefulness, ease of use, perceived control or self-efficacy, affect and enjoyment, and perceived risks are the common factors across the studies explaining the adoption of IEs. However, shortcomings in the design and methods of the reviewed studies present major concerns in the generalizability and application of existing theories to emerging IEs. We identify eight lacunae in the existing literature and propose a new conceptual model for explaining the adoption of IEs. Through this study, we contribute to the formulation of the theoretical background for the successful introduction of IEs and their integration into users’ everyday life.
</description>
<pubDate>Fri, 16 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164335</guid>
<dc:date>2024-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Safe and Secure Control of Connected and Automated Vehicles: An Event-Triggered Control Approach using Trust-Aware Robust Control Barrier Functions</title>
<link>https://hdl.handle.net/1721.1/164334</link>
<description>Safe and Secure Control of Connected and Automated Vehicles: An Event-Triggered Control Approach using Trust-Aware Robust Control Barrier Functions
Ahmad, H M SABBIR; Sabouni, Ehsan; Xiao, Wei; Cassandras, Christos; Li, Wenchao
We address the security of a network of Connected and Automated Vehicles (CAVs) cooperating to safely navigate through a conflict area (e.g., traffic intersections, merging roadways, roundabouts). Previous studies have shown that such a network can be targeted by adversarial attacks causing traffic jams or safety violations resulting in collisions. We focus on attacks targeting the V2X communication network used to share vehicle data and consider as well uncertainties due to noise in sensor measurements and communication channels. To combat these, motivated by recent work on the safe control of CAVs, we propose a trust-aware robust event-triggered decentralized control and coordination framework that can provably guarantee safety. We maintain a trust metric for each vehicle in the network computed based on their behavior and used to balance the tradeoff between conservativeness (when deeming every vehicle as untrustworthy) while guaranteeing safety and performance. It is important to highlight that our framework is invariant to the specific choice of the trust framework. Based on this framework, we propose an attack detection and mitigation scheme which has twofold benefits: (i) the trust framework is immune to false positives, and (ii) it provably guarantees safety against false positive cases which may arise from a poor choice of trust framework. We use extensive simulations in SUMO and CARLA to validate the theoretical guarantees and demonstrate the efficacy of our proposed scheme to detect and mitigate adversarial attacks. The code for the simulated scenarios can be found at https://github.com/SabbirAhmad26/Trust_based_CBF.
</description>
<pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164334</guid>
<dc:date>2025-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Three-dimensional, soft magnetic-cored solenoids via multi-material extrusion</title>
<link>https://hdl.handle.net/1721.1/164333</link>
<description>Three-dimensional, soft magnetic-cored solenoids via multi-material extrusion
Cañada, Jorge; Kim, Hyeonseok; Velásquez-García, Luis Fernando
This study reports fully 3D-printed, three-dimensional, soft magnetic-cored solenoids that generate three times the largest magnetic fields previously reported from 3D-printed solenoids. The devices are fabricated on a customised, multi-material 3D printer that can extrude both filaments and pellets. Three different kinds of materials are employed to manufacture the reported soft magnetic-cored solenoids: pure PLA (dielectric portions), PLA doped with copper particles (electrically conductive structures), and nylon or PLA doped with metallic particles (soft magnetic cores). Via manufacturing optimisation, the reported devices are 33% smaller and can withstand about twice the current, generating three times more magnetic field. The 3D-printed solenoids generate Gauss-level magnetic fields while drawing tens-of-milliamps currents and can be readily used to implement fully 3D-printed induction sensors. The results of this work extend the state of the art in 3D-printed electronics, enabling the creation of more complex and capable solenoids for in-situ manufactured and in-space manufactured electromagnetic systems.
</description>
<pubDate>Tue, 20 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164333</guid>
<dc:date>2024-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>The Churns and Turns of HCI: Which CHI Papers Make the Most Impact in an Ever-growing Sea of HCI Publications</title>
<link>https://hdl.handle.net/1721.1/164331</link>
<description>The Churns and Turns of HCI: Which CHI Papers Make the Most Impact in an Ever-growing Sea of HCI Publications
Kaltenhauser, Annika; Schöning, Johannes; Churchill, Elizabeth; Ishii, Hiroshi; Mekler, Elisa; Shneiderman, Ben
The ACM Conference on Human Factors in Computing Systems (CHI) is the premier venue for research in Human-Computer Interaction (HCI). 11,290 full papers have been published and collectively cited almost one million times. Highly cited papers undoubtedly represent influential work, affecting the creation of review standards and conference submission and acceptance practices within and beyond CHI. However, the factors contributing to high citation counts and what constitutes a highly cited CHI paper remain largely unclear. In this panel discussion, we will engage the CHI community in exploring the relationship between paper characteristics, citation numbers, and effective impact on HCI as a discipline, and on HCI as an influential endeavour in technology design and development. To ground this discussion, we present findings from a literature review of the 100 most cited CHI full papers, looking at past and present fields and subfields of influence. We will also share insights from HCI experts. Our goals are to shed light on the meaning of impactful work at CHI and in HCI more broadly, to reflect on key trends in HCI over the years, and to discuss themes that have driven pivotal shifts in HCI research. We will lead the conversation toward a deeper understanding of citation practices, the role of citations in focusing and driving HCI research, and the implications of citation when it comes to shaping what is considered impactful HCI.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164331</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Safeguards and Security for High-Burnup TRISO Pebble Bed Spent Fuel and Reactors</title>
<link>https://hdl.handle.net/1721.1/164332</link>
<description>Safeguards and Security for High-Burnup TRISO Pebble Bed Spent Fuel and Reactors
Forsberg, Charles; Kadak, Andrew
Several high-temperature thermal neutron–spectrum pebble bed reactors are being commercialized. China has started up two helium-cooled pebble bed high-temperature reactors. In the United States, the X-Energy helium-cooled and the Kairos Power salt-cooled pebble bed high-temperature reactors will produce spent nuclear fuel (SNF) with burnups exceeding 150 000 MWd per tonne. The reactor fuel in each case consists of small spherical graphite pebbles (4 to 6 cm in diameter) containing thousands of small TRISO (microspheric tri-structural isotropic) fuel particles embedded in the fuel zone of these pebbles. The unique isotopic, chemical, and physical characteristics of this high-burnup SNF create a technical case to eliminate safeguards based on the low risk for use in nuclear weapons, while maintaining safeguards in terms of risk for use in radiological weapons. These safeguards could be reduced to the simple counting and monitoring of pebbles in storage. Alternatively, there is the option to create a special category with reduced requirements for this SNF in storage, transport, and disposal. No safeguards would be required for a repository with only this type of SNF. Reactor safeguards are required for fresh fuel, partly burnt fuel, and to identify unconventional pebbles with depleted uranium or other materials that might be used to create weapons-useable materials.
</description>
<pubDate>Fri, 02 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164332</guid>
<dc:date>2024-08-02T00:00:00Z</dc:date>
</item>
<item>
<title>Constrained Tabular Diffusion for Finance</title>
<link>https://hdl.handle.net/1721.1/164330</link>
<description>Constrained Tabular Diffusion for Finance
Cardei, Michael; Munoz, Jose; Barrera, Oscar; Chandrahas, Shreyas; Saha, Partha
Generative models in finance face the dual challenge of producing realistic data while satisfying strict regulatory and economic objectives, a requirement that standard tabular diffusion models cannot provide. To address this difficulty, we introduce Constrained Tabular Diffusion for Finance (CTDF), a novel integration of sampling-time feasibility operations with mixed-type tabular diffusion in financial applications. By incorporating a training-free feasibility operator into the reverse‑diffusion sampling loop, CTDF enforces hard constraints for applications such as simulation, legal compliance, and extrapolation. Extensive experiments on large-scale financial datasets demonstrate zero constraint violations and improvement in scarce data utility. CTDF establishes a robust method for generating trustworthy and compliant synthetic data, opening new avenues for rigorous generative modeling and analysis in the financial domain.
6th ACM International Conference on AI in Finance (ICAIF ’25), November 15–18, 2025, Singapore, Singapore
</description>
<pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164330</guid>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Bayesian sampling framework for constrained optimisation of build layouts in additive manufacturing</title>
<link>https://hdl.handle.net/1721.1/164329</link>
<description>A Bayesian sampling framework for constrained optimisation of build layouts in additive manufacturing
Kim, Suh In; Gee, Kaitlyn; Hart, A John
In additive manufacturing processes such as laser powder bed fusion, the build orientation and packing of components affect the required support structures, the number of parts in each build, and the surface roughness of the printed parts, among other factors. Maximising the packing density while minimising the build height can increase effective machine utilisation and decrease per-part cost. Yet, the build layout optimisation problem is highly nonlinear and difficult to solve using human intuition, so a systematic algorithm approach is required. Here, we present and demonstrate a voxel-based analysis method with Bayesian optimisation for determining component build orientation in additive manufacturing. We introduce selected case studies incorporating exemplary process attributes of laser powder bed fusion, including the determination of orientation and packing configurations based on support removal and tool-accessibility constraints.
</description>
<pubDate>Sat, 17 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164329</guid>
<dc:date>2024-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>A rapid simple point-of-care assay for the detection of SARS-CoV-2 neutralizing antibodies</title>
<link>https://hdl.handle.net/1721.1/164328</link>
<description>A rapid simple point-of-care assay for the detection of SARS-CoV-2 neutralizing antibodies
Kongsuphol, Patthara; Jia, Huan; Cheng, Hoi Lok; Gu, Yue; Shunmuganathan, Bhuvaneshwari DO; Chen, Ming Wei; Lim, Sing Mei; Ng, Say Yong; Tambyah, Paul Ananth; Nasir, Haziq; Gao, Xiaohong; Tay, Dousabel; Kim, Seunghyeon; Gupta, Rashi; Qian, Xinlei; Kozma, Mary M; Purushotorman, Kiren; McBee, Megan E; MacAry, Paul A; Sikes, Hadley D; Preiser, Peter R
Background: Neutralizing antibodies (NAbs) prevent pathogens from infecting host cells. Detection of SARS-CoV-2 NAbs is critical to evaluate herd immunity and monitor vaccine efficacy against SARS-CoV-2, the virus that causes COVID-19. All currently available NAb tests are lab-based and time-intensive.
Method: We develop a 10 min cellulose pull-down test to detect NAbs against SARS-CoV-2 from human plasma. The test evaluates the ability of antibodies to disrupt ACE2 receptor–RBD complex formation. The simple, portable, and rapid testing process relies on two key technologies: (i) the vertical-flow paper-based assay format and (ii) the rapid interaction of cellulose binding domain to cellulose paper.
Results: Here we show the construction of a cellulose-based vertical-flow test. The developed test gives above 80% sensitivity and specificity and up to 93% accuracy as compared to two current lab-based methods using COVID-19 convalescent plasma.
Conclusions: A rapid 10 min cellulose-based test has been developed for detection of NAbs against SARS-CoV-2. The test demonstrates comparable performance to the lab-based tests and can be used at point-of-care. Importantly, the approach used for this test can be easily extended to test RBD variants or to evaluate NAbs against other pathogens.
</description>
<pubDate>Thu, 11 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164328</guid>
<dc:date>2021-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Developing a SARS-CoV-2 Antigen Test Using Engineered Affinity Proteins</title>
<link>https://hdl.handle.net/1721.1/164327</link>
<description>Developing a SARS-CoV-2 Antigen Test Using Engineered Affinity Proteins
Kim, Seunghyeon; Yee, Emma; Miller, Eric A; Hao, Yining; Tay, Dousabel MY; Sung, Ki-Joo; Jia, Huan; Johnson, Joseph M; Saeed, Mohsan; Mace, Charles R; Yüksel Yurt, Deniz; Sikes, Hadley D
The ongoing COVID-19 pandemic has clearly established how vital rapid, widely accessible diagnostic tests are in controlling infectious diseases and how difficult and slow it is to scale existing technologies. Here, we demonstrate the use of the rapid affinity pair identification via directed selection (RAPIDS) method to discover multiple affinity pairs for SARS-CoV-2 nucleocapsid protein (N-protein), a biomarker of COVID-19, from in vitro libraries in 10 weeks. The pair with the highest biomarker sensitivity was then integrated into a 10 min, vertical-flow cellulose paper test. Notably, the as-identified affinity proteins were compatible with a roll-to-roll printing process for large-scale manufacturing of tests. The test achieved 40 and 80 pM limits of detection in 1× phosphate-buffered saline (mock swab) and saliva matrices spiked with cell-culture-generated SARS-CoV-2 viruses and is also capable of detection of N-protein from characterized clinical swab samples. Hence, this work paves the way toward the mass production of cellulose paper-based assays which can address the shortages faced due to dependence on nitrocellulose and current manufacturing techniques. Further, the results reported herein indicate the promise of RAPIDS and engineered binder proteins for the timely and flexible development of clinically relevant diagnostic tests in response to emerging infectious diseases.
</description>
<pubDate>Wed, 11 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164327</guid>
<dc:date>2021-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>Archaeology of Self: Reflexivity in Data Activism to Address Systemic Injustices</title>
<link>https://hdl.handle.net/1721.1/164326</link>
<description>Archaeology of Self: Reflexivity in Data Activism to Address Systemic Injustices
Walker, Raechel; Cruse, Brady; Cora, Aisha; Rogers, Kantwon; D'Ignazio, Catherine; Brion-Meisels, Gretchen; Breazeal, Cynthia
Traditional data science education often neglects the importance of identity and sociopolitical context—especially for African American students whose lived experiences and cultural insights are essential for building justice centered technologies. This paper presents findings from the Data Activism Program, which integrated Dr. Yolanda Sealey-Ruiz’s Archaeology of Self™ framework to foster critical self-reflection and racial identity development among African American high school and college students. Through technical training in data science, art-based learning, and partnerships with social justice organizations, students engaged in reflexive practices that positioned them as active agents in challenging systemic oppression. Interviews reveal that the Archaeology of Self™ deepened students’ reflexivity skills and strengthened their sound racial identity, enabling them to interrogate bias within themselves and the data science process. We argue that embedding frameworks such as the Archaeology of Self™ into algorithmic design offers a concrete, transferable method for operationalizing reflexivity in data science and AI. This study contributes to the AI and data science community by offering actionable strategies to center identity and power in AI development.
EAAMO ’25, Pittsburgh, PA, USA
</description>
<pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164326</guid>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in Financial AI: Innovations, Risk, and Responsibility in the Era of LLMs</title>
<link>https://hdl.handle.net/1721.1/164325</link>
<description>Advances in Financial AI: Innovations, Risk, and Responsibility in the Era of LLMs
Lee, Yongjae; Mehrasa, Nazanin; Choi, Chanyeol; Chen, Chung-Chi; Mehta, Dhagash; Zohren, Stefan; Kim, Yoon; Lee, Chulheum; Lee, Yeonhee; Oh, Eunsook
The finance sector is seeing a rapid increase in the application of machine learning and AI, with Large Language Models (LLMs), ESG (Environmental, Social, and Governance) investing, and AI Safety significantly reshaping the field. This workshop focuses on how these advancements intersect with core financial AI applications. We will foster interdisciplinary discussion on applying LLMs to finance, addressing challenges in multilingual and non-English markets like Korea. The event will also highlight the integration of ESG signals into algorithmic decision-making and explore AI Safety, emphasizing reliability, fairness, and explainability for AI systems in regulated financial environments. By bringing together experts from academia, industry, and regulatory bodies, the workshop aims to stimulate discussions on practical issues, ethical dilemmas, and cutting-edge research shaping financial AI's future. We welcome submissions that combine technical rigor with societal relevance in AI-driven financial decisions.
CIKM ’25, Seoul, Republic of Korea
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164325</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Fabrication of Hybrid Functional Identities for Mechanical Elements</title>
<link>https://hdl.handle.net/1721.1/164324</link>
<description>Design and Fabrication of Hybrid Functional Identities for Mechanical Elements
AlAlawi, Marwa
My PhD research explores the simultaneous integration of mechanical and electrical functionalities in mechanical components such as gears, linkages, and springs, which I define as "hybrid functional identities." The focus is on transforming these components into non-intrusive sensors and active elements that maintain structural integrity while providing electrical capabilities like sensing, energy harvesting, and communication. I establish a framework for hybrid functional identities by examining common mechanical elements and their associated motions—rotational, linear, and reciprocal—along with force-based interactions like stretching, compression, and torsion. This analysis identifies essential electrical functionalities that complement these mechanical behaviors. Building on this foundation, I investigate modular mechanical building blocks that support diverse mechanical and electrical interaction primitives using a unified geometric structure. Ultimately, I aim to create an interconnected system where hybrid mechanical-electrical components function autonomously and communicate through an embedded wireless network.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164324</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting through Comics: Design and Evaluation of Cube, an Arts-Based Digital Platform for Trauma-Impacted Youth</title>
<link>https://hdl.handle.net/1721.1/164323</link>
<description>Connecting through Comics: Design and Evaluation of Cube, an Arts-Based Digital Platform for Trauma-Impacted Youth
Kumar, Ila; Shen, Jocelyn; Ferguson, Craig; Picard, Rosalind
This paper explores the design, development and evaluation of a digital platform that aims to assist young people who have experienced trauma in understanding and expressing their emotions and fostering social connections. Integrating principles from expressive arts and narrative-based therapies, we collaborate with lived experts to iteratively design a novel, user-centered digital tool for young people to create and share comics that represent their experiences. Specifically, we conduct a series of nine workshops with N=54 trauma-impacted youth and young adults to test and refine our tool, beginning with three workshops using low-fidelity prototypes, followed by six workshops with Cube, a web version of the tool. A qualitative analysis of workshop feedback and empathic relations analysis of artifacts provides valuable insights into the usability and potential impact of the tool, as well as the specific needs of young people who have experienced trauma. Our findings suggest that the integration of expressive and narrative therapy principles into Cube can offer a unique avenue for trauma-impacted young people to process their experiences, more easily communicate their emotions, and connect with supportive communities. We end by presenting implications for the design of social technologies that aim to support the emotional well-being and social integration of youth and young adults who have faced trauma.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164323</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Structure of Cross-National Collaboration in Open-Source Software Development</title>
<link>https://hdl.handle.net/1721.1/164322</link>
<description>The Structure of Cross-National Collaboration in Open-Source Software Development
Xu, Henry; Yu, Katy; He, Hao; Fang, Hongbo; Vasilescu, Bogdan; Park, Patrick
Open-source software (OSS) development platforms, such as GitHub, expand the potential for cross-national collaboration among developers by lowering the geographic, temporal, and coordination barriers that limited software innovation in the past. However, research has shown that the technological affordances that facilitate cross-national collaboration do not uniformly benefit all countries. Using the GitHub Innovation Graph dataset, which aggregates the complete cross-country collaborations among the entire population of GitHub developers, we present quantitative evidence of deep-seated religious and cultural affinities, shared colonial histories, and geopolitical factors structuring the collaborations between non-U.S. country pairs that become visible when the overarching dominance of the U.S. is removed from the data. This study highlights the opportunities to develop decentralizing strategies to facilitate new collaborations between developers in non-U.S. countries, thereby fostering the development of novel, innovative solutions. More generally, this study also underscores the importance of contextualizing user behavior and knowledge management in information systems with long-term, macro-social conditions in which these systems are inextricably embedded.
CIKM ’25, Seoul, Republic of Korea
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164322</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Partition–diffusion–reaction bounds for thin-film membrane formation kinetics</title>
<link>https://hdl.handle.net/1721.1/164321</link>
<description>Partition–diffusion–reaction bounds for thin-film membrane formation kinetics
Deshmukh, Akshay; Elimelech, Menachem; Lienhard, John H.
New membrane chemistries and structures have rapidly developed over the last ten years, driven by applications ranging from critical metals separations and carbon capture to highly chlorine-resistant reverse-osmosis membranes. The thin selective layer at the heart of reverse osmosis and nanofiltration membranes is typically fabricated using interfacial synthesis, with multifunctional aqueous-phase monomers and organic-phase monomers. Here, we develop a physics-based model of partition, diffusion, and reaction dynamics during the early stages of interfacial synthesis. These processes critically impact membrane structure and performance. By solving the resulting partial differential equations numerically and with analytical approximations, we demonstrate that the planar reaction rate is initially limited by the partitioning and diffusion of the aqueous-phase reactant into the organic phase. Later, finite reactant availability and aqueous-phase diffusion become limiting. Through a combination of nondimensionalization, parameter mapping, and property prediction, we develop a framework that spans a wide parameter space in reactant chemistry, solvent and support layer choice, and initial reactant concentrations. We demonstrate that the planar reaction rate and dynamics are strongly affected by the partition coefficient of the aqueous reactant, which varies rapidly with changes in reactant and solvent chemistry. The influence of diffusion variations is more limited. This tractable, physics-based model enables the rapid quantification of monomer and solvent impact on interfacial synthesis, which is essential for the rational development of new high-performance thin-film composite membranes.
</description>
<pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164321</guid>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Finger stick blood test to assess postvaccination SARS-CoV-2 neutralizing antibody response against variants</title>
<link>https://hdl.handle.net/1721.1/164320</link>
<description>Finger stick blood test to assess postvaccination SARS-CoV-2 neutralizing antibody response against variants
Lim, Sing Mei; Cheng, Hoi Lok; Jia, Huan; Kongsuphol, Patthara; D/O Shunmuganathan, Bhuvaneshwari; Chen, Ming Wei; Ng, Say Yong; Gao, Xiaohong; Turaga, Shuvan Prashant; Heussler, Sascha P; Somani, Jyoti; Sengupta, Sharmila; Tay, Dousabel MY; McBee, Megan E; Young, Barnaby E; MacAry, Paul A; Sikes, Hadley D; Preiser, Peter R
There is clinical need for a quantifiable point-of-care (PoC) SARS-CoV-2 neutralizing antibody (nAb) test that is adaptable with the pandemic's changing landscape. Here, we present a rapid and semi-quantitative nAb test that uses finger stick or venous blood to assess the nAb response of vaccinated population against wild-type (WT), alpha, beta, gamma, and delta variant RBDs. It captures a clinically relevant range of nAb levels, and effectively differentiates prevaccination, post first dose, and post second dose vaccination samples within 10 min. The data observed against alpha, beta, gamma, and delta variants agrees with published results evaluated in established serology tests. Finally, our test revealed a substantial reduction in nAb level for beta, gamma, and delta variants between early BNT162b2 vaccination group (within 3 months) and later vaccination group (post 3 months). This test is highly suited for PoC settings and provides an insightful nAb response in a postvaccinated population.
</description>
<pubDate>Sat, 22 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164320</guid>
<dc:date>2022-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid Evaluation of Vaccine Booster Effectiveness against SARS-CoV-2 Variants</title>
<link>https://hdl.handle.net/1721.1/164319</link>
<description>Rapid Evaluation of Vaccine Booster Effectiveness against SARS-CoV-2 Variants
Cheng, Hoi Lok; Lim, Sing Mei; Jia, Huan; Chen, Ming Wei; Ng, Say Yong; Gao, Xiaohong; Somani, Jyoti; Sengupta, Sharmila; Tay, Dousabel MY; Chua, Patrina WL; R., Abirami; Ling, Sharon YH; McBee, Megan E; Young, Barnaby E; Sikes, Hadley D; Preiser, Peter R
As the COVID-19 pandemic continues, countries around the world are switching toward vaccinations and boosters to combat the pandemic. However, waning immunity against SARS-CoV-2 wild-type (WT) and variants have been widely reported. Booster vaccinations have shown to be able to increase immunological protection against new variants; however, the protection observed appears to decrease quickly over time suggesting a second booster shot may be appropriate. Moreover, heterogeneity and waning of the immune response at the individual level was observed suggesting a more personalized vaccination approach should be considered. To evaluate such a personalized strategy, it is important to have the ability to rapidly evaluate the level of neutralizing antibody (nAbs) response against variants at the individual level and ideally at a point of care setting. Here, we applied the recently developed cellulose pulled-down virus neutralization test (cpVNT) to rapidly assess individual nAb levels to WT and variants of concerns in response to booster vaccination. Our findings confirmed significant heterogeneity of nAb responses against a panel of SARS-CoV-2 variants, and indicated a strong increase in nAb response against variants of concern (VOCs) upon booster vaccination. For instance, the nAb response against current predominant omicron variant was observed with medians of 88.1% (n = 6, 95% CI = 73.2% to 96.2%) within 1-month postbooster and 70.7% (n = 22, 95% CI = 66.4% to 81.8%) 3 months postbooster. Our data show a point of care (POC) test focusing on nAb response levels against VOCs can guide decisions on the potential need for booster vaccinations at individual level. Importantly, it also suggests the current booster vaccines only give a transient protective response against some VOC and new more targeted formulations of a booster vaccine against specific VOC may need to be developed in the future.&#13;
IMPORTANCE Vaccination against SARS-CoV-2 induces protection through the production of neutralizing antibodies (nAbs). The nAb level is a major indicator of immunity against SARS-CoV-2 infection. We developed a rapid point-of-care test that can monitor the nAb level from a drop of finger-stick blood. Here, we implemented the test to monitor individual nAb levels against wild-type and variant SARS-CoV-2 at various vaccination time points, including post-second-dose and postbooster vaccination. Wide diversity in nAb levels was observed among individuals, as was an increase in nAb levels, especially against the Omicron variant, after booster vaccination. This study evaluated the performance of this point-of-care test for personalized nAb response tracking. It verifies the potential of using a rapid nAb test to guide future vaccination regimens at both the individual and population levels.
</description>
<pubDate>Wed, 07 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164319</guid>
<dc:date>2022-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>Tumor-localized catalases can fail to alter tumor growth and transcriptional profiles in subcutaneous syngeneic mouse tumor models</title>
<link>https://hdl.handle.net/1721.1/164318</link>
<description>Tumor-localized catalases can fail to alter tumor growth and transcriptional profiles in subcutaneous syngeneic mouse tumor models
Sheen, Allison; Agarwal, Yash; Cheah, Keith M; Cowles, Sarah C; Stinson, Jordan A; Palmeri, Joseph R; Sikes, Hadley D; Wittrup, K Dane
Catalase is an antioxidant enzyme that catalyzes the rapid conversion of hydrogen peroxide to water and oxygen. Use of catalase as a cancer therapeutic has been proposed to reduce oxidative stress and hypoxia in the tumor microenvironment, both of which are hypothesized to reduce tumor growth. Furthermore, exposing murine tumors to exogenous catalase was previously reported to have therapeutic benefit. We studied the therapeutic effect of tumor-localized catalases with the aim of further elucidating the mechanism of action. To do this, we engineered two approaches to maximize intratumoral catalase exposure: 1) an injected extracellular catalase with enhanced tumor retention, and 2) tumor cell lines that over-express intracellular catalase. Both approaches were characterized for functionality and tested for therapeutic efficacy and mechanism in 4T1 and CT26 murine syngeneic tumor models. The injected catalase was confirmed to have enzyme activity &gt;30,000 U/mg and was retained at the injection site for more than one week in vivo. The engineered cell lines exhibited increased catalase activity and antioxidant capacity, with catalase over-expression that was maintained for at least one week after gene expression was induced in vivo. We did not observe a significant difference in tumor growth or survival between catalase-treated and untreated mice when either approach was used. Finally, bulk RNA sequencing of tumors was performed, comparing the gene expression of catalase-treated and untreated tumors. Gene expression analysis revealed very few differentially expressed genes as a result of exposure to catalase and, notably, we did not observe changes consistent with an altered state of hypoxia or oxidative stress.
In conclusion, we observe that sustained intratumoral catalase neither has therapeutic benefit nor triggers significant differential expression of genes associated with the anticipated therapeutic mechanism in the subcutaneous syngeneic tumor models used. Given the lack of effect observed, we propose that further development of catalase as a cancer therapeutic should take these findings into consideration.
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164318</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optical Detection of Interleukin-6 Using Liquid Janus Emulsions Using Hyperthermophilic Affinity Proteins</title>
<link>https://hdl.handle.net/1721.1/164317</link>
<description>Optical Detection of Interleukin-6 Using Liquid Janus Emulsions Using Hyperthermophilic Affinity Proteins
Chen, Michelle; Corless, Elliot I; Engelward, Bevin P; Swager, Timothy M; Sikes, Hadley D.
When equal volumes of two immiscible liquids are mixed (e.g., a hydrocarbon and a fluorocarbon), Janus droplets can form in an aqueous solution. In a gravity-aligned Janus droplet, the boundary between the two phases is flat and thus optically transparent when viewed from above. When tipped due to interactions with an analyte (i.e., agglutination), the resulting change in refraction and reflection yields an optical signal that can be detected and quantified. This study reports the detection and quantitation of interleukin-6 (IL-6) using emulsions functionalized at the hydrocarbon:aqueous interface with engineered proteins that bind IL-6 with high affinity and specificity. Hyperthermophilic affinity proteins (rcSso7d) are derived from thermophiles, giving them excellent thermal stability. Two rcSso7d affinity protein variants were synthesized with a noncanonical azide-functionalized amino acid to enable click chemistry to novel polymeric anchors embedded in the hydrocarbon phase. The two binding proteins recognize different epitopes, enabling the detection of both monomeric and dimeric IL-6 via agglutination. It is noteworthy that the rcSso7d protein variants, in addition to having superior thermal stability and facile recombinant synthesis in &lt;i&gt;E. coli&lt;/i&gt;, show superior performance when compared to commercial antibodies for IL-6.
</description>
<pubDate>Thu, 22 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164317</guid>
<dc:date>2024-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Point-of-need diagnostics in a post-Covid world: an opportunity for paper-based microfluidics to serve during syndemics</title>
<link>https://hdl.handle.net/1721.1/164316</link>
<description>Point-of-need diagnostics in a post-Covid world: an opportunity for paper-based microfluidics to serve during syndemics
Tsaloglou, Maria-Nefeli; Christodouleas, Dionysios C; Milette, Jonathan; Milkey, Kendall; Romine, Isabelle C; Im, Judy; Lathwal, Shefali; Selvam, Duraipandian Thava; Sikes, Hadley D; Whitesides, George M
Zoonotic outbreaks present unpredictable threats to human health, food production, biodiversity, and national security, and disrupt the global economy. The COVID-19 pandemic—caused by the zoonotic coronavirus SARS-CoV-2—is the most recent upsurge in an increasing trend of outbreaks over the past 100 years. This year, the emergence of avian influenza (H5N1) is a stark reminder of the need for national and international pandemic preparedness. Tools for threat reduction include consistent practices in reporting pandemics and widespread availability of accurate detection technologies. Wars and extreme climate events redouble the need for fast, adaptable and affordable diagnostics at the point of need. During the recent pandemic, rapid home tests for SARS-CoV-2 proved to be a viable functional model that leverages simplicity. In this perspective, we introduce the concept of syndemicity in the context of infectious diseases and point-of-need healthcare diagnostics. We also provide a brief state of the art for paper-based microfluidics. We illustrate our arguments with a case study on detecting brucellosis in cows. Finally, we conclude with lessons learned, challenges and opportunities for paper-based microfluidics to serve point-of-need healthcare diagnostics during syndemics.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164316</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Faster search for tensor decomposition over finite fields</title>
<link>https://hdl.handle.net/1721.1/164315</link>
<description>Faster search for tensor decomposition over finite fields
Yang, Jason
We present an O*(|F|^(min{R, Σ_{d≥2} n_d} + (R − n_0)(Σ_{d≠0} n_d)))-time algorithm for determining whether the rank of a concise tensor T ∈ F^(n_0 × ··· × n_{D−1}) is ≤ R, assuming n_0 ≥ ··· ≥ n_{D−1} and R ≥ n_0. For 3-dimensional tensors, we have a second algorithm running in O*(|F|^(n_0 + n_2 + (R − n_0 + 1 − r_*)(n_1 + n_2) + r_*^2)) time, where r_* := ⌊R/n_0⌋ + 1. Both algorithms use polynomial space and improve on our previous work, which achieved running time O*(|F|^(n_0 + (R − n_0)(Σ_d n_d))).
ISSAC ’25, Guanajuato, Mexico
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164315</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Human-AI Interaction for Augmented Reasoning: Improving Human Reflective and Critical Thinking with Artificial Intelligence</title>
<link>https://hdl.handle.net/1721.1/164314</link>
<description>Human-AI Interaction for Augmented Reasoning: Improving Human Reflective and Critical Thinking with Artificial Intelligence
Danry, Valdemar; Pataranutaporn, Pat; Cui, Christopher; Hung, Jui-Tse; Blanchard, Lancelot; Buçinca, Zana; Tan, Chenhao; Starner, Thad; Maes, Pattie
AI-Augmented Reasoning systems are cognitive assistants that support human reasoning by providing AI-based feedback that can help users improve their critical reasoning skills. Made possible with new techniques like argumentation mining, fact-checking, crowdsourcing, attention nudging, and large language models, AI augmented reasoning systems can provide real-time feedback on logical reasoning, help users identify and avoid flawed arguments and misinformation, suggest counter-arguments, provide evidence-based explanations, and foster deeper reflection. The goal of this workshop is to bring together researchers from AI, HCI, cognitive and social science to discuss recent advances in AI-augmented reasoning, to identify open problems in this area, and to cultivate an emerging community on this important topic.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164314</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly</title>
<link>https://hdl.handle.net/1721.1/164313</link>
<description>Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly
Kyaw, Alexander Htet; Smith, Miana; Jeon, Se Hwan; Gershenfeld, Neil
We present a system that transforms speech into physical objects using 3D generative AI and discrete robotic assembly. By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. While generative AI models can produce a wide range of 3D meshes, AI-generated meshes are not directly suitable for robotic assembly and do not account for fabrication constraints. To address this, we contribute a workflow that integrates natural language, 3D generative AI, geometric processing, and discrete robotic assembly. The system discretizes the AI-generated geometry and modifies it to meet fabrication constraints such as component count, overhangs, and connectivity to ensure feasible physical assembly. The results are demonstrated through the assembly of various objects, ranging from chairs to shelves, which are prompted via speech and realized within 5 minutes using a robotic arm.
SCF ’25, Cambridge, MA, USA
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164313</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>MechStyle: Augmenting Generative AI with Mechanical Simulation to Create Stylized and Structurally Viable 3D Models</title>
<link>https://hdl.handle.net/1721.1/164312</link>
<description>MechStyle: Augmenting Generative AI with Mechanical Simulation to Create Stylized and Structurally Viable 3D Models
Faruqi, Faraz; Abdel-Rahman, Amira; Tejedor, Leandra; Nisser, Martin; Li, Jiaji; Phadnis, Vrushank; Jampani, Varun; Gershenfeld, Neil; Hofmann, Megan; Mueller, Stefanie
Recent developments in Generative AI enable creators to stylize 3D models based on text prompts. These methods change the 3D model geometry, which can compromise the model’s structural integrity once fabricated. We present MechStyle, a system that enables creators to stylize 3D printable models while preserving their structural integrity. MechStyle accomplishes this by augmenting the Generative AI-based stylization process with feedback from a Finite Element Analysis (FEA) simulation. As the stylization process modifies the geometry to approximate the desired style, feedback from the FEA simulation reduces modifications to regions with increased stress. We evaluate the effectiveness of FEA simulation feedback in the augmented stylization process by comparing three stylization control strategies. We also investigate the time efficiency of our approach by comparing three adaptive scheduling strategies. Finally, we demonstrate MechStyle’s user interface that allows users to generate stylized and structurally viable 3D models and provide five example applications.
SCF ’25, Cambridge, MA, USA
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164312</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>AI-assisted sensemaking: Human-AI collaboration for the analysis and interpretation of recorded facilitated conversations</title>
<link>https://hdl.handle.net/1721.1/164310</link>
<description>AI-assisted sensemaking: Human-AI collaboration for the analysis and interpretation of recorded facilitated conversations
Kabbara, Jad; Phan, Thanh-Mai; Rakhilin, Marina; Detwiller, Maya; Dimitrakopoulou, Dimitra; Roy, Deb
In light of growing toxic polarization and societal fragmentation often fueled by social media, we are designing alternative communication spaces we refer to as dialogue networks—networks of people engaged in recorded small-group prompted dialogue. We introduce the dialogue network framework and our use of tools powered by large language models that assist humans in the analysis and interpretation of themes and patterns across conversations which we refer to as sensemaking. We pilot case studies in collaboration with community partners using a prototype AI-assisted sensemaking tool. Insights from these pilots can inform the use of AI for human-led community engagement processes.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164310</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>HealthGenie: A Knowledge-Driven LLM Framework for Tailored Dietary Guidance</title>
<link>https://hdl.handle.net/1721.1/164309</link>
<description>HealthGenie: A Knowledge-Driven LLM Framework for Tailored Dietary Guidance
Gao, Fan; Zhao, Xinjie; Xia, Ding; Zhou, Zhongyi; Yang, Rui; Lu, Jinghui; Jiang, Hang; Park, Chanjun; Li, Irene
Seeking dietary guidance often requires navigating complex nutritional knowledge while considering individual health needs. To address this, we present HealthGenie, an interactive platform that leverages the interpretability of knowledge graphs (KGs) and the conversational power of large language models (LLMs) to deliver tailored dietary recommendations alongside integrated nutritional visualizations for fast, intuitive insights. Upon receiving a user query, HealthGenie performs intent refinement and maps the user's needs to a curated nutritional knowledge graph. The system then retrieves and visualizes relevant subgraphs while offering detailed, explainable recommendations. Users can interactively adjust preferences to further tailor the results. A within-subject study and quantitative analysis show that HealthGenie reduces cognitive load and interaction effort while supporting personalized, health-aware decision-making.
CIKM ’25, Seoul, Republic of Korea
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164309</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>TinkerXR: In-Situ, Reality-Aware CAD and 3D Printing Interface for Novices</title>
<link>https://hdl.handle.net/1721.1/164308</link>
<description>TinkerXR: In-Situ, Reality-Aware CAD and 3D Printing Interface for Novices
Arslan, Oğuz; Akdoğan, Artun; Dogan, Mustafa Doga
Despite the growing accessibility of augmented reality (AR) for visualization, existing computer-aided design (CAD) systems remain confined to traditional screens or require complex setups or predefined parameters, limiting immersion and accessibility for novices. We present TinkerXR, an open-source AR interface enabling in-situ design and fabrication through Constructive Solid Geometry (CSG) modeling. TinkerXR operates solely with a headset and 3D printer, allowing users to design directly in and for their physical environments. By leveraging spatial awareness, depth occlusion, recognition of physical constraints, reference objects, and hand movement controls, TinkerXR enhances realism, precision, and ease of use. Its AR-based workflow integrates design and 3D printing with a drag-and-drop interface for printers’ virtual twins.&#13;
A user study comparing TinkerXR with Tinkercad shows that TinkerXR offers novices higher accessibility, engagement, and ease of use. Participants highlighted how designing directly in physical space made the process more intuitive. By bridging the gap between digital creation and physical output, TinkerXR aims to transform everyday spaces into expressive creative studios. We release TinkerXR as open source to encourage further exploration of accessible, spatially grounded CAD tools.
SCF ’25, Cambridge, MA, USA
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164308</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchical Discrete Lattice Assembly: An Approach for the Digital Fabrication of Scalable Macroscale Structures</title>
<link>https://hdl.handle.net/1721.1/164307</link>
<description>Hierarchical Discrete Lattice Assembly: An Approach for the Digital Fabrication of Scalable Macroscale Structures
Smith, Miana; Richard, Paul; Kyaw, Alexander; Gershenfeld, Neil
Although digital fabrication processes at the desktop scale have become proficient and prolific, systems aimed at producing larger-scale structures are still typically complex, expensive, and unreliable. In this work, we present an approach for the fabrication of scalable macroscale structures using simple robots and interlocking lattice building blocks. A target structure is first voxelized so that it can be populated with an architected lattice. These voxels are then grouped into larger interconnected blocks, which are produced using standard digital fabrication processes, leveraging their capability to produce highly complex geometries at a small scale. These blocks, on the size scale of tens of centimeters, are then fed to mobile relative robots that are able to traverse over the structure and place new blocks to form structures on the meter scale. To facilitate the assembly of large structures, we introduce a live digital twin simulation tool for controlling and coordinating assembly robots that enables both global planning for a target structure and live user design, interaction, or intervention. To improve assembly throughput, we introduce a new modular assembly robot, designed for hierarchical voxel handling. We validate this system by demonstrating the voxelization, hierarchical blocking, path planning, and robotic fabrication of a set of meter-scale objects.
SCF ’25, Cambridge, MA, USA
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164307</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Are Crypto Ecosystems (De)centralizing? A Framework for Longitudinal Analysis</title>
<link>https://hdl.handle.net/1721.1/164306</link>
<description>Are Crypto Ecosystems (De)centralizing? A Framework for Longitudinal Analysis
Ju, Harang; Valavi, Eshan; Kumar, Madhav; Aral, Sinan
Blockchain technology relies on decentralization to resist faults and attacks while operating without trusted intermediaries. Although industry experts have touted decentralization as central to crypto's promise and disruptive potential, it is still unclear whether the crypto ecosystems built around blockchains are becoming more or less decentralized over time. As crypto plays an increasing role in facilitating economic transactions and peer-to-peer interactions, measuring the decentralization of these ecosystems becomes even more essential. We thus propose a systematic framework for measuring the decentralization of crypto ecosystems over time and compare commonly used decentralization metrics. We applied this framework to seven prominent blockchains, across five distinct subsystems, over their lifetimes of more than 15 years. Our analysis revealed that while crypto has largely become more decentralized over time, recent trends show a shift toward centralization in the consensus layer, NFT marketplaces, and developer communities. Our framework and results inform researchers, policymakers, and practitioners about the design, regulation, and implementation of crypto ecosystems and provide a systematic, replicable foundation for future studies.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164306</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Multimodal AI for Human Sensing and Interaction</title>
<link>https://hdl.handle.net/1721.1/164304</link>
<description>Multimodal AI for Human Sensing and Interaction
Liang, Paul Pu; Ahuja, Karan; Luo, Yiyue
A significant body of HCI research today focuses on applying AI to sense, learn, and interact with humans through a wide range of wearable and ubiquitous sensors. These methods typically involve learning features from multimodal sensory data using AI methods. To aid HCI researchers who want to apply AI to their sensing problems, this course will cover the fundamental challenges and approaches in multimodal AI for human sensing and interaction. It is planned for 3 parts, one given by each organizer. The first covers the foundations of multimodal AI, studying how AI systems can represent, combine, and learn information from many interconnected sensory inputs. The second part discusses the practice of multimodal AI for human sensing, covering the latest methods for cross-modal learning across diverse sensors, human-centered application domains, and real-world concerns around their usage. The final part covers the hardware, fabrication, and data collection challenges that must be tackled to deploy these multimodal AI systems in the real world. By the end of this course, attendees should understand the fundamental principles and challenges of multimodal AI, identify the right AI approaches for their problems, prototype basic hardware systems for efficient and robust sensing, be aware of real-world concerns around ethics, interpretability, and privacy, and appreciate the range of human-centered applications enabled by multimodal AI and sensing.
CHI EA ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164304</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Deep-Time Architecture: Building as Material-Event</title>
<link>https://hdl.handle.net/1721.1/164303</link>
<description>Deep-Time Architecture: Building as Material-Event
Alonso, Cristina Parreño
Despite our tendency to conceive, perceive, and represent buildings as static objects, buildings are, in their abundant reality, matter and energy in flux. As Heraclitus famously remarked in his panta rhei (πάντα ῥεῖ): “everything flows.”1 Buildings are no different, and they need to be better thought through as entities in motion. In architectural literature, many voices have challenged the prevailing notion of the building as a static object. Bruno Latour, for instance, claims that a building is rather “a moving project, and that even once it has been built, it ages, it is transformed by its users, modified by all of what happens inside and outside, and that it will pass or be renovated, adulterated and transformed beyond recognition.”2 Another attempt to express architecture’s fluidity is Bernard Tschumi’s triad, “space, event and movement,” with which he aimed to expand what constitutes building beyond a static object and form: “There is no space without event, no architecture without movement.”3 And here we must add that there is no movement without time—and further, that given enough time, even a solid-like material (think of a building here) flows.
</description>
<pubDate>Sat, 02 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164303</guid>
<dc:date>2021-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Involuntary vs. voluntary flexible work: insights for scholars and stakeholders</title>
<link>https://hdl.handle.net/1721.1/164302</link>
<description>Involuntary vs. voluntary flexible work: insights for scholars and stakeholders
Kaduk, Anne; Genadek, Katie; Kelly, Erin L; Moen, Phyllis
Building on insights from the early stages of our research partnership with a U.S. Fortune 500 organization, we came to differentiate between voluntary and involuntary schedule variability and remote work. This differentiation underscores the complexity behind flexible schedules and remote work, especially among white-collar, salaried professionals. We collected survey data among the partner firm's information technology (IT) workforce to evaluate whether these forms of flexibility had different implications for workers, as part of the larger Work, Family, and Health Network Study. We find that a significant minority of these employees report working variable schedules and working at home involuntarily. Involuntary variable schedules are associated with greater work-to-family conflict, stress, burnout, turnover intentions, and lower job satisfaction in models that adjust for personal characteristics, job, work hours, family demands, and other factors. Voluntary remote work, in contrast, is protective and more common in this professional sample. Employees working at least 20% of their hours at home and reporting moderate or high choice over where they work have lower stress and intentions to leave the firm. These findings point to the importance of both stakeholders and scholars distinguishing between voluntary and involuntary forms of flexibility, even in a relatively advantaged workforce.
</description>
<pubDate>Thu, 08 Aug 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164302</guid>
<dc:date>2019-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning demand forecasting and supply chain performance</title>
<link>https://hdl.handle.net/1721.1/164301</link>
<description>Machine learning demand forecasting and supply chain performance
Feizabadi, Javad
In many supply chains, firms in the upstream stages of the chain suffer from variance amplification caused by demand information distortion across a multi-stage supply chain and, consequently, from operational inefficiency. Prior research suggests that employing advanced demand forecasting, such as machine learning, could mitigate this effect and improve performance; however, less is known about the extent and magnitude of the savings as tangible supply chain performance outcomes. In this research, hybrid demand forecasting methods grounded in machine learning, i.e., ARIMAX and neural networks, are developed. Both time series and explanatory factors are fed into the developed methods. The methods were applied and evaluated in the context of a functional product at a steel manufacturer. Statistically significant differences in supply chain performance improvement were found between traditional and ML-based demand forecasting methods. Implications for theory and practice are also presented.
</description>
<pubDate>Tue, 04 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164301</guid>
<dc:date>2020-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>The future of sperm: a biovariability framework for understanding global sperm count trends</title>
<link>https://hdl.handle.net/1721.1/164300</link>
<description>The future of sperm: a biovariability framework for understanding global sperm count trends
Boulicault, Marion; Perret, Meg; Galka, Jonathan; Borsa, Alex; Gompers, Annika; Reiches, Meredith; Richardson, Sarah
The past 50 years have seen heated debate in the reproductive sciences about global trends in human sperm count. In 2017, Levine and colleagues published the largest and most methodologically rigorous meta-regression analysis to date and reported that average total sperm concentration among men from ‘Western’ countries has decreased by 59.3% since 1973, with no sign of halting. These results reverberated in the scientific community and in public discussions about men and masculinity in the modern world, in part because of scientists’ public-facing claims about the societal implications of the decline of male fertility. We find that existing research follows a set of implicit and explicit assumptions about how to measure and interpret sperm counts, which collectively form what we term the Sperm Count Decline hypothesis (SCD). Using the study by Levine and colleagues, we identify weaknesses and inconsistencies in the SCD, and propose an alternative framework to guide research on sperm count trends: the Sperm Count Biovariability hypothesis (SCB). SCB asserts that sperm count varies within a wide range, much of which can be considered non-pathological and species-typical. Knowledge about the relationship between individual and population sperm count and life-historical and ecological factors is critical to interpreting trends in average sperm counts and their relationships to health and fertility.
</description>
<pubDate>Mon, 10 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164300</guid>
<dc:date>2021-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating Pedestrian Flows on Street Networks</title>
<link>https://hdl.handle.net/1721.1/164299</link>
<description>Estimating Pedestrian Flows on Street Networks
Sevtsuk, Andres
City governments and planners alike commonly seek to increase pedestrian activity on city streets as part of broader sustainability, community building, and economic development strategies. Though walkability has received ample attention in planning literature, most planners still lack practical methods for predicting how development proposals could affect pedestrian activity on specific streets or public spaces at different times of the day. Cities typically require traffic impact assessments (TIAs) but not pedestrian impact assessments. In this study I present a methodology for estimating pedestrian trip generation and distribution between detailed origins and destinations in both existing and proposed built environments. Using the betweenness index from network analysis, I introduce a number of methodological improvements that allow the index to model pedestrian trips with parameters and constraints to account for pedestrian behavior in different settings. I demonstrate its application in the Kendall Square area of Cambridge (MA), where estimated foot traffic is compared during lunch and evening peak periods with observed pedestrian counts. The proposed approach can be particularly useful for TIAs, neighborhood plans, and large-scale development projects, where pedestrian flow estimates can be used to guide pedestrian infrastructure and safety improvements and public space investments or for locating pedestrian priority streets during the COVID-19 pandemic.
</description>
<pubDate>Sat, 02 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164299</guid>
<dc:date>2021-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding individuals with spinal cord injury’s self-care practices: a technology probe study to promote pressure relief adherence</title>
<link>https://hdl.handle.net/1721.1/164298</link>
<description>Understanding individuals with spinal cord injury’s self-care practices: a technology probe study to promote pressure relief adherence
Oh, Hannah Hye Yeon; Pontis, Sheila
Pressure reliefs (PRs) are self-care practices essential for individuals with spinal cord injury (SCI) to prevent life-threatening pressure injuries (PIs). Despite the benefits, individuals often do not do these exercises at home, leading to increased patient morbidity and mortality. To examine how digital technology could improve this population's adherence to PR exercises, we conducted a technology probe study with five individuals with SCI over ten consecutive business days. A chat-based intervention was created to send user-scheduled PR reminders, which were personalized with visual elements and progress trackers. Participants were interviewed before and after interacting with the probe to better understand their experiences with PIs and PR practices. Results shed light on specific factors that may impact individuals with SCI's behaviours towards PRs and four considerations to design a customisable reminder intervention: (1) easy to use and friendly technology, (2) design-your-own-schedule feature, (3) communication style feature, and (4) dialogue support features. Personalisation supported with gamified visual progress tracking and motivational messages emerged as a strong strategy to increase PR adherence. Both sets of findings expand upon the human-computer interaction (HCI) literature for mobile health tools that encourage self-care practices; in particular, to the specific needs of individuals with SCI and the use of visual elements to increase engagement.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164298</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Health and toxicity in content moderation: the discursive work of justification</title>
<link>https://hdl.handle.net/1721.1/164297</link>
<description>Health and toxicity in content moderation: the discursive work of justification
Gibson, Anna D.; Docherty, Niall; Gillespie, Tarleton
Within academia, industry, and government, the terms ‘health’ and ‘toxicity’ are widely used to describe and justify decisions around online content and its removal. However, the meanings of these terms are assumed to be self-evident and therefore are rarely examined. This article turns a critical eye to the health and toxicity metaphor to unpack its hidden political work. We trace the metaphor through three different discourses: the historical political economy of the term, the usage by cultural elites in the last two decades, and finally through its contemporary instrumental usage by volunteer content moderators on Facebook. By linking these discourses together, we argue that the metaphor of health and toxicity serves as a means for justification and legitimacy under contemporary neoliberalized orders that typically chafe at modes of public intervention and the language of democratic statecraft. Rather than elucidating the challenges of online content, we find that the metaphor often serves to obfuscate or sidestep the hardest problems in democratic governance. This analysis therefore has practical significance for researchers, policymakers, journalists, and other speakers that publicly traffic in this discourse at large.
</description>
<pubDate>Tue, 12 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164297</guid>
<dc:date>2023-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Balancing Covariates in Randomized Experiments with the Gram–Schmidt Walk Design</title>
<link>https://hdl.handle.net/1721.1/164296</link>
<description>Balancing Covariates in Randomized Experiments with the Gram–Schmidt Walk Design
Harshaw, Christopher; Sävje, Fredrik; Spielman, Daniel A; Zhang, Peng
The design of experiments involves a compromise between covariate balance and robustness. This article provides a formalization of this tradeoff and describes an experimental design that allows experimenters to navigate it. The design is specified by a robustness parameter that bounds the worst-case mean squared error of an estimator of the average treatment effect. Subject to the experimenter’s desired level of robustness, the design aims to simultaneously balance all linear functions of potentially many covariates. Less robustness allows for more balance. We show that the mean squared error of the estimator is bounded in finite samples by the minimum of the loss function of an implicit ridge regression of the potential outcomes on the covariates. Asymptotically, the design perfectly balances all linear functions of a growing number of covariates with a diminishing reduction in robustness, effectively allowing experimenters to escape the compromise between balance and robustness in large samples. Finally, we describe conditions that ensure asymptotic normality and provide a conservative variance estimator, which facilitate the construction of asymptotically valid confidence intervals. Supplementary materials for this article are available online.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164296</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>From natural language to simulations: applying AI to automate simulation modelling of logistics systems</title>
<link>https://hdl.handle.net/1721.1/164295</link>
<description>From natural language to simulations: applying AI to automate simulation modelling of logistics systems
Jackson, Ilya; Jesus Saenz, Maria; Ivanov, Dmitry
Our research strives to examine how simulation models of logistics systems can be produced automatically from verbal descriptions in natural language and how human experts and artificial intelligence (AI)-based systems can collaborate in the domain of simulation modelling. We demonstrate that a framework constructed upon the refined GPT-3 Codex is capable of generating functionally valid simulations for queuing and inventory management systems when provided with a verbal explanation. As a result, the language model could produce simulation models for inventory and process control. These results, along with the rapid improvement of language models, enable a significant simplification of simulation model development. Our study offers guidelines and a design of a natural language processing-based framework on how to build simulation models of logistics systems automatically, given the verbal description. In generalised terms, our work offers a technological underpinning of human-AI collaboration for the development of simulation models.
</description>
<pubDate>Fri, 16 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164295</guid>
<dc:date>2024-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>HiTop 2.0: combining topology optimisation with multiple feature size controls and human preferences</title>
<link>https://hdl.handle.net/1721.1/164294</link>
<description>HiTop 2.0: combining topology optimisation with multiple feature size controls and human preferences
Schiffer, Gillian; Ha, Dat Quoc; Carstensen, Josephine V
Topology optimisation is a computational design approach that generates high-performing, efficient structures uniquely suited to a design engineer’s goal. However, there exist two major obstacles to the accessibility, or ease of use, of topology optimisation: expensive computational costs and users’ binary decision between personal intuition and the algorithm’s result. Human-informed topology optimisation, or HiTop, presents an alternative approach to topology optimisation when a user lacks access to a high-performance computer or knowledge of code parameters. HiTop 2.0 prompts users to interactively identify a region of interest in the preliminary design and modify the size of the solid and/or void features. The novel contribution of this paper implements multi-phase minimum and maximum solid feature size controls in HiTop 2.0, and demonstrates 2D and 3D benchmark examples, including test cases that show how the user can interactively address issues related to eigenvalues, stress, and energy absorption, while solving the minimum compliance problem.
</description>
<pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164294</guid>
<dc:date>2023-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Integrated urban heat sinks for low-carbon neighbourhoods: dissipating heat to the ground and sky through building structures</title>
<link>https://hdl.handle.net/1721.1/164293</link>
<description>Integrated urban heat sinks for low-carbon neighbourhoods: dissipating heat to the ground and sky through building structures
Gascón Alvarez, Eduardo; Feickert, Kiley; Ismail, Mohamed A; Mueller, Caitlin T; Norford, Leslie K
In a global context of simultaneous urbanization and rising ambient temperatures, it is imperative to design heat-resilient and material-efficient neighbourhoods that respond to the pressing demand for housing with minimal environmental impact. With this goal in mind, the work presented here focuses on the integration of heat dissipation systems within structural building components, introducing a novel framework for their systems-level simulation and design. Two well-studied, low-cost systems (shallow geothermal and night-sky cooling) are modelled within a parametric design workflow that combines bottom-up structural embodied carbon calculations with annual building energy simulations that account for heat sink availability. The proposed method results in a fast and reliable early-stage design tool that allows urban planners, policymakers, and designers to evaluate the suitability of available heat dissipation technologies across climates and urban morphologies. This paper analyzes specifically the multi-domain performance of a hypothetical urban geometry within three different cooling-dominated locations (Algiers, Cairo, and Bangkok).
</description>
<pubDate>Sun, 04 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164293</guid>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>China’s Potential Lessons from Ukraine for Conflict over Taiwan</title>
<link>https://hdl.handle.net/1721.1/164292</link>
<description>China’s Potential Lessons from Ukraine for Conflict over Taiwan
Taylor Fravel, M
What lessons for a conflict over Taiwan might China be learning from Russia’s invasion of Ukraine and the global responses to the war? And what are the strategic implications of these lessons? To answer these questions, I examine how the war in Ukraine may be shaping China’s assessments of the political, military and economic costs of military action against Taiwan, and how these assessments may influence China’s decision to use force against Taiwan.
</description>
<pubDate>Mon, 03 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164292</guid>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies</title>
<link>https://hdl.handle.net/1721.1/164291</link>
<description>BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies
Overney, Cassandra; Moe, Cassandra; Chang, Alvin; Gillani, Nabeel
Public school districts across the United States (US) play a pivotal role in shaping access to quality education through their student assignment policies—most prominently, school attendance boundaries. Community engagement processes for changing such policies, however, are often opaque, cumbersome, and highly polarizing—hampering equitable access to quality schools in ways that can perpetuate disparities in achievement and future life outcomes. In this paper, we describe a collaboration with a large US public school district serving nearly 150,000 students to design and evaluate a new sociotechnical system, “BoundarEase”, for fostering more constructive community engagement around changing school attendance boundaries. Through a formative study with 16 community members, we first identify several frictions in existing community engagement processes during boundary planning, like individualistic over collective thinking; a failure to understand and empathize with different community members when considering policy impacts; and challenges in accessing and understanding the impacts of boundary changes. We then use these frictions to inspire the design and development of BoundarEase, a web platform that allows community members to explore and offer feedback on potential boundaries based on their preferences. A user study with 12 community members reveals that BoundarEase prompts reflection among community members on how policies might impact families beyond their own, and increases transparency around the details of policy proposals. Our paper offers education researchers insights into the challenges and opportunities involved in community engagement for designing student assignment policies; human-computer interaction researchers a case study of how new sociotechnical systems might help mitigate polarization in local policymaking; and school districts a practical tool they might use to facilitate community engagement to foster more equitable student assignment policies.
Cassandra Overney, Cassandra Moe, Alvin Chang, and Nabeel Gillani. 2025. BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies. Proc. ACM Hum.-Comput. Interact. 9, 2, Article CSCW040 (May 2025), 37 pages.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164291</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-objective Evolutionary Learning for Near Pareto-Optimal Optimization of Solar Deployment</title>
<link>https://hdl.handle.net/1721.1/164290</link>
<description>Multi-objective Evolutionary Learning for Near Pareto-Optimal Optimization of Solar Deployment
Sigrist, Cooper; Li, Archimedes; Zhang, Alice; Lechowicz, Adam; Bashir, Noman; Lertsaroj, Pichsinee; Bahlous-Boldi, Ryan; Hajiesmaili, Mohammad
Existing residential rooftop photovoltaic (PV) installations in the United States are inequitable, as they are concentrated in high-income neighborhoods, and carbon-inefficient because they are often not located in electric grids dominated by fossil-fuel generators. Prior work, however, shows that prioritizing socioeconomic equity can also significantly increase the carbon efficiency of new installations. In this paper, we formalize the problem of site selection for rooftop PV installations as a multi-objective optimization problem, with metrics including energy generation, carbon offsetting, and demographic equity. We introduce a novel method called Evolutionary Value Assignment (EVA) that uses a neural network trained via evolutionary learning to select ideal sites for deployment. We evaluate our proposed approach in a case study using a dataset of U.S. solar generation and demographic information. Compared to projections of current installation trends, our method improves Carbon Efficiency by 43%, Income Equity by 41%, and Racial Equity by 24%, while increasing Energy Generation Potential by up to 10%. Therefore, our optimized placement can achieve the estimated carbon offset needed for net-zero emissions from electricity generation earlier than current deployment trends.
BUILDSYS ’25, Golden, CO, USA
</description>
<pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164290</guid>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Robust and expert-agnostic digital twin calibration via ensemble learning and Bayesian optimization</title>
<link>https://hdl.handle.net/1721.1/164289</link>
<description>Robust and expert-agnostic digital twin calibration via ensemble learning and Bayesian optimization
Zhan, Sicheng; Cui, Bosen
Digital twins have emerged as a critical tool in tackling climate change. Considering the data scarcity of complex systems, a promising approach to developing digital twins involves combining physics-based models with data assimilation. However, model calibration remains challenging due to uncertainties in both the physical models and observational data, and the reliance on domain knowledge. In this study, we develop an ensemble learning-based approach that aggregates sub-models with diversified calibration configurations. The proposed method streamlines calibration without expert-driven parameter screening and improves the digital twin's extrapolation capability, enabling more robust predictive applications. We demonstrate the effectiveness of our approach by calibrating the energy model of an office building, significantly reducing the extrapolation error and the associated risks. To the best of our knowledge, this is the first study to facilitate the calibration of physics-based models using ensemble learning, especially in the parameter space.
BUILDSYS ’25, Golden, CO, USA
</description>
<pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164289</guid>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Talk to the Hand: an LLM-powered Chatbot with Visual Pointer as Proactive Companion for On-Screen Tasks</title>
<link>https://hdl.handle.net/1721.1/164288</link>
<description>Talk to the Hand: an LLM-powered Chatbot with Visual Pointer as Proactive Companion for On-Screen Tasks
Prasongpongchai, Thanawit; Pataranutaporn, Pat; Lertsutthiwong, Monchai; Maes, Pattie
This paper presents Pointer Assistant, a novel human-AI interaction technique for on-screen tasks. The design features a chatbot displayed as an extra mouse pointer, alongside the user’s, which proactively gives feedback on user actions while directing them to relevant areas on the screen and responding to the user’s direct chat messages. The effectiveness of the design’s key characteristics, pointer form and proactivity, was investigated in a study involving 220 participants in a financial budget planning task. Results demonstrated that the pointer design and interaction reduced task load while improving satisfaction with the experience, and increased the number of budget categories ideated during the task compared to the traditional passive chat log design. Participants viewed Pointer Assistant as a fun, innovative, and helpful visual guide while noting that its assertiveness can be improved. Future developments could offer even further enhancements to the user experience of human-AI collaboration and task outcomes.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164288</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>WireBend-kit: A Computational Design and Fabrication Toolkit for Wirebending Custom 3D Wireframe Structures</title>
<link>https://hdl.handle.net/1721.1/164287</link>
<description>WireBend-kit: A Computational Design and Fabrication Toolkit for Wirebending Custom 3D Wireframe Structures
Faruqi, Faraz; Paonaskar, Josha; Schuler, Riley; Prevey, Aiden; Taylor, Carson; Tak, Anika; Guinto, Anthony; Shilamkar, Eeshani; Cheenaruenthong, Natarith; Nisser, Martin
This paper introduces WireBend-kit, a desktop wirebending machine and computational design tool for creating 3D wireframe structures. Combined, they allow users to rapidly and inexpensively create custom 3D wireframe structures from aluminum wire. Our design tool is implemented in freely available software and allows users to generate virtual wireframe designs and assess their fabricability. A path-planning procedure automatically converts the wireframe design into fabrication instructions for our machine while accounting for material elasticity and kinematic error sources. The custom machine costs $293 in parts and can form aluminum wire into 3D wireframe structures through an ordered sequence of feed, bend, and rotate instructions. Our technical evaluation reveals our system’s ability to overcome odometrically accumulating errors inherent to wirebending in order to produce accurate 3D structures from inexpensive hardware. Finally, we provide application examples demonstrating the design space enabled by WireBend-kit.
SCF ’25, Cambridge, MA, USA
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164287</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments</title>
<link>https://hdl.handle.net/1721.1/164286</link>
<description>Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments
Gosciak, Jennah; Balagopalan, Aparna; Ouyang, Derek; Koenecke, Allison; Ghassemi, Marzyeh; Ho, Daniel
Prior work has documented widespread racial and ethnic inequities across sectors, such as healthcare, finance, and technology. Across all of these domains, conducting disparity assessments at regular time intervals is critical for surfacing potential biases in decision-making and improving outcomes across demographic groups. Because disparity assessments fundamentally depend on the availability of demographic information, their efficacy is limited by the availability and consistency of available demographic identifiers. While prior work has considered the impact of missing data on fairness, little attention has been paid to the role of delayed demographic data. Delayed data, while eventually observed, might be missing at the critical point of monitoring and action – and delays may be unequally distributed across groups in ways that distort disparity assessments. We characterize such impacts in healthcare, using electronic health records of over 5M patients across primary care practices in all 50 states. Our contributions are threefold. First, we document the high rate of race and ethnicity reporting delays in a healthcare setting and demonstrate widespread variation in rates at which demographics are reported across different groups. Second, through a set of retrospective analyses using real data, we find that such delays impact disparity assessments and hence conclusions made across a range of consequential healthcare outcomes, particularly at more granular levels of state-level and practice-level assessments. Third, we find limited ability of conventional methods that impute missing race in mitigating the effects of reporting delays on the accuracy of timely disparity assessments. Our insights and methods generalize to many domains of algorithmic fairness where delays in the availability of sensitive information may confound audits, thus deserving closer attention within a pipeline-aware machine learning framework.
FAccT ’25, Athens, Greece
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164286</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Securing Cryptographic Software via Typed Assembly Language</title>
<link>https://hdl.handle.net/1721.1/164285</link>
<description>Securing Cryptographic Software via Typed Assembly Language
Song, Shixin; Dong, Tingzhen; Nwabueze, Kosi; Zanders, Julian; Erbsen, Andres; Chlipala, Adam; Yan, Mengjia
Authors of cryptographic software are well aware that their code should not leak secrets through its timing behavior, and, until 2018, they believed that following industry-standard constant-time coding guidelines was sufficient. However, the revelation of the Spectre family of speculative execution attacks injected new complexities.
To block speculative attacks, prior work has proposed annotating the program's source code to mark secret data, with hardware using this information to decide when to speculate (i.e., when only public values are involved) or not (when secrets are in play). While these solutions are able to track secret information stored on the heap, they suffer from limitations that prevent them from correctly tracking secrets on the stack, at a cost in performance.
This paper introduces SecSep, a transformation framework that rewrites assembly programs so that they partition secret and public data on the stack. By moving from the source-code level to assembly rewriting, SecSep is able to address limitations of prior work. The key challenge in performing this assembly rewriting stems from the loss of semantic information through the lengthy compilation process. The key innovation of our methodology is a new variant of typed assembly language (TAL), Octal, which allows us to address this challenge. Assembly rewriting is driven by compile-time inference within Octal. We apply our technique to cryptographic programs and demonstrate that it enables secure speculation efficiently, incurring a low average overhead of 1.2%.
CCS ’25, Taipei
</description>
<pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164285</guid>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Study on LLMs for Promptagator-Style Dense Retriever Training</title>
<link>https://hdl.handle.net/1721.1/164284</link>
<description>Study on LLMs for Promptagator-Style Dense Retriever Training
Gwon, Daniel; Jedidi, Nour; Lin, Jimmy
Promptagator demonstrated that Large Language Models (LLMs) with few-shot prompts can be used as task-specific query generators for fine-tuning domain-specialized dense retrieval models. However, the original Promptagator approach relied on proprietary and large-scale LLMs which users may not have access to or may be prohibited from using with sensitive data. In this work, we study the impact of open-source LLMs at accessible scales (≤14B parameters) as an alternative. Our results demonstrate that open-source LLMs as small as 3B parameters can serve as effective Promptagator-style query generators. We hope our work will inform practitioners with reliable alternatives for synthetic data generation and give insights to maximize fine-tuning results for domain-specific applications. Our code is available at https://www.github.com/mitll/promptodile
CIKM ’25, Seoul, Republic of Korea
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164284</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>One-Sided Bounded Noise: Theory, Optimization Algorithms and Applications</title>
<link>https://hdl.handle.net/1721.1/164283</link>
<description>One-Sided Bounded Noise: Theory, Optimization Algorithms and Applications
Xiao, Hanshen; Wan, Jun; Shi, Elaine; Devadas, Srinivas
We investigate the optimal trade-off between utility and privacy using one-sided perturbation. Unlike conventional privacy-preserving statistical releases, randomization for obfuscating side-channel information is often constrained by infrastructure limitations. In practical scenarios, these constraints may only allow positive and bounded perturbations. For example, extending processing time or sending and storing dummy messages/data is typically feasible. However, implementing modifications in the opposite direction is challenging due to restrictions imposed by hardware capacity, communication protocols, and data management systems. In this paper, we establish the foundation of the positive noise mechanism within three semantic privacy frameworks: Differential Privacy (DP), Maximal Leakage (MaxL), and Probably Approximately Correct (PAC) Privacy. We then present a series of results that characterize or approximate the optimal one-sided noise distribution, subject to a second-moment budget and a bounded maximal magnitude. Building on this theoretical foundation, we develop efficient tools to solve the underlying optimization problems. Through experiments conducted in various scenarios, we demonstrate that existing techniques, such as Truncated Biased Laplace noise, are often suboptimal and result in excessive performance degradation. For instance, in an anonymous communication system with a 250K message budget, our optimized DP noise mechanism achieves a 21× reduction in dummy messages and an 18× reduction in dummy message latency overhead compared to traditional methods.
CCS ’25, Taipei, Taiwan
</description>
<pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164283</guid>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction</title>
<link>https://hdl.handle.net/1721.1/164282</link>
<description>TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Wang, Guanyun; Chen, Chuang; Jin, Xiao; Chen, Yulu; Zheng, Yangweizhe; Zhen, Qianzi; Zhang, Yang; Li, Jiaji; Yang, Yue; Tao, Ye; Luo, Shijian; Sun, Lingyun
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, while its applications face challenges as it remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinating control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164282</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>From blades to tracks: a case study in structural reuse of curved surfaces for circular design</title>
<link>https://hdl.handle.net/1721.1/164281</link>
<description>From blades to tracks: a case study in structural reuse of curved surfaces for circular design
Pupping, Jesse; Riso, Marzia; Popescu, Mariana; Bousseau, Adrien; Joustra, Jelle
We explore the fabrication of curved surfaces by reusing panels extracted from decommissioned wind turbine blades, using cycling pumptracks as a case study. We first present real-world prototypes of pumptrack modules that we manufactured to evaluate the practicality of this reuse scenario and to define the boundary conditions for harvesting blade panels and assembling a track. We then propose an algorithm to optimize the segmentation of a wind turbine blade into quadrilateral panels whose sides fall within a small set of compatible boundaries. These panels form a library of modules that designers can connect side by side to create pumptracks of various lengths and curvatures. Together, these contributions provide a proof-of-concept of how computer-aided design and manufacturing can support circular design through the reuse of curved surfaces.
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164281</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Fintech Innovation in China</title>
<link>https://hdl.handle.net/1721.1/164280</link>
<description>Fintech Innovation in China
Cusumano, Michael
This column discusses innovation in payment platforms in China and what Western central banks and governments might learn. Private Chinese companies led in the introduction of the mobile payment systems Alipay and WeChat Pay, using QR codes, and most transactions in the country are now digital. China has also banned private cryptocurrencies and stablecoins and introduced a public digital currency and payment system using crypto technology. However, it has been very difficult to get users to switch to the new central bank digital currency, despite aggressive promotions, subsidies, and mandates. China's experience suggests that other central banks around the world will have difficulty introducing their own digital currencies and competing with private stablecoins and cryptocurrencies as well as other private digital payment platforms.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164280</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>The Stable Marriage Problem and Sudoku</title>
<link>https://hdl.handle.net/1721.1/164279</link>
<description>The Stable Marriage Problem and Sudoku
Borodin, Matvey; Chen, Eric; Duncan, Aidan; Khovanova, Tanya; Litchev, Boyan; Liu, Jiahe; Moroz, Veronika; Qian, Matthew; Raghavan, Rohith; Rastogi, Garima; Voigt, Michael
Are you having trouble getting married? These days, there are lots of products on the market for dating, from apps to websites and matchmakers, but we know a simpler way! That’s right—your path to coupled life isn’t through Tinder; it’s through Sudoku! Read our fabulous paper, where we explore the Stable Marriage Problem to help you find happiness and stability in marriage through math. As a bonus, you get two Sudoku puzzles with a new flavor.
</description>
<pubDate>Wed, 07 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164279</guid>
<dc:date>2024-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>How Medical Technologies Materialize Oppression</title>
<link>https://hdl.handle.net/1721.1/164278</link>
<description>How Medical Technologies Materialize Oppression
Boulicault, Marion
Biomedical practice can encode and perpetuate oppressive ideologies. This encoding and perpetuation, scholars like Liao and Carbonell (2023) convincingly argue, can occur not only via social practices, but also through medical technologies themselves. In other words, medical technologies can “materialize oppression”: they can be biased in a way that systematically “reflects and perpetuates unjust power relations” (Liao and Carbonell 2023, 9).&#13;
&#13;
In this paper, I examine how medical technologies materialize oppression, offering a preliminary, non-exhaustive taxonomy of the mechanisms of this materialization. While scholars like Liao and Carbonell focus primarily on physical medical instruments, I offer new examples that illustrate these mechanisms at work, focusing on medical data classification technologies and infrastructures. A clearer view of how these mechanisms operate suggests possibilities for building technologies that liberate rather than oppress.
</description>
<pubDate>Mon, 03 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164278</guid>
<dc:date>2023-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Wild Wood Gridshells: Mixed-Reality Construction of Nonstandard Wood</title>
<link>https://hdl.handle.net/1721.1/164277</link>
<description>Wild Wood Gridshells: Mixed-Reality Construction of Nonstandard Wood
Cousin, Tim; Alkhayat, Latifa; Pearl, Natalie; Dewart, Christopher B; Mueller, Caitlin
Irregular wood is often downcycled despite having significant embedded strength. Reintegrating this wood into structural assemblies can improve material efficiency in the built environment. To leverage this potential, this work implemented material logic in a design-to-fabrication workflow for building structures using bifurcated tree branches (Figure 1). This process is demonstrated through the design and construction of a prototype. A user-oriented computational interface is proposed that manages irregular geometries, matching and optimization algorithms, and structural simulation for design iteration. The demonstrated workflow, which concludes with augmented reality (AR) assisted fabrication, facilitates designing with varying materials, enabling the upcycling of a wide range of nonstandard building elements. At scale, this methodology can significantly reduce the environmental impact of construction.
</description>
<pubDate>Mon, 03 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164277</guid>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>MiNav: Autonomous Drone Navigation Indoors using Millimeter-Waves</title>
<link>https://hdl.handle.net/1721.1/164276</link>
<description>MiNav: Autonomous Drone Navigation Indoors using Millimeter-Waves
Lam, Maisy; Herrera, Joshua; Afzal, Sayed Saad; Zhou, Kaichen; Adib, Fadel
We present the design, implementation, and evaluation of MiNav, a system capable of accurate, efficient, and fully autonomous drone navigation in challenging indoor environments, including those where vision-based systems fail. MiNav builds on recent literature in millimeter-wave (mmWave) backscatter localization and makes the leap to full end-to-end autonomous mmWave-based navigation. MiNav leverages a mmWave radar mounted on a drone and one or more mmWave backscatter tags deployed in the environment. To enable autonomous navigation, our design introduces key innovations. First, MiNav derives a novel Joint DOP-SNR formulation to probabilistically model uncertainty in localization, and uses this uncertainty to generate an RF-Navigation Map that maximizes the accuracy and reliability of mmWave backscatter localization throughout an environment. It then applies an RF-aware Autonomous Path Planning technique that jointly optimizes for navigation efficiency and localization performance. We built an end-to-end real-time implementation of MiNav consisting of a custom-built drone and mmWave backscatter tags, and tested it in practical indoor environments. We ran over 165 successful autonomous missions across different tag deployments and demonstrate a median 3D navigation error of 9.1 cm. Our results also show that, in comparison to baseline implementations that rely on more classical uncertainty metrics, MiNav achieves a 20% increase in navigation reliability and nearly 3x improvement in self-tracking in millimeter-wave backscatter localization. Finally, we demonstrate first-of-its-kind capabilities, such as fully autonomous, end-to-end mmWave-based drone navigation and path planning in featureless and dark environments. Demo video: http://y2u.be/EpnWibRcxBI
</description>
<pubDate>Wed, 03 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164276</guid>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>A Civics Lesson for Corporations Seeking to Join a University Community of Innovation</title>
<link>https://hdl.handle.net/1721.1/164275</link>
<description>A Civics Lesson for Corporations Seeking to Join a University Community of Innovation
Wright, Randall S.
Civics, according to Merriam-Webster (2023), is “a social science dealing with the rights and duties of citizens.” We’ve reached an inflection point. The headline of the July 2023 edition of University-Industry Engagement Advisor (Lewis 2023) reads “Before signing off on strategic partnerships, experts stress value of solid due diligence process.”
</description>
<pubDate>Mon, 30 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164275</guid>
<dc:date>2023-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial intelligence for telemedicine diabetic retinopathy screening: a review</title>
<link>https://hdl.handle.net/1721.1/164274</link>
<description>Artificial intelligence for telemedicine diabetic retinopathy screening: a review
Nakayama, Luis Filipe; Zago Ribeiro, Lucas; Novaes, Frederico; Miyawaki, Isabele Ayumi; Miyawaki, Andresa Emy; de Oliveira, Juliana Angélica Estevão; Oliveira, Talita; Malerbi, Fernando Korn; Regatieri, Caio Vinicius Saito; Celi, Leo Anthony; Silva, Paolo S
PURPOSE: This study aims to compare artificial intelligence (AI) systems applied in diabetic retinopathy (DR) teleophthalmology screening, currently deployed systems, fairness initiatives and the challenges for implementation.&#13;
METHODS: The review included articles retrieved from a PubMed/Medline/EMBASE literature search strategy regarding telemedicine, DR and AI. The screening criteria included human articles in English, Portuguese or Spanish related to telemedicine and AI for DR screening. The authors' affiliations and the study's population income group were classified according to the World Bank Country and Lending Groups.&#13;
RESULTS: The literature search yielded a total of 132 articles, and nine were included after full-text assessment. The selected articles were published between 2004 and 2020 and were grouped as telemedicine systems, algorithms, economic analysis and image quality assessment. Four telemedicine systems that perform a quality assessment, image preprocessing and pathological screening were reviewed. Data and post-deployment bias assessments are not performed in any of the algorithms, and none of the studies evaluate the social impact of implementation. There is a lack of representativeness in the reviewed articles, with most authors and target populations from high-income countries and no low-income country representation.&#13;
CONCLUSIONS: Telemedicine and AI hold great promise for augmenting decision-making in medical care, expanding patient access and enhancing cost-effectiveness. Economic studies and social science analysis are crucial to support the implementation of AI in teleophthalmology screening programs. Promoting fairness and generalizability in automated systems combined with telemedicine screening programs is not straightforward. Improving data representativeness, reducing biases and promoting equity in deployment and post-deployment studies are all critical steps in model development.
</description>
<pubDate>Tue, 12 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164274</guid>
<dc:date>2023-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>The Removal Chain &amp; Sentient Life Cycles</title>
<link>https://hdl.handle.net/1721.1/164273</link>
<description>The Removal Chain &amp; Sentient Life Cycles
Schrage, Leonard; Duarte, Fábio; Ratti, Carlo
As our cities grow, managing waste becomes increasingly challenging. Global plastic waste is set to almost triple by 2060 (OECD 2020) while recycling rates remain below expectations.&#13;
&#13;
At the same time, landfills are being relocated away from cities, reaching their maximum capacities, or forced to shut down due to contamination with hazardous materials. As waste management infrastructure is increasingly removed from urban areas, we become further disconnected from waste’s ubiquitous, indispensable, yet invisible life of its own.&#13;
&#13;
In recent years, supply chain issues have been an omnipresent reflection of our consumerist reality. For example, when the Ever Given—one of the largest container ships in the world—got stuck in the Suez Canal in 2021 (Chellel et al. 2021), we were reminded that our globalized goods travel a long way around the world before they arrive at our doorstep. Still, we tend to forget that there is a life after the supply. On a planet with finite resources and growing piles of (hazardous) trash, we need to look further than the obvious. We urgently need to embrace a circular economy to combat the climate crisis. And to do so, we need to mind both the supply and removal chains.
</description>
<pubDate>Mon, 03 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164273</guid>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>A two-level machine learning approach for predicting thermal striping in T-junctions with upstream elbow</title>
<link>https://hdl.handle.net/1721.1/164272</link>
<description>A two-level machine learning approach for predicting thermal striping in T-junctions with upstream elbow
Wang, Yu-Jou; Baglietto, Emilio; Shirvan, Koroush
Thermal striping is a phenomenon characterized by oscillatory mixing of non-isothermal streams, which is commonly seen in industrial processes such as nuclear coolant piping, petrochemical plants and liquefied natural gas transportation. The oscillatory mixing of hot and cold fluid can produce thermal field fluctuations and pose a potential risk of high-cycle thermal fatigue failures. Predicting and evaluating spatiotemporal fluctuations in thermal striping often requires high resolution and massive computational power. Although there have been extensive studies using machine learning algorithms for surrogate modeling, research focused on spatiotemporal fluctuation predictions is very limited. Due to the high dimensionality, such predictions often require complex algorithms with a large amount of high-fidelity training data, which limits the adoption of such methods for industrial applications. In this research, a two-level machine learning framework based on turbulence coherent structures is proposed and its application to a practical problem is demonstrated. The two-level design leverages vortex identification and local bias correction techniques, efficiently reducing the number of full-order simulations required for training. In the first level, well-organized coherent structures are extracted by performing Proper Orthogonal Decomposition on local parameters and then a tree-based machine-learning model is used to down-select the reference structures for the field reconstruction. In the second level, a parameterized convolutional neural network is trained to predict the bias introduced by the reference-structure approximation. The demonstration of the methodology shows that the method can accurately capture the fluctuation frequencies and amplitudes of the spatiotemporal fields in a highly variational setting. Based on the vortex identification method, the methodology is expected to be applicable to general phenomena driven by large coherent structures.
</description>
<pubDate>Sun, 02 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164272</guid>
<dc:date>2024-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>A Dual-Branch Coupled Fourier Neural Operator for High-Resolution Multi-Phase Flow Modeling in Porous Media</title>
<link>https://hdl.handle.net/1721.1/164258</link>
<description>A Dual-Branch Coupled Fourier Neural Operator for High-Resolution Multi-Phase Flow Modeling in Porous Media
Al Hashim, Hassan; Elyas, Odai; Williams, John
This paper investigates a physics-informed surrogate modeling framework for multi-phase flow in porous media based on the Fourier Neural Operator. Traditional numerical simulators, though accurate, suffer from severe computational bottlenecks due to fine-grid discretizations and the iterative solution of highly nonlinear partial differential equations. By parameterizing the kernel integral directly in Fourier space, the operator provides a discretization-invariant mapping between function spaces, enabling efficient spectral convolutions. We introduce a Dual-Branch Adaptive Fourier Neural Operator with a shared Fourier encoder and two decoders: a saturation branch that uses an inverse Fourier transform followed by a multilayer perceptron and a pressure branch that uses a convolutional decoder. Temporal information is injected via Time2Vec embeddings and a causal temporal transformer, conditioning each forward pass on step index and time step to maintain consistent dynamics across horizons. Physics-informed losses couple data fidelity with residuals from mass conservation and Darcy pressure, enforcing the governing constraints in Fourier space; truncated spectral kernels promote generalization across meshes without retraining. On SPE10-style heterogeneities, the model shifts the infinity-norm error mass into the 10⁻² to 10⁻¹ band during early transients and sustains lower errors during pseudo-steady state. In zero-shot three-dimensional coarse-to-fine upscaling from 30 × 110 × 5 to 60 × 220 × 5, it attains R² = 0.90, RMSE = 4.4 × 10⁻², and MAE = 3.2 × 10⁻², with more than 90% of voxels below five percent absolute error across five unseen layers, while the end-to-end pipeline runs about three times faster than a full-order fine-grid solve and preserves water-flood fronts and channel connectivity. Benchmarking against established baselines indicates a scalable, high-fidelity alternative for high-resolution multi-phase flow simulation in porous media.
</description>
<pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164258</guid>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Unveiling IPv6 Scanning Dynamics: A Longitudinal Study Using Large Scale Proactive and Passive IPv6 Telescopes</title>
<link>https://hdl.handle.net/1721.1/164257</link>
<description>Unveiling IPv6 Scanning Dynamics: A Longitudinal Study Using Large Scale Proactive and Passive IPv6 Telescopes
Tanveer, Hammas Bin; Chan, Echo; Mok, Ricky K. P.; Kappes, Sebastian; Richter, Philipp; Gasser, Oliver; Ronan, John; Berger, Arthur; Claffy, kc
We introduce new tools and vantage points to develop and integrate proactive techniques to attract IPv6 scan traffic, thus enabling its analysis. By deploying the largest-ever IPv6 proactive telescope in a production ISP network, we collected over 600M packets of unsolicited traffic from 1.9k Autonomous Systems in 10 months. We characterized the sources of unsolicited traffic, evaluated the effectiveness of five major features across the network stack, and inferred scanners' sources of target addresses and their strategies.
</description>
<pubDate>Thu, 04 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164257</guid>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>HARMONY: A Scalable Distributed Vector Database for High-Throughput Approximate Nearest Neighbor Search</title>
<link>https://hdl.handle.net/1721.1/164256</link>
<description>HARMONY: A Scalable Distributed Vector Database for High-Throughput Approximate Nearest Neighbor Search
Xu, Qian; Zhang, Feng; Li, Chengxi; Cao, Lei; Chen, Zheng; Zhai, Jidong; Du, Xiaoyong
Approximate Nearest Neighbor Search (ANNS) is essential for various data-intensive applications, including recommendation systems, image retrieval, and machine learning. Scaling ANNS to handle billions of high-dimensional vectors on a single machine presents significant challenges in memory capacity and processing efficiency. To address these challenges, distributed vector databases leverage multiple nodes for the parallel storage and processing of vectors. However, existing solutions often suffer from load imbalance and high communication overhead, primarily due to traditional partition strategies that fail to effectively distribute the workload. In this paper, we introduce Harmony, a distributed ANNS system that employs a novel multi-granularity partition strategy, combining dimension-based and vector-based partition. This strategy ensures a balanced distribution of computational load across all nodes while effectively minimizing communication costs. Furthermore, Harmony incorporates an early-stop pruning mechanism that leverages the monotonicity of distance computations in dimension-based partition, resulting in significant reductions in both computational and communication overhead. We conducted extensive experiments on diverse real-world datasets, demonstrating that Harmony outperforms leading distributed vector databases, achieving 4.63× throughput on average on four nodes and a 58% performance improvement over traditional distribution for skewed workloads.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164256</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Kernel Extension DSLs Should Be Verifier-Safe!</title>
<link>https://hdl.handle.net/1721.1/164255</link>
<description>Kernel Extension DSLs Should Be Verifier-Safe!
Solleza, Franco; Adam, Justus; Crotty, Andrew; Narayan, Akshay; Schwarzkopf, Malte; Tatbul, Nesime
eBPF allows developers to write safe operating system extensions, but writing these extensions remains challenging because it requires detailed knowledge of both the extension's domain and eBPF's programming interface. Most importantly, the extension must pass the eBPF verifier.&#13;
This paper argues that DSLs for extensions should guarantee verifier-safety: valid DSL programs should result in eBPF code that always passes the verifier. This avoids complex debugging and the need for extension developers to be eBPF experts. We show that three existing DSLs for different domains are compatible with verifier-safety. Beyond verifier-safety, practical extension DSLs must also achieve good performance. Inspired by database query optimization, we sketch an approach to creating DSL-specific optimizers capable of maintaining verifier-safety. A preliminary evaluation shows that optimizing verifier-safe extension performance is feasible.
eBPF ’25, September 8–11, 2025, Coimbra, Portugal
</description>
<pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164255</guid>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Experiencing EmbedNet: Embedding self-sensing to 3D casting objects</title>
<link>https://hdl.handle.net/1721.1/164254</link>
<description>Experiencing EmbedNet: Embedding self-sensing to 3D casting objects
Liu, Fangzheng; Dementyev, Artem; Wicaksono, Irmandy; Paradiso, Joseph
This paper introduces EmbedNet, a method for integrating dense sensor networks into casting objects. With EmbedNet, sensor nodes are seamlessly incorporated into casting objects during fabrication. The process involves extruding base materials like silicone rubber or liquid plastic and a custom-designed sensor strip using a hand-held extruder into a mold tailored to specific applications. The base material mixes with the sensor strip in the mold, and upon curing, the result is an object with a defined shape housing a sensor network. EmbedNet employs a small Host node to access sensor data from all nodes on the strip. Each sensor node is self-contained and provides status indications through an onboard RGB LED. The Host connects with all sensor nodes using just three wires: power, ground, and data. This one-wire communication is facilitated through a custom-designed software serial port for each sensor node. The paper showcases various applications of EmbedNet, including wearables, home sensing, and entertainment devices.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164254</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstrating NeuroFlux: A Non-Invasive Peripheral Magnetic Stimulation Device for Multimodal Haptic Feedback</title>
<link>https://hdl.handle.net/1721.1/164253</link>
<description>Demonstrating NeuroFlux: A Non-Invasive Peripheral Magnetic Stimulation Device for Multimodal Haptic Feedback
Huang, Bingjian; Chin, Sam; Wigdor, Daniel; Paradiso, Joseph
We demonstrate NeuroFlux, a wearable armband that delivers multimodal haptic feedback through non-invasive peripheral magnetic stimulation. Unlike conventional haptic devices limited to either tactile or kinesthetic modalities, NeuroFlux stimulates peripheral nerves to independently evoke both muscle movements and localized skin sensations. Our system features a custom-designed control circuit and a multi-coil armband, enabling precise, real-time control of stimulation location and intensity. This hardware innovation significantly expands the design space of haptic feedback by bridging kinesthetic and tactile modalities through a single, compact device. In our demonstration, participants will experience a wide range of magnetically induced haptic sensations, including independent stimulation of muscular and cutaneous nerves in the forearm. The setup includes interactive tasks that showcase NeuroFlux’s ability to generate diverse haptic effects such as finger flexion, wrist movement, as well as immersive virtual reality object interactions. By offering hands-on exposure to peripheral magnetic stimulation, we aim to spark new research directions in multimodal haptic feedback and make neural stimulation more accessible to the HCI community.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164253</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Supernotes: Driving Consensus in Crowd-Sourced Fact-Checking</title>
<link>https://hdl.handle.net/1721.1/164252</link>
<description>Supernotes: Driving Consensus in Crowd-Sourced Fact-Checking
De, Soham; Bakker, Michiel; Baxter, Jay; Saveski, Martin
X's Community Notes, a crowd-sourced fact-checking system, allows users to annotate potentially misleading posts. Notes rated as helpful by a diverse set of users are prominently displayed below the original post. While demonstrably effective at reducing misinformation's impact when notes are displayed, there is an opportunity for notes to appear on many more posts: for 91% of posts where at least one note is proposed, no notes ultimately achieve sufficient support from diverse users to be shown on the platform. This motivates the development of Supernotes: AI-generated notes that synthesize information from several existing community notes and are written to foster consensus among a diverse set of users. Our framework uses an LLM to generate many diverse Supernote candidates from existing proposed notes. These candidates are then evaluated by a novel scoring model, trained on millions of historical Community Notes ratings, selecting candidates that are most likely to be rated helpful by a diverse set of users. To test our framework, we ran a human subjects experiment in which we asked participants to compare the Supernotes generated by our framework to the best existing community notes for 100 sample posts. We found that participants rated the Supernotes as significantly more helpful, and when asked to choose between the two, preferred the Supernotes 75.2% of the time. Participants also rated the Supernotes more favorably than the best existing notes on quality, clarity, coverage, context, and argumentativeness. Finally, in a follow-up experiment, we asked participants to compare the Supernotes against LLM-generated summaries and found that the participants rated the Supernotes significantly more helpful, demonstrating that both the LLM-based candidate generation and the consensus-driven scoring play crucial roles in creating notes that effectively build consensus among diverse users.
WWW ’25, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164252</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>I Feel Your Pain: a Haptic Interface for Improving Pain Literacy</title>
<link>https://hdl.handle.net/1721.1/164251</link>
<description>I Feel Your Pain: a Haptic Interface for Improving Pain Literacy
Yin, Peggy; Chen, Sofia; Chang, Ethan
There is no sensation more universal and misunderstood than pain. While pain presents itself in nearly every eukaryotic organism, it remains one of the most elusive disease states to express, let alone treat. Here, we introduce Pain by Numbers, a haptic, immersive storytelling interface that facilitates user recognition and communication of low-to-medium-intensity pain, in order to improve pain literacy for patients, physicians, and society-at-large.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164251</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank</title>
<link>https://hdl.handle.net/1721.1/164250</link>
<description>Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank
Loveland, Donald; Wu, Xinyi; Zhao, Tong; Koutra, Danai; Shah, Neil; Ju, Mingxuan
Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs, while maintaining low similarity for non-interacted pairs. Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank -- a continuous version of matrix rank which encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, resulting in training speed improvements of up to 65.9%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21.2% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective -- their optimization of stable rank -- motivating a flexible regularization method that is easy to implement, yet effective at enhancing CF systems.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164250</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs</title>
<link>https://hdl.handle.net/1721.1/164249</link>
<description>Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs
Guo, Wenxuan; Wang, Runzhong; Xu, Yanyan; Jin, Yaohui
Facility location problems on graphs are ubiquitous in the real world and hold significant importance, yet their resolution is often impeded by NP-hardness. MIP solvers can find the optimal solutions but fail to handle large instances, while algorithm efficiency has a higher priority in cases of emergency. Recently, machine learning methods have been proposed to tackle such classical problems with fast inference, but they are limited to the myopic constructive pattern and only consider simple cases in Euclidean space. This paper introduces a unified and generalizable approach to tackle facility location problems on weighted graphs with deep reinforcement learning, demonstrating a keen awareness of complex graph structures. Striking a harmonious balance between solution quality and running time, our method stands out with superior efficiency and steady performance. Our model trained on small graphs is highly scalable and consistently generates high-quality solutions, achieving a speedup of more than 2000 times over Gurobi on instances with 1000 nodes. The experiments on Shanghai road networks further demonstrate its practical value in solving real-world problems. The source codes are available at https://github.com/AryaGuo/PPO-swap.
WWW ’25, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164249</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI</title>
<link>https://hdl.handle.net/1721.1/164248</link>
<description>Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI
Leong, Joanne; Ledo, David; Driscoll, Thomas; Grossman, Tovi; Fitzmaurice, George; Anderson, Fraser
Great characters are critical to the success of many forms of media, such as comics, games, and films. Designing visually compelling casts of characters requires significant skill and consideration, and there is a lack of specialized tools to support this endeavor. We investigate how AI-driven image-generation techniques can empower creatives to explore a variety of visual design possibilities for individual and groups of characters. Informed by interviews with character designers, Paratrouper is a multi-modal system that enables creating and experimenting with multiple permutations for character casts and visualizing them in various contexts as part of a holistic approach to design. We demonstrate how Paratrouper supports different aspects of the character design process, and share insights from its use by eight creators. Our work highlights the interplay between creative agency and serendipity, as well as the visual interrelationships among character aesthetics.
Joanne Leong, David Ledo, Thomas Driscoll, Tovi Grossman, George Fitzmaurice, and Fraser Anderson. 2025. Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 189, 1–20.
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164248</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>FiberCircuits: A Miniaturization Framework To Manufacture Fibers That Embed Integrated Circuits</title>
<link>https://hdl.handle.net/1721.1/164247</link>
<description>FiberCircuits: A Miniaturization Framework To Manufacture Fibers That Embed Integrated Circuits
Honnet, Cedric; Babatain, Wedyan; Luo, Yiyue; Kilic Afsar, Ozgun; Bensahel, Chloe; Nicita, Sarah; Zhu, Yunyi; Danielescu, Andreea; Gershenfeld, Neil; Paradiso, Joseph
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164247</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors</title>
<link>https://hdl.handle.net/1721.1/164246</link>
<description>FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors
Zheng, Ruonan; Fang, Jiawei; Yao, Yuan; Gao, Xiaoxia; Zuo, Chengxu; Guo, Shihui; Luo, Yiyue
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion-capturing system using daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). To address the inevitable sensor displacements in loose wearables which degrade joint tracking accuracy significantly, we identify the distinct characteristics of the flex and inertial sensor displacements and develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for sensor displacements based on such observations, resulting in a substantial improvement in motion capture accuracy. We also introduce a Pose Fusion Predictor to enhance multimodal sensor fusion. Extensive experiments demonstrate that our method achieves robust performance across varying body shapes and motions, significantly outperforming SOTA IMU approaches with a 19.5% improvement in angular error, a 26.4% improvement in elbow angular error, and a 30.1% improvement in positional error. FIP opens up opportunities for ubiquitous human-computer interactions and diverse interactive applications such as Metaverse, rehabilitation, and fitness analysis. Our project page can be seen at Flexible Inertial Poser.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164246</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>GPU-accelerated dynamic nonlinear optimization with ExaModels and MadNLP</title>
<link>https://hdl.handle.net/1721.1/164245</link>
<description>GPU-accelerated dynamic nonlinear optimization with ExaModels and MadNLP
Pacaud, François; Shin, Sungho
We investigate the potential of Graphics Processing Units (GPUs) to solve large-scale nonlinear programs with a dynamic structure. Using ExaModels, a GPU-accelerated automatic differentiation tool, and the interior-point solver MadNLP, we significantly reduce the time to solve dynamic nonlinear optimization problems. The sparse linear systems arising in the interior-point method are solved on the GPU using a hybrid solver that combines an iterative method with a sparse Cholesky factorization, harnessing the newly released NVIDIA cuDSS solver. Our results on the classical distillation column instance show that, despite a significant pre-processing time, the hybrid solver reduces the time per iteration by a factor of 25 for the largest instance.
2024 IEEE 63rd Conference on Decision and Control (CDC), Milan, Italy, 2024
</description>
<pubDate>Wed, 26 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164245</guid>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks</title>
<link>https://hdl.handle.net/1721.1/164244</link>
<description>Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks
Engelmann, Alexander; Shin, Sungho; Pacaud, François; Zavala, Victor M
The operation of large-scale infrastructure networks requires scalable optimization schemes. To guarantee safe system operation, a high degree of feasibility in a small number of iterations is important. Decomposition schemes can help to achieve scalability. In terms of feasibility, however, classical approaches, such as the alternating direction method of multipliers (ADMM), often converge slowly. In this work, we present primal decomposition schemes for hierarchically structured strongly convex quadratic programs. These schemes offer high degrees of feasibility in a small number of iterations in combination with global convergence guarantees. We benchmark their performance against the centralized off-the-shelf interior-point solver Ipopt and ADMM on problems with up to 300 000 decision variables and constraints. We find that the proposed approaches solve problems as fast as Ipopt, but with reduced communication and without requiring a full model exchange. Moreover, the proposed schemes achieve a higher accuracy than ADMM.
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164244</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coplanarity of rooted spanning-tree vectors</title>
<link>https://hdl.handle.net/1721.1/164243</link>
<description>Coplanarity of rooted spanning-tree vectors
Polettini, Matteo; Harunari, Pedro E.; Cengio, Sara D.; Lecomte, Vivien
Employing a recent technology of tree surgery, we prove a “deletion–constriction” formula for products of rooted spanning-trees on weighted directed graphs that generalizes deletion–contraction on undirected graphs. The formula implies that, letting τ_x^∅, τ_x^+, and τ_x^− be the rooted spanning-tree polynomials obtained, respectively, by removing both directed edges between two vertices, or by forcing the tree to pass through either edge, the vectors (τ_x^∅, τ_x^+, τ_x^−) are coplanar for all roots x. We deploy the result to give an alternative derivation of a recently found mutual linearity of stationary currents of Markov chains. We generalize deletion–constriction and current linearity to two pairs of edges and conjecture that similar results may hold for arbitrary subsets of edges.
</description>
<pubDate>Fri, 05 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164243</guid>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Search for the decay B0 → ϕϕ</title>
<link>https://hdl.handle.net/1721.1/164242</link>
<description>Search for the decay B0 → ϕϕ
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Aleksiejunas, R.
A search for the decay B0 → ϕϕ is made using pp collision data collected with the LHCb detector at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb^−1. No significant signal is observed, and an upper limit on the branching fraction of 1.3 (1.4) × 10^−8 at 90 (95)% confidence level is set. This result supersedes the previous LHCb study and improves the upper limit by a factor of two.
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164242</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Regulating Sommerfeld resonances for multi-state systems and higher partial waves</title>
<link>https://hdl.handle.net/1721.1/164241</link>
<description>Regulating Sommerfeld resonances for multi-state systems and higher partial waves
Parikh, Aditya; Sato, Ryosuke; Slatyer, Tracy R.
Long-range attractive interactions between dark matter particles can significantly enhance their annihilation, particularly at low velocities. This “Sommerfeld enhancement” is typically computed by evaluating the deformation of the two-particle wavefunction due to the long-range potential, while ignoring the physics associated with the annihilation, and then scaling the appropriate annihilation matrix elements by factors that depend on the wavefunction in the limit where the particles approach zero relative separation. It has long been recognized that this approach is a valid approximation only in the limit where the annihilation rate is small, and breaks down in the regime where the enhanced annihilation rate approaches the unitarity bound, in which case ignoring the impact of the annihilation physics on the two-particle wavefunction cannot be justified and leads to apparent violations of unitarity. In the case where the physics relevant to annihilation occurs at a parametrically shorter distance scale (higher energy scale) compared with the long-range potential, we provide a simple prescription for correcting the Sommerfeld enhancement for the effects of the short-range physics, valid for all partial waves and for systems where multiple states are coupled by the long-range potential.
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164241</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>A field guide to event-shape observables using optimal transport</title>
<link>https://hdl.handle.net/1721.1/164240</link>
<description>A field guide to event-shape observables using optimal transport
Cesarotti, Cari; LeBlanc, Matt
We lay out the phenomenological behavior of event-shape observables evaluated by solving optimal transport problems between collider events and reference geometries — which we name manifold distances — to provide guidance regarding their use in future studies. This discussion considers several choices related to the metric used to quantify these distances. We explore the differences between the various options, for the first time using a combination of analytical studies and simulated minimum-bias and multi-jet events. Making judicious choices when defining the metric and reference geometry can improve sensitivity to interesting signal features and reduce sensitivity to non-perturbative effects in QCD. The goal of this article is to provide a ‘field guide’ that can inform how choices made when defining a manifold distance can be tailored for the analysis at hand.
</description>
<pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164240</guid>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Search for the lepton-flavour-violating decays B0 → K*0τ±e∓</title>
<link>https://hdl.handle.net/1721.1/164239</link>
<description>Search for the lepton-flavour-violating decays B0 → K*0τ±e∓
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A first search at LHCb for the lepton-flavour-violating decays B0 → K*0τ±e∓ is presented. The analysis is performed using a sample of proton-proton collision data, collected with the LHCb detector at a centre-of-mass energy of 13 TeV between 2016 and 2018, corresponding to an integrated luminosity of 5.4 fb^−1. No significant signal is observed, and upper limits on the branching fractions are determined to be B(B0 → K*0τ−e+) &lt; 5.9 (7.1) × 10^−6 and B(B0 → K*0τ+e−) &lt; 4.9 (5.9) × 10^−6 at the 90% (95%) confidence level. These results correspond to the current most stringent upper limits for b → sτℓ transitions.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164239</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Guidelines for environmental life cycle assessment of cultivated meat</title>
<link>https://hdl.handle.net/1721.1/164238</link>
<description>Guidelines for environmental life cycle assessment of cultivated meat
Blackstone, Nicole T.; Pavlova, Anisiya; Trinidad, Kirsten R.; Nikkhah, Amin; Sinke, Pelle; Heller, Martin; Duncan-Duggal, Joe; Ridoutt, Brad; Smetana, Sergiy; Makov, Tamar; Shabtai, Shira
Purpose: Cultivated meat is produced by growing animal cells in vitro without using, or reducing the use of, animals for meat, poultry, or seafood production. Responsibly and consistently investigating the environmental impacts of cultivated meat is essential to provide reliable performance benchmarks and realistic comparisons with animal-based production systems. In this contribution, we provide technical, actionable guidelines for conducting life cycle assessments (LCAs) of cultivated meat and highlight further research needs for the field.
Methods: We assembled a global team of recognized and active scientists in cultivated meat LCA, livestock systems LCA, and ISO LCA standards to develop this set of guidelines using a workshop (in person and online) and online meetings, as well as asynchronous feedback, to reach consensus.
Results and discussion: These guidelines provide specifications throughout the four phases of LCA, from goal definition to the interpretation of LCA results. Data gaps, including the availability and quality of feed or food-grade culture media component inventories, are among the areas highlighted for further exploration.
Conclusion: We invite LCA practitioners to apply these guidelines when investigating cultivated meat systems to increase the consistency and reliability of environmental impact evaluations for these emerging products.
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164238</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Sharp Bound for the Erdős–Straus Non-averaging Set Problem</title>
<link>https://hdl.handle.net/1721.1/164237</link>
<description>Sharp Bound for the Erdős–Straus Non-averaging Set Problem
Pham, Huy T.; Zakharov, Dmitrii
A set of integers A is non-averaging if there is no element a in A which can be written as an average of a subset of A not containing a. We show that the largest non-averaging subset of {1, …, n} has size n^(1/4 + o(1)), thus solving the Erdős–Straus problem. We also determine the largest size of a non-averaging set in a d-dimensional box for any fixed d. Our main tools include the structure theorem for the set of subset sums due to Conlon, Fox and the first author, together with a result about the structure of a point set in nearly convex position.
</description>
<pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164237</guid>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Seasonal variations of the atmospheric muon neutrino spectrum measured with IceCube</title>
<link>https://hdl.handle.net/1721.1/164236</link>
<description>Seasonal variations of the atmospheric muon neutrino spectrum measured with IceCube
IceCube Collaboration
This study presents an analysis of seasonal variations in the atmospheric muon neutrino flux, using 11.3 years of data from the IceCube Neutrino Observatory. By leveraging a novel spectral unfolding method, we explore the energy range from 125 GeV to 10 TeV for zenith angles from 90° to 110°, corresponding to the Antarctic atmosphere. Our findings reveal that the differential measurement of the amplitudes of the seasonal variation is consistent with an energy-dependent decrease reaching (−4.5 ± 1.2)% during Austral winter and an increase reaching (+3.9 ± 1.3)% during Austral summer, relative to the annual average, at 10 TeV. While the unfolded flux exceeds the model predictions by up to 30%, the differential measurement of the seasonal to annual average flux remains unaffected. The measured seasonal variations of the muon neutrino spectrum are consistent with theoretical predictions using the MCEq code and the NRLMSISE-00 atmospheric model.
</description>
<pubDate>Mon, 01 Dec 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164236</guid>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Capacity lower bound for the Ising perceptron</title>
<link>https://hdl.handle.net/1721.1/164235</link>
<description>Capacity lower bound for the Ising perceptron
Ding, Jian; Sun, Nike
We consider the Ising perceptron with gaussian disorder, which is equivalent to the discrete cube {−1, +1}^N intersected by M random half-spaces. The perceptron’s capacity M_N is the largest integer M for which the intersection is nonempty. It is conjectured by Krauth and Mézard (1989) that the (random) ratio M_N/N converges in probability to an explicit constant α⋆ ≐ 0.83. Kim and Roche (1998) proved the existence of a positive constant γ such that γ ⩽ M_N/N ⩽ 1 − γ with high probability; see also Talagrand (1999). In this paper we show that the Krauth–Mézard conjecture α⋆ is a lower bound with positive probability, under the condition that an explicit univariate function S⋆(λ) is maximized at λ = 0. Our proof is an application of the second moment method to a certain slice of perceptron configurations, as selected by the so-called TAP (Thouless, Anderson, and Palmer, 1977) or AMP (approximate message passing) iteration, whose scaling limit has been characterized by Bayati and Montanari (2011) and Bolthausen (2012).
</description>
<pubDate>Sun, 23 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164235</guid>
<dc:date>2025-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling the UK’s regional economic inequality: binding constraints and avenues for policy intervention</title>
<link>https://hdl.handle.net/1721.1/164234</link>
<description>Tackling the UK’s regional economic inequality: binding constraints and avenues for policy intervention
Stansbury, Anna; Turner, Dan; Balls, Ed
We analyse binding constraints to productivity growth in the UK’s regions outside London and the greater South East. These analyses challenge a number of common arguments about the UK’s regional economic inequality problem. We find little evidence consistent with the hypotheses (i) that low shares of university graduates remain the primary constraint on growth for the UK’s regions; (ii) that there is a generalised issue with access to finance for firms outside the South East; or (iii) that low or falling regional migration rates are to blame for the persistence of the UK’s regional economic inequalities. Instead, we find evidence consistent with (i) a specific relative shortage of STEM degrees; (ii) binding transport infrastructure constraints within major non-London conurbations; (iii) a failure of public innovation policy to support clusters beyond the South East, in particular through the regional distribution of public support for Research and Development (R&amp;D); and (iv) missed opportunities for higher internal mobility due to London’s overheating housing market. We also find some suggestive evidence consistent with constraints on access to early-stage equity financing for high-growth-potential SMEs in certain regions.
</description>
<pubDate>Mon, 14 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164234</guid>
<dc:date>2023-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Chile’s Inclusion Law: the arduous drive to regulate an unequal education system, 2006–19</title>
<link>https://hdl.handle.net/1721.1/164233</link>
<description>Chile’s Inclusion Law: the arduous drive to regulate an unequal education system, 2006–19
Cummings, Peter MM; Mizala, Alejandra; Schneider, Ben Ross
Chile’s Inclusion Law, passed in 2015, significantly increased government regulation of one of the most privatised education systems in the world and provided major redistributive benefits. How did Chile’s government succeed in passing and implementing this legislation in the face of a powerful and cohesive opposition? Our study finds that student protesters served as the initial impetus, shaping the education debate and increasing the political salience and urgency of education reform. In line with power resource theory, other left movement organisations and voters used their power to support redistributive education reform, and Bachelet’s centre-left coalition followed through on its mandate by proposing the Inclusion Law. Also, a well-connected policy network helped articulate problems with the status quo and shaped the specifics of the education bill. To develop this argument, the paper draws on historical information on the student movement in Chile, quantitative data on education stakeholder appearances in the press, public opinion surveys, and detailed analysis of the 13-month legislative proceedings – to explain the law’s passage in congress. To underscore the significance of the Inclusion Law and to contextualise the Chilean case, the paper also compares Chile to other countries with nation-wide school choice systems.
</description>
<pubDate>Wed, 16 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164233</guid>
<dc:date>2025-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Network–Based Fault Diagnostic System for Nuclear Power Plant Assets</title>
<link>https://hdl.handle.net/1721.1/164232</link>
<description>Bayesian Network–Based Fault Diagnostic System for Nuclear Power Plant Assets
Zhao, Xingang; Wang, Xinyan; Golay, Michael W
Future advances in nuclear power technologies call for enhanced operator advice and autonomous control capabilities that can leverage simpler designs and increased safety features to reduce reliance on human labor. One of the first tasks in the development of such capabilities is the formulation of symptom-based conditional failure probabilities for the plant structures, systems, and components (SSCs) of interest. The primary goal is to aid plant personnel in (1) deducing the probabilistic performance status of the monitored SSCs and (2) detecting impending faults/failures. The task of estimating conditional failure probability is a bidirectional inference problem, and a logical approach is to use the Bayesian network (BN) method. As a knowledge-based explainable artificial intelligence tool and a probabilistic graphical model, BN offers reasoning capability under uncertainty, graphical representation emulating physical behavior of the target SSC, and interpretability of the model structure and results. This paper provides a systematic overview of the BN technique and the software tools for implementing BN models, along with the associated knowledge representation and reasoning paradigm. Both operational data and expert judgment can be readily incorporated into the knowledge base of a BN model. The challenges with data availability are highlighted, and the general approach to target SSC identification is presented. The focus is on failure-prone and risk-important balance of plant assets, especially for cases with strong operator involvement. Two example case studies on the failure of (1) a centrifugal pump and (2) an electric motor are conducted to demonstrate the usefulness and technical feasibility of the proposed BN-based fault diagnostic system using an expert system shell.
</description>
<pubDate>Sat, 04 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164232</guid>
<dc:date>2023-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Your body tells more than words – predicting perceived meeting productivity through body signals</title>
<link>https://hdl.handle.net/1721.1/164231</link>
<description>Your body tells more than words – predicting perceived meeting productivity through body signals
Zeyda, Maximilian; Stracke, Selina; Knipfer, Kristin; Gloor, Peter A
The productivity of work meetings is mostly assessed through post-hoc questionnaires. These questionnaires are impractical as they require additional time after the meeting has ended. Thus, measuring meeting productivity in a non-intrusive manner is of practical and theoretical importance. Extending research on physiological arousal and the healthy physiological variability thesis to the context of work meetings, we take a novel approach and investigate whether physiological arousal and the variability in implicit body signals of meeting participants (heart rate, arm movements, and speech intensity) can be accurate predictors of perceived meeting productivity. In a preliminary field study, we used smartwatches and tracked the body signals of 16 team members in 26 team meetings. The perceived meeting productivity was assessed at the end of the meetings. Partly supporting our assumptions, multilevel analysis showed that the variance in arm acceleration was a significant predictor of perceived meeting productivity. Further, using a random forest classifier, we accurately predicted perceived meeting productivity in roughly 60% of the cases with body signals. This study adds to previous work on meeting effectiveness by tapping into the potential of wearables to provide valid information about perceived meeting productivity. Cultivating our findings, we discuss lessons learned for future research.
</description>
<pubDate>Sun, 03 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164231</guid>
<dc:date>2024-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Phonon Sampling Method for Inelastic Thermal Neutron Scattering Events</title>
<link>https://hdl.handle.net/1721.1/164230</link>
<description>Phonon Sampling Method for Inelastic Thermal Neutron Scattering Events
Trainer, Amelia; Forget, Benoit
Accurate representation of thermal neutron scattering in Monte Carlo transport simulations requires that the molecular vibrations of the target material be accounted for. Historically, this has been achieved by precomputing large multidimensional tables that are a function of temperature and the cosine of the scattering angle, as well as incoming and outgoing neutron energy. Most commonly used sampling techniques for thermal neutron scattering rely on large multidimensional tables, where higher resolution results in an increase in required memory and attempts to reduce memory can result in grid coarseness errors. An alternative sampling method is introduced here that is a significant departure from precomputed tables and instead relies on a more physical model of the scattering behavior. The phonon sampling method classifies neutron scattering events by the number of phonons excited/de-excited during the scattering collision. In doing so, energy exchange may be obtained via rejection sampling, and an analytical representation of the momentum exchange is obtained. This sampling method has been tested on graphite, yttrium hydride, and uranium nitride, and preliminary implementation of the phonon sampling method shows accurate results for angular and energy distributions, though resulting in up to a 40% slowdown in overall calculation time. This notable slowdown is countered, however, by a large reduction in storage (over 99% reduction compared to standard multidimensional tables).
</description>
<pubDate>Thu, 03 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164230</guid>
<dc:date>2023-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of Engineering Turbulence Models in Buoyant Diabatic Turbulent Flow</title>
<link>https://hdl.handle.net/1721.1/164229</link>
<description>Assessment of Engineering Turbulence Models in Buoyant Diabatic Turbulent Flow
Wiser, Ralph; Baglietto, Emilio
Turbulent heat transfer in buoyancy-dominated flows is a challenging problem for computational fluid dynamics (CFD). Many authors attribute model error in these conditions to the Reynolds analogy. We leverage a brand-new direct numerical simulation database to evaluate the performance of several popular turbulence models in buoyant diabatic channel flow. We find that heat transfer results are relatively accurate, with a Nusselt number error less than 20%. However, the turbulent flow solution is very inaccurate, with wall shear overpredicted by up to 100%. This indicates significant turbulence model error in such flows. We determined that the dominant sources of model error are missing physics in the algebraic Reynolds stress framework and the simple buoyancy production term used in industrial CFD. We suggest that future modeling efforts focus on these two sources of model error. We demonstrate that the Reynolds analogy is not the dominant source of model error.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164229</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>The insurgent smart city: How a social movement created an alternative imaginary of the smart city</title>
<link>https://hdl.handle.net/1721.1/164228</link>
<description>The insurgent smart city: How a social movement created an alternative imaginary of the smart city
Stokols, Andrew
Urban scholars have critiqued smart cities for their association with neoliberal governance, narrow focus on quantifiable aspects of urban systems, and failure to incorporate citizens’ needs or aspirations. The “smart city” remains a contested concept and as such is subject to reappropriation. Here, I analyze the case of an urban social movement, the 2019–2020 Hong Kong Anti-ELAB protests, as an alternative, “insurgent smart city.” Following from an earlier network analysis of Telegram channels used during the protests, I show how the communications system underpinning much of the protest action simultaneously enabled coordination while also remaining open to grassroots decision-making and innovations of new protest formats as the movement responded to countertactics of the state and police. Telegram channels linked neighborhood-based organizing to the citywide movement. These actions not only emulated but also inverted top-down visions of a total urban information system underpinning many smart city projects. Framing the Hong Kong Anti-ELAB protests as an insurgent smart city offers an alternative sociotechnical imaginary of what smart cities could be, and raises possibilities for an “insurgent digital citizenship” as an alternative to both state and platform-mediated forms of digital citizenship.
</description>
<pubDate>Mon, 17 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164228</guid>
<dc:date>2023-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>The evolution of global cybersecurity norms in the digital age: A longitudinal study of the cybersecurity norm development process</title>
<link>https://hdl.handle.net/1721.1/164227</link>
<description>The evolution of global cybersecurity norms in the digital age: A longitudinal study of the cybersecurity norm development process
Madnick, Benjamin; Huang, Keman; Madnick, Stuart
Developing cybersecurity norms and global normative cybersecurity behaviors play an increasingly critical role in global cybersecurity governance. This paper takes a longitudinal approach to analyze cybersecurity norms development activities during the period 1997–2020. A total of 206 individual cases were collected, and 233 individual cybersecurity norms were identified and compiled into 25 subject categories. Categorizing the norm subjects alongside the frequency of cases and norms identified each year allowed for a longitudinal view of cyber norm activities and the evolution in developments over these years. This examination enables us to categorize cybersecurity norms, including their dynamic focus and evolution patterns. By studying those viewed as “successful,” we gain guidance regarding the construction of global cybersecurity governance in the digital age.
</description>
<pubDate>Fri, 03 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164227</guid>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>Anthropology Has One Job (On Genocide in the United States)</title>
<link>https://hdl.handle.net/1721.1/164226</link>
<description>Anthropology Has One Job (On Genocide in the United States)
Lowry, David Shane
In an introductory anthropology course, the instructor might provide a definition of anthropology similar to this: “Anthropology is the most scientific of the humanities, and it is the most humanistic of the sciences.” If something like that is said, it stems from a statement in Anthropology, a 1964 book by famed anthropologist Eric Wolf in which he attempted to define the discipline. Wolf’s approach came at a time when many anthropologists were attempting to intervene in the historical telling of the world. In particular, Wolf argued that non-Europeans were also participants in global, colonial processes. The value of Wolf’s voice—indeed, the value of most anthropology at the time—was that it offered a wide-scale view of human events for which the anthropologist was merely an observer, hence not responsible.
</description>
<pubDate>Mon, 02 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164226</guid>
<dc:date>2023-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Online Bidding under RoS Constraints without Knowing the Value</title>
<link>https://hdl.handle.net/1721.1/164225</link>
<description>Online Bidding under RoS Constraints without Knowing the Value
Vijayan, Sushant; Feng, Zhe; Padmanabhan, Swati; Shanmugam, Karthikeyan; Suggala, Arun; Wang, Di
We consider the problem of bidding in online advertising, where an advertiser aims to maximize value while adhering to budget and Return-on-Spend (RoS) constraints. Unlike prior work that assumes knowledge of the value generated by winning each impression (e.g., conversions), we address the more realistic setting where the advertiser must simultaneously learn the optimal bidding strategy and the value of each impression opportunity. This introduces a challenging exploration-exploitation dilemma: the advertiser must balance exploring different bids to estimate impression values with exploiting current knowledge to bid effectively. To address this, we propose a novel Upper Confidence Bound (UCB)-style algorithm that carefully manages this trade-off. Via a rigorous theoretical analysis, we prove that our algorithm achieves Õ(√T log(|B|T)) regret and constraint violation, where T is the number of bidding rounds and B is the domain of possible bids. This establishes the first optimal regret and constraint violation bounds for bidding in the online setting with unknown impression values. Moreover, our algorithm is computationally efficient and simple to implement. We validate our theoretical findings through experiments on synthetic data, demonstrating that our algorithm exhibits strong empirical performance compared to existing approaches.
WWW ’25, April 28–May 2, 2025, Sydney, NSW, Australia.
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164225</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Tactile Vega-Lite: Rapidly Prototyping Tactile Charts with Smart Defaults</title>
<link>https://hdl.handle.net/1721.1/164224</link>
<description>Tactile Vega-Lite: Rapidly Prototyping Tactile Charts with Smart Defaults
Chen, Mengzhu (Katie); Pedraza Pineros, Isabella; Satyanarayan, Arvind; Zong, Jonathan
Tactile charts are essential for conveying data to blind and low vision (BLV) readers but are difficult for designers to construct. Non-expert designers face barriers to entry due to complex guidelines, while experts struggle with fragmented and time-consuming workflows that involve extensive customization. Inspired by formative interviews with expert tactile graphics designers, we created Tactile Vega-Lite (TVL): an extension of Vega-Lite that offers tactile-specific abstractions and synthesizes existing guidelines into a series of smart defaults. Predefined stylistic choices enable non-experts to produce guideline-compliant tactile charts quickly. Expert users can override defaults to tailor customizations for their intended audience. In a user study with 12 tactile graphics creators, we show that Tactile Vega-Lite enhances flexibility and consistency by automating tasks like adjusting spacing and translating braille while accelerating iterations through pre-defined textures and line styles. Through expert critique, we also learn more about tactile chart design best practices and design decisions.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164224</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Everyday Perceptual and Physiological Augmentation</title>
<link>https://hdl.handle.net/1721.1/164223</link>
<description>Toward Everyday Perceptual and Physiological Augmentation
Tao, Yujie; Gemicioglu, Tan; Chin, Sam; Huang, Bingjian; Brooks, Jas; Follmer, Sean; Lopes, Pedro; Nanayakkara, Suranga
Human senses are fundamental to how we interpret and interact with the world. Computing devices are increasingly coupled with the human sensory system through interfaces such as smart glasses, earbuds, and wristbands. This opens up opportunities to dynamically mediate, modify, and augment perceptual experiences and physiological processes through multisensory stimulation. These devices go beyond assistive technologies designed for individuals with sensory impairments (e.g., hearing aids) and are now available for everyday use. Applications range from enriching immersive entertainment experiences to supporting well-being through multisensory interventions.
The UIST community has been a key venue for introducing many proof-of-concept prototypes in multisensory stimulation. However, gaps remain in systematically understanding how such technologies can be designed, studied, and contextualized in long-term, everyday use. This workshop will examine barriers to transitioning prototypes from proof-of-concepts into systems for real-world use. The session will feature keynote talks, demo sessions, and an interactive device-swap activity where participants exchange and wear different devices during the afternoon session, and conclude with an open discussion to develop implementation frameworks.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164223</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>GyFoam: Fabricating Lattice Foam with Customizable Stiffness through Uniform Expansion</title>
<link>https://hdl.handle.net/1721.1/164222</link>
<description>GyFoam: Fabricating Lattice Foam with Customizable Stiffness through Uniform Expansion
Wang, Guanyun; Chen, Haotian; Wang, Yufeng; Li, Songyun; Tao, Yue; Qi, Fanke; Cao, Lizhuo; Jin, Xiao; Tao, Ye; Li, Jiaji
We present GyFoam, a fabrication method that integrates foam material with lattice structures to enable controlled and uniform expansion, supporting high-quality forming in appearance and customizable stiffness in function, using standard 3D printers, filaments, commercially available Thermo-Expandable Microspheres, and silicone. To achieve customizable stiffness, we propose two methods: modifying material concentration and adjusting lattice structural parameters. Additionally, we propose three shape control strategies for creating complex shapes: bending, wavy edges, and internal doming. Furthermore, a user-friendly design tool is provided for users to construct lattice structures, preview basic deformation, and generate mold models for printing. Finally, through a series of applications, we validate GyFoam’s practical use in fabricating large objects and wearable products, enabling flexible interactions, and creating aesthetic designs.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164222</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>EmbroChet: A Hybrid Textile Fabrication Approach for 3D Personalized Handicraft via Heat-Shrinking</title>
<link>https://hdl.handle.net/1721.1/164221</link>
<description>EmbroChet: A Hybrid Textile Fabrication Approach for 3D Personalized Handicraft via Heat-Shrinking
Wang, Guanyun; Wang, Zhiqi; Li, Fanyu; Liu, Qinyang; Dong, Tianshu; Hong, Zixiang; Li, Xinyi; Zhu, Kuangqi; Li, Jiaji; Zhao, Xiaoliang; Tao, Ye
We propose EmbroChet, a hybrid approach that bridges digital fabrication and textile craftsmanship, empowering individuals unfamiliar with intricate craft techniques to design and fabricate 3D textile handicrafts intuitively. EmbroChet allows the creation of handicrafts by embroidering chain stitches (a fundamental embroidery technique) onto a heat-shrinkable film, which subsequently self-transforms from a 2D composite to a 3D textile through a freely controllable heat-triggering process. Through a single stitch type, the method enables custom designs and intricate geometries to be achieved without the complex manual skills that typically require expertise in many different stitches. To better demonstrate EmbroChet, we propose a design tool that includes shape-changing libraries to assist users in customizing 3D shapes. The evaluation demonstrates its unique strength in balancing geometric complexity and textile softness. Furthermore, our workshop verifies the feasibility of EmbroChet, exploring its potential for personalized textile fabrication and synergizing the precision of digital fabrication with the tactile artistry of textile craftsmanship.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164221</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-antenna: Mechanically Frequency Reconfigurable Metamaterial Antennas</title>
<link>https://hdl.handle.net/1721.1/164220</link>
<description>Meta-antenna: Mechanically Frequency Reconfigurable Metamaterial Antennas
AlAlawi, Marwa; Zheng, Regina; Ahn, Sooyeon; Yan, Katherine; Sethapakdi, Ticha; Zhu, Junyi; Mueller, Stefanie
We introduce Meta-antenna, a design and fabrication pipeline for creating frequency reconfigurable antennas while making use of a single type of mechanical metamaterial structure. Unlike traditional static antenna systems with fixed radiation patterns and frequency responses per geometry, Meta-antenna leverages mechanical reconfiguration to alter the radiation and geometry characteristics of the antenna, making it more versatile for sensing and communication. Meta-antenna provides a design space of resonance frequencies from 500 MHz to 6.3 GHz (≥ 10 dB) upon the structure’s compression, bending, or rotation. Additionally, we provide an Ansys-based editor that allows users to generate metamaterial antenna geometries and simulate their resonance frequency. We also provide a code template for Meta-antenna-based sensing interactions. Our technical evaluation demonstrates that our fabricated Meta-antenna structures remain functional even after 10,000 compression cycles. Finally, we contribute three example applications showcasing Meta-antenna’s potential in adaptive personal devices, smart home systems, and tangible user interfaces.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164220</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Tailor-Making for Personalized, Shape-changing, and Sustainable Fabrics</title>
<link>https://hdl.handle.net/1721.1/164219</link>
<description>Computational Tailor-Making for Personalized, Shape-changing, and Sustainable Fabrics
Narumi, Koya; Hirose, Yuichi; Lee, Hsuanling; Larsson, Maria; He, Liang; Leake, Mackenzie; Forman, Jack; Farahi, Behnaz; Yao, Lining; Igarashi, Takeo
Fabrics are fundamental elements of our daily lives, which are woven, knitted, or embroidered into diverse products like clothing and furniture. Recent advances in materials science and digital fabrication have enabled us to fabricate personalized and responsive fabric products computationally and interactively, which we call “computational tailor-making.” In this workshop, we will build an interdisciplinary network of researchers on computational tailor-making and discuss (1) computational fabric design, (2) novel fabric fabrication tools, (3) shape-changing fabrics, and (4) sustainable fabric production, from the viewpoint of HCI. The workshop session will help attendees build a shared vision, recognize potential challenges, find unexpected solutions and ideas, collaborate beyond disciplines, and explore the possible connection to industries.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164219</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Ori-TENG: 3D Printed Origami Tessellations as Triboelectric Nanogenerators for Self-powered Sensing and Energy Harvesting</title>
<link>https://hdl.handle.net/1721.1/164218</link>
<description>Ori-TENG: 3D Printed Origami Tessellations as Triboelectric Nanogenerators for Self-powered Sensing and Energy Harvesting
AlAlawi, Marwa; Wang, Kexin; Zheng, Regina; Chan, Adelene; Feick, Martin; Mueller, Stefanie
We introduce Ori-TENG, a design and fabrication framework for 3D printed origami tessellations that function as triboelectric sensors and energy harvesters. Ori-TENG structures are 3D printed flat in a single step, then folded, with internal electrical routing optimized for both folding mechanics and triboelectric performance.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164218</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>GreenMix: Energy-Efficient Serverless Computing via Randomized Sketching on Asymmetric Multi-Cores</title>
<link>https://hdl.handle.net/1721.1/164217</link>
<description>GreenMix: Energy-Efficient Serverless Computing via Randomized Sketching on Asymmetric Multi-Cores
Basu Roy, Rohan; Patel, Tirthak; Li, Baolin; Samsi, Siddharth; Gadepally, Vijay; Tiwari, Devesh
GreenMix is motivated by the renewed interest in asymmetric multi-core processors and the emergence of the serverless computing model. Asymmetric multi-cores offer better energy and performance trade-offs by placing different core types on the same die. However, existing serverless scheduling techniques do not leverage these benefits. GreenMix is the first serverless work to reduce energy and serverless keep-alive costs while meeting QoS targets by leveraging asymmetric multi-cores. GreenMix employs randomized sketching, tailored for serverless execution and keep-alive, to perform within 10% of the optimal solution in terms of energy efficiency and keep-alive cost reduction. GreenMix’s effectiveness is demonstrated through evaluations on clusters of ARM big.LITTLE and Intel Alder Lake asymmetric processors. It outperforms competing state-of-the-art schedulers, offering a novel approach for energy-efficient serverless computing.
SC ’25, St Louis, MO, USA
</description>
<pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164217</guid>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable and Low Power Localization for Underwater Robots</title>
<link>https://hdl.handle.net/1721.1/164216</link>
<description>Scalable and Low Power Localization for Underwater Robots
Afzal, Sayed Saad; Rademacher, Jack; Chen, Weitung; Wang, Purui; Adib, Fadel
Localization is a critical task for underwater robots, yet today’s underwater localization systems are limited by their accuracy, scalability, and/or energy consumption (i.e., longevity). We present the design, implementation, and evaluation of EchoBLUE, an accurate, scalable, and low-power localization system for underwater robots. In EchoBLUE, an underwater robot transmits SONAR-style (FMCW) signals and leverages ultra-low-power underwater backscatter nodes as location anchors. EchoBLUE’s design introduces two key innovations. The first is a novel doppler compensation mechanism that enables it to accurately self-localize under mobility: the technique employs a cross-chirp mechanism that exploits the quad-band nature of the resulting backscatter response to overcome the range-doppler ambiguity. Second, it introduces the first semi-active retrodirective underwater backscatter design and uses it for location anchors; this design achieves wide bandwidth to backscatter the full FMCW signal, enabling fine-grained localization. We implemented a proof-of-concept prototype of EchoBLUE by building a base station mounted on a BlueROV2 underwater robot and custom-designed low-power retrodirective location anchors deployed in a pool. Our evaluation across 700 real-world trials demonstrates that EchoBLUE achieves a median 3D localization accuracy of 28 cm and a 90th percentile of 48 cm. Moreover, these anchors consume only 740 μW for semi-active backscatter, paving the way for truly low-power and scalable underwater localization.
</description>
<pubDate>Fri, 21 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164216</guid>
<dc:date>2025-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>A polyurethane-urea elastomer at low to extreme strain rates</title>
<link>https://hdl.handle.net/1721.1/164215</link>
<description>A polyurethane-urea elastomer at low to extreme strain rates
Lee, Jaehee; Veysset, David; Hsieh, Alex J; Rutledge, Gregory C; Cho, Hansohl
A finite strain nonlinear constitutive model is presented to study the extreme mechanical behavior of a polyurethane-urea (PUU) elastomer well suited for many engineering applications. The micromechanically- and thermodynamically-based constitutive model captures salient features of resilience and dissipation in the material from low to extreme strain rates. The extreme deformation features are further elucidated by laser-induced micro-particle impact tests on the material, in which ultrafast strain rates (&gt;10⁶ s⁻¹) occur. Numerical simulations of the strongly inhomogeneous deformation events are in good agreement with the experimental data, supporting the predictive capabilities of the constitutive model for the extreme deformation features of the PUU material over at least 9 orders of magnitude in strain rate (10⁻³ to 10⁶ s⁻¹).
</description>
<pubDate>Fri, 15 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164215</guid>
<dc:date>2023-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular simulation of flow-enhanced nucleation of polyethylene crystallites in biaxial flows</title>
<link>https://hdl.handle.net/1721.1/164214</link>
<description>Molecular simulation of flow-enhanced nucleation of polyethylene crystallites in biaxial flows
Gangal, Chinmay S; Rutledge, Gregory C
Flow-enhanced nucleation (FEN) of n-pentacontahectane (C150) under biaxial extensional flows of varying strain rate ratios is studied using nonequilibrium molecular dynamics simulation. The nucleation rates thus calculated are used to test previously published FEN models based on invariants of the conformation tensor of Kuhn segments and the extra stress tensor. Models based on the conformation tensor provide a more accurate description of FEN observed in biaxial flow simulations than those based on the extra stress tensor. In addition, the formation of nematic domains previously reported to be stabilized by shear or extensional flow is absent in equibiaxial flows. However, such domains do form in non-equibiaxial flows, and nucleation occurs in these domains preferentially. The shape and orientation of nuclei formed under biaxial flows of various strengths and strain rate ratios are also reported.
</description>
<pubDate>Wed, 17 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164214</guid>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Cholesterol Nanofiber Patches with Sustainable Oil Delivery Eliminate Inflammation in Atopic Skin</title>
<link>https://hdl.handle.net/1721.1/164213</link>
<description>Cholesterol Nanofiber Patches with Sustainable Oil Delivery Eliminate Inflammation in Atopic Skin
Sroczyk, Ewa A; Tarasiuk, Aleksandra; Talar, Marcin; Rutledge, Gregory C; Makaro, Adam; Misztal, Zofia; Wołyniak, Maria; Berniak, Krzysztof; Sałaga, Maciej; Fichna, Jakub; Stachewicz, Urszula
Atopic skin is dry and itchy and lacks integrity. Impaired skin barrier results from altered lipid composition of the skin. A crucial skin lipid, cholesterol, provides flexibility and homeostasis of the cell membranes' lipid bilayer. Cholesterol-based creams and natural oils, especially blackcurrant seed oil, are beneficial for skin care as they hydrate the skin and improve its integrity. The major atopic symptom, skin dryness, can be overcome by the application of porous patches enhanced with cholesterol and natural oil. The base of the patches is constructed of polyimide (PI) nanofibers with cholesterol coatings and externally added blackcurrant seed oil. The presence of cholesterol in PI mats hinders the passage of oil through the patches to the skin, resulting in sustained and prolonged skin hydration. The theoretical and numerical investigations of oil dynamics in porous mats confirmed the experimental results, showing a prolonged skin hydration effect up to 6 h. Additionally, as demonstrated by in vivo tests on atopic mice, cholesterol patches lower serum immunoglobulin E levels and expression of proinflammatory cytokines in the skin, thereby accelerating skin healing. Our results hold great promise for the long-term application of the patches in atopic dermatitis treatment.
</description>
<pubDate>Fri, 12 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164213</guid>
<dc:date>2024-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>The Polls and the U.S. Presidential Election in 2020 …. and 2024</title>
<link>https://hdl.handle.net/1721.1/164212</link>
<description>The Polls and the U.S. Presidential Election in 2020 …. and 2024
Barnett, Arnold; Sarfati, Arnaud
Arguably, the single greatest determinant of U.S. public policy is the identity of the president. And if trusted, polls not only provide forecasts about presidential-election outcomes but can act to shape those outcomes. Looking ahead to the 2024 U.S. presidential election and recognizing that polls before the 2020 presidential election were sharply criticized, we consider whether such harsh assessments are warranted. Initially, we explore whether such polls as processed by the sophisticated aggregator FiveThirtyEight successfully forecast actual 2020 state-by-state outcomes. We evaluate FiveThirtyEight’s forecasts using customized statistical methods not used previously, methods that take account of likely correlations among election outcomes in similar states. We find that, taken together, the pollsters and FiveThirtyEight did an excellent job in predicting who would win in individual states, even those “tipping point” states where forecasting is more difficult. However, we also find that FiveThirtyEight underestimated Donald Trump’s vote shares by state to a modest but statistically significant extent. We further consider how the polls performed when the more primitive aggregator Real Clear Politics combined their results, and then how well single statewide polls performed without aggregation. It emerges that both Real Clear Politics and the individual polls fared surprisingly well.
</description>
<pubDate>Tue, 30 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164212</guid>
<dc:date>2023-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>The Profession’s Vanguards: Arab Architects and Regional Architectural Exchange, 1900–50</title>
<link>https://hdl.handle.net/1721.1/164211</link>
<description>The Profession’s Vanguards: Arab Architects and Regional Architectural Exchange, 1900–50
Abusaada, Nadi
Writings on architecture in the Middle East during the first half of the twentieth century have often focused on the legacies of colonial architects and planners in shaping Middle Eastern cities and built environments. By contrast, this article focuses on the overlooked history of the first milieu of trained Arab architects in the Middle East, with particular attention to Palestine, Syria, Lebanon and Egypt. Examining unstudied historical materials and archives, it maps out the trajectories of individual architects as well as the architectural profession more generally in this period of rapid change. It is divided into three main sections that highlight this: first, architecture’s transition from the Ottoman guild system to its professionalisation by the turn of the century; second, the mobility of architectural knowledge and expertise in the Arab region following the First World War; and finally, the development of a new institutionalised architectural culture that sought to cultivate bonds between Arab architects not only in their individual countries but also regionally throughout the Arab world towards the mid-twentieth century.
</description>
<pubDate>Wed, 19 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164211</guid>
<dc:date>2023-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Fossil fuel divestment and public climate change policy preferences: an experimental test in three countries</title>
<link>https://hdl.handle.net/1721.1/164210</link>
<description>Fossil fuel divestment and public climate change policy preferences: an experimental test in three countries
Schwartz, Joshua A.; Lendway, Paul; Nuri, Abolfazl
Divestment is a prominent strategy championed by activists to induce positive social change. For example, the current fossil fuel divestment movement includes over 1,500 institutions that control $40 trillion in assets. A primary pathway through which divestment is theorized to be effective is by influencing public beliefs and policy preferences, thus pressuring policymakers to take action. However, prior research only tests this argument via qualitative case studies. We assess the impact of exposure to information about fossil fuel divestment on public opinion through the use of national survey experiments in three major greenhouse gas emitters: the U.S., India, and South Africa. We find surprisingly little evidence that exposure to information about the fossil fuel divestment movement can increase public support for policies that address climate change. Our findings suggest that divestment movements may be less effective at changing beliefs and policy preferences than previously realized.
</description>
<pubDate>Sun, 26 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164210</guid>
<dc:date>2023-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>“Sculptress Interprets Land’s Spirit”: Elizabeth Wyn Wood, the Group of Seven, and analogy as equivalence</title>
<link>https://hdl.handle.net/1721.1/164209</link>
<description>“Sculptress Interprets Land’s Spirit”: Elizabeth Wyn Wood, the Group of Seven, and analogy as equivalence
Nikčević, Hana
Canadian sculptor Elizabeth Wyn Wood (1903–66), best known for her modernist landscape sculptures, has since the inception of her artistic career been compared, through analogy, with the Group of Seven (fl. 1920–33), Canada’s enduringly famous and overtly nationalistic collective of modernist landscape painters. Critics claimed that Wood “achieved for sculpture what the Group of Seven achieved for painting” and, occasionally, invoked specific Group artists, dubbing Wood the “Lawren Harris of sculpture.” Analogizing across disciplines, the Wood/Group likening appears to posit a formal comparison in gendered language: the Group’s bold, decorative portrayals of the northern Ontario “wilderness” find clear visual comparands in Wood’s abstracted compositions of the same region. In this article, however, I demonstrate that the apparently visual basis for the comparison is inextricable from the textual discourse fundamental to Canadian art in the early twentieth century and beyond; it is only through analyzing this discourse that an understanding of the Wood/Group analogy can be reached. The Group ostensibly pioneered the first genuine Canadian landscape aesthetic; through immersing himself in the land, the mythology went, the Canadian artist learned to paint Canada on its own terms. This landscape artist-as-woodsman myth was a form of settler indigenization by which Canada laid cultural claim to colonized land. Analogy frames Wood as not an epigone but an equal of the Group: in producing organically, anew, a genuine Canadian landscape aesthetic for sculpture, Wood “achieved for sculpture what the Group of Seven achieved for painting”—its deployment as a medium in the service of Canada’s land claim.
</description>
<pubDate>Tue, 26 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164209</guid>
<dc:date>2023-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>A Landscape “Difficult to Describe”: The Model Village and the Capital City</title>
<link>https://hdl.handle.net/1721.1/164208</link>
<description>A Landscape “Difficult to Describe”: The Model Village and the Capital City
Springstubb, Phoebe
In mid-twentieth-century Punjab, grassroots development projects sought to modernize the countryside by decentralizing power to villages. The capital city Chandigarh, built in the same period, seems to represent the opposite: a national symbol of a newly independent India’s centralized power. Yet, this article argues, rural and urban were reciprocal and volatile counterparts. Through the work of M.S. Randhawa, it reorients analysis of Chandigarh to reveal how the materiality of landscape itself was a medium for territorial planning, indelibly linking—and managing the distinctions between—city and countryside. A botanist and civil servant, Randhawa used landscape to realize modernizing agendas and to constrain social change in projects from model villages and a “bioaesthetic” plan for the city to new land-grant universities that ushered in the Green Revolution’s industrialized agriculture. His work offers a revisionist history of development’s practitioners and periodization. It shows how an uneven fabric of late-colonial rural uplift shaped the contours of postcolonial, state-directed agrarian transformation. Following the civil servant in the landscape, this article calls for the grounding of abstract theories like development and state formation in histories of their local inflections.
</description>
<pubDate>Wed, 22 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164208</guid>
<dc:date>2023-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Policy Search through Genetic Programming and LLM-assisted Curriculum Learning</title>
<link>https://hdl.handle.net/1721.1/164207</link>
<description>Policy Search through Genetic Programming and LLM-assisted Curriculum Learning
Jorgensen, Steven; Nadizar, Giorgia; Pietropolli, Gloria; Manzoni, Luca; Medvet, Eric; O'Reilly, Una-May; Hemberg, Erik
Curriculum learning (CL) consists of using a diverse set of user-provided test cases, with varying levels of difficulty and organized in a suitable progression, for learning a policy. The quality of test cases is important to allow optimization techniques such as genetic programming (GP) to solve policy search problems. In this work, we evaluate large language models (LLMs) as providers of test cases for GP-based policy search. We consider two policy search tasks, a single-player and a multi-player game, and four LLMs differing in complexity and specialization, which we prompt in order to generate suitable test cases for the two games. We experimentally assess the intrinsic quality of LLM-generated test cases and their utility when inserted in a curriculum consumed by a GP optimization. We evaluate the robustness of the approach with respect to the way cases are scheduled in curricula and with respect to the policy representation, for which we use both graphs and linear programs evolved by GP. We observe that the effectiveness of LLM-assisted CL depends on both the choice of LLM and the design of the prompting and scheduling strategies. These findings highlight important considerations for leveraging LLMs in automated curriculum design for GP-based optimization.
</description>
<pubDate>Fri, 31 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164207</guid>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Biharmonic Skinning Using Geometric Fields</title>
<link>https://hdl.handle.net/1721.1/164206</link>
<description>Robust Biharmonic Skinning Using Geometric Fields
Dodik, Ana; Sitzmann, Vincent; Solomon, Justin; Stein, Oded
Bounded biharmonic weights are a popular tool used to rig and deform characters for animation, to compute reduced-order simulations, and to define feature descriptors for geometry processing. They necessitate tetrahedralizing the volume bounded by the surface, introducing the possibility of meshing artifacts or tetrahedralization failure. We introduce a mesh-free and robust automatic skinning technique that generates weights comparable to the current state of the art, but works reliably even on open surfaces, triangle soups, and point clouds where current methods fail. We achieve this through the use of a specialized Lagrangian representation enabled by the advent of hardware ray-tracing, which circumvents the need for finite elements while optimizing the biharmonic energy and enforcing boundary conditions. The flexibility of our formulation allows us to integrate artistic control through weight painting during the optimization. We offer a thorough qualitative and quantitative evaluation of our method.
</description>
<pubDate>Tue, 28 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164206</guid>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>SquareLoop: Explore Optimal Authentication Block Strategy for ML</title>
<link>https://hdl.handle.net/1721.1/164205</link>
<description>SquareLoop: Explore Optimal Authentication Block Strategy for ML
Strzeszynski, Jan; Tong, Jianming; Lee, Kyungmi; Xiong, Nathan; Parashar, Angshuman; Emer, Joel; Krishna, Tushar; Yan, Mengjia
Off-chip memory in ML accelerators is vulnerable to both hardware and software attacks, which necessitates encryption and authentication. Precise performance modeling of these protections requires (1) a representation of authentication blocks (AuthBlocks) that covers the full design space of shapes and orientations, and (2) precise memory behavior modeling, as encryption and authentication mainly increase memory traffic. This paper introduces S²Loop, a framework that resolves these challenges by introducing (1) flexible, all-level-partitioning-based AuthBlocks that ensure full coverage of the entire design space, (2) a realistic layout-based memory model, and (3) a Mapping-Layout-Authentication co-search algorithm that explores the drastic combinatorial design space to find the optimal mapping, layout, and AuthBlock shape for multi-layer workloads. SquareLoop’s detailed memory model helps find better mappings, achieving a 1.32× speedup on ResNet18 compared to the state-of-the-art SecureLoop, and our latency predictions are validated to within 7.3% of an RTL implementation. S²Loop also achieves up to 1.08×/1.82× overall speedup for authenticated ResNet18/MobileNet-V3 on various accelerators with AuthBlock and mapping co-search. We open-source S²Loop to provide a powerful and validated tool for designing efficient, secure accelerators at https://github.com/maeri-project/squareloop.
HASP 2025, Seoul, Republic of Korea
</description>
<pubDate>Sat, 18 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164205</guid>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>A Close Look at RMP Entry Caching and Its Security Implications in SEV-SNP</title>
<link>https://hdl.handle.net/1721.1/164204</link>
<description>A Close Look at RMP Entry Caching and Its Security Implications in SEV-SNP
Bagia, Alexis; Ulitzsch, Vincent; Trujillo, Daniël; Li, Mengyuan; Yan, Mengjia; Seifert, Jean-Pierre
AMD’s Secure Encrypted Virtualization (SEV) technology is a pivotal component in AMD server processors that boosts cloud computing security. It achieves this by offering transparent memory encryption and managing keys for protecting virtual machines (VMs), independently of the hypervisor’s trustworthiness. The latest iteration, SEV-Secure Nested Paging (SEV-SNP), introduces memory integrity protection through a data structure called the Reverse Map Table (RMP), which maps system physical addresses to guest physical addresses and tracks ownership of physical pages.

The RMP is maintained in a dedicated region in DRAM. As every memory write triggers a check against an RMP entry, caching RMP entries is crucial to alleviating the RMP’s performance impact. However, caching may create new security challenges, as it can introduce new microarchitectural side channels. In addition, maintaining cache coherence is crucial for the RMP’s security guarantees. However, so far, neither the details of the RMP’s caching behavior nor its security implications have been explored. This paper aims to fill this gap by conducting a systematic study of the RMP’s caching behavior. Through reverse engineering, we identify that the RMP is cached not only in the TLB, but also in the L1D and L2 data caches. Interestingly, this caching depends on the access type on Zen 5. We also uncover the mechanisms by which cache coherence across the TLB is enforced. We find that each update to the RMP table triggers a global TLB flush across all cores. Finally, we present several potential security implications and demonstrate that an attacker can exploit the RMP’s caching to leak physical address information. A user process can leak 6 bits of the Physical Frame Number (PFN) of its pages via the L1D cache within 2.5 µs per page, with success rates of 97% (Zen 4) and 99% (Zen 3 and Zen 5).
</description>
<pubDate>Sat, 18 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164204</guid>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Guarding LLM-aided Software Transformation Tasks via Component Exoskeletons</title>
<link>https://hdl.handle.net/1721.1/164203</link>
<description>Guarding LLM-aided Software Transformation Tasks via Component Exoskeletons
Lamprou, Evangelos; Kalhauge, Christian; Rinard, Martin; Vasilakis, Nikos
Large language models (LLMs) are achieving state-of-the-art results across a wide variety of software transformation tasks, including translating across languages and lifting opaque software components to high-level languages. Unfortunately, their results are often subtly incorrect, insecure, or underperformant, affecting the widespread deployment of these LLM-driven techniques in settings that go beyond the narrow scope of academic papers. This paper posits that such widespread deployment crucially depends on developing appropriate model guardrails for safeguarding the results of the transformation process. Such guardrails can be supported by component exoskeletons, tunable partial specifications extracted mostly automatically from the original, pre-transformed component. Exoskeletons serve as component projections that supplement, and often go through, the entire transformation process, confirming that the new, transformed component meets the original specifications. They show promise on several real-world scenarios and unearth exciting research directions.
PACMI ’25, October 13-16, 2025, Seoul, Republic of Korea
</description>
<pubDate>Mon, 13 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164203</guid>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>The Continuous Tensor Abstraction: Where Indices Are Real</title>
<link>https://hdl.handle.net/1721.1/164202</link>
<description>The Continuous Tensor Abstraction: Where Indices Are Real
Won, Jaeyeon; Ahrens, Willow; Collin, Teodoro Fields; Emer, Joel S.; Amarasinghe, Saman
This paper introduces the continuous tensor abstraction, allowing indices to take real-number values (e.g., A[3.14]). It also presents continuous tensor algebra expressions, such as C[x,y] = A[x,y] ∗ B[x,y], where indices are defined over a continuous domain. This work expands the traditional tensor model to include continuous tensors. Our implementation supports piecewise-constant tensors, on which infinite domains can be processed in finite time. We also introduce a new tensor format for efficient storage and a code generation technique for automatic kernel generation. For the first time, our abstraction expresses domains like computational geometry and computer graphics in the language of tensor programming. Our approach demonstrates performance competitive with or better than hand-optimized kernels in leading libraries across diverse applications. Compared to hand-implemented libraries on a CPU, our compiler-based implementation achieves an average speedup of 9.20× on 2D radius search with ∼60× fewer lines of code (LoC), 1.22× on genomic interval overlapping queries (with ∼18× LoC saving), and 1.69× on trilinear interpolation in Neural Radiance Field (with ∼6× LoC saving).
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164202</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>A Domain-Specific Probabilistic Programming Language for Reasoning about Reasoning (Or: A Memo on memo)</title>
<link>https://hdl.handle.net/1721.1/164201</link>
<description>A Domain-Specific Probabilistic Programming Language for Reasoning about Reasoning (Or: A Memo on memo)
Chandra, Kartik; Chen, Tony; Tenenbaum, Joshua B.; Ragan-Kelley, Jonathan
The human ability to think about thinking ("theory of mind") is a fundamental object of study in many disciplines. In recent decades, researchers across these disciplines have converged on a rich computational paradigm for modeling theory of mind, grounded in recursive probabilistic reasoning. However, practitioners often find programming in this paradigm challenging: first, because thinking-about-thinking is confusing for programmers, and second, because models are slow to run. This paper presents memo, a new domain-specific probabilistic programming language that overcomes these challenges: first, by providing specialized syntax and semantics for theory of mind, and second, by taking a unique approach to inference that scales well on modern hardware via array programming. memo enables practitioners to write dramatically faster models with much less code, and has already been adopted by several research groups.
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164201</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Pyrosome: Verified Compilation for Modular Metatheory</title>
<link>https://hdl.handle.net/1721.1/164200</link>
<description>Pyrosome: Verified Compilation for Modular Metatheory
Jamner, Dustin; Kammer, Gabriel; Nag, Ritam; Chlipala, Adam
We present Pyrosome, a generic framework for modular language metatheory that embodies a novel approach to extensible semantics and compilation, implemented in Coq. Common techniques for semantic reasoning are often tied to the specific structures of the languages and compilers that they support. Contextual equivalence is difficult to work with directly, and both logical relations and transition system-based approaches typically fix a specific notion of effect globally. While modular transition systems have been effective in imperative settings, they are suboptimal for functional code. These limitations restrict the extension and composition of semantics in these systems. In Pyrosome, verified compilers are fully extensible, meaning that to extend a language simply requires defining and verifying the compilation of the new feature, reusing the old correctness theorem for all other cases. The novel enabling idea is an inductive formulation of equivalence preservation that supports the addition of new rules to the source language, target language, and compiler.

Pyrosome defines a formal, deeply embedded notion of programming languages with semantics given by dependently sorted equational theories, so all compiler-correctness proofs boil down to type-checking and equational reasoning. We support vertical composition of any compilers expressed in our framework in addition to feature extension. Since our design requires compilers to support open programs, our correctness guarantees support linking with any target code of the appropriate type. As a case study, we present a multipass compiler from System F with simple references, through CPS translation and closure conversion. Specifically, we demonstrate how we can build such a compiler incrementally by starting with a compiler for simply typed lambda-calculus and adding natural numbers, the unit type, recursive functions, and a global heap, then extending judgments with a type environment and adding type abstraction, all while reusing the original theorems. We also present a linear version of the simply typed CPS pass and compile a small imperative language to the simply typed target to show how Pyrosome handles substructural typing and imperative features.
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164200</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>What You See Is What It Does: A Structural Pattern for Legible Software</title>
<link>https://hdl.handle.net/1721.1/164199</link>
<description>What You See Is What It Does: A Structural Pattern for Legible Software
Meng, Eagon; Jackson, Daniel
The opportunities offered by LLM coders (and their current limitations) demand a reevaluation of how software is structured. Software today is often “illegible”, lacking a direct correspondence between code and observed behavior, and insufficiently modular, leading to a failure of three key requirements of robust coding: incrementality (the ability to deliver small increments by making localized changes), integrity (avoiding breaking prior increments) and transparency (making clear what has changed at build time, and what actions have happened at runtime).
A new structural pattern offers improved legibility and modularity. Its elements are concepts and synchronizations: fully independent services and event-based rules that mediate between them. A domain-specific language for synchronizations allows behavioral features to be expressed in a granular and declarative way (and thus readily generated by an LLM). A case study of the RealWorld benchmark is used to illustrate and evaluate the approach.
Onward! ’25, Singapore, Singapore
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164199</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Gauguin, Descartes, Bayes: A Diurnal Golem’s Brain</title>
<link>https://hdl.handle.net/1721.1/164198</link>
<description>Gauguin, Descartes, Bayes: A Diurnal Golem’s Brain
Chandra, Kartik; Liu, Amanda; Ragan-Kelley, Jonathan; Tenenbaum, Joshua B.
A "quine" is a deterministic program that prints itself. In this essay, I will show you a "gauguine": a probabilistic program that infers itself. A gauguine is repeatedly asked to guess its own source code. Initially, its chances of guessing correctly are of course minuscule. But as the gauguine observes more and more of its own previous guesses, it detects patterns of behavior and gains information about its inner workings. This information allows it to bootstrap self-knowledge, and ultimately discover its own source code. We will discuss how—and why—we might write a gauguine, and what we stand to learn by constructing one.
Onward! ’25, Singapore, Singapore
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164198</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Low-Fidelity vs. High-Fidelity Spatial Design in Virtual Reality for Non-professionals</title>
<link>https://hdl.handle.net/1721.1/164197</link>
<description>Low-Fidelity vs. High-Fidelity Spatial Design in Virtual Reality for Non-professionals
Wei, Lan; Dai, Chenyue; Peng, Xuening; Tong, Xin; Liu, Can
In spatial design, non-professionals lack effective hands-on opportunities to participate in the design process. Although VR platforms can support spatial design with immersive interaction, existing tools simply provide high-fidelity 3D objects for users to choose and place around. A low-fidelity design approach is rarely supported, nor investigated in this context. In this work, we present a user study comparing low-fidelity and high-fidelity spatial design in VR. Eighteen participants were recruited to use both versions of a prototype with varied geometric fidelity to complete home designs. Their design outcomes and intent were evaluated by professional designers. Our findings show that the low-fidelity version allowed participants to think more openly and creatively, leading to a more holistic expression of their design intent and needs, while the high-fidelity version promoted users’ thinking of realistic scenarios. We discuss the design implications and how the two approaches can be combined in co-design activities.
CHCHI 2024, Shenzhen, China
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164197</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Trans Data: A Research and Design Agenda from Trans Activists' Transformative Data Science</title>
<link>https://hdl.handle.net/1721.1/164196</link>
<description>Trans Data: A Research and Design Agenda from Trans Activists' Transformative Data Science
Stevens, Nikko; D'Ignazio, Catherine; Doğan, Amelia
Trans activists play a deeply important role in caring for and advocating for the transgender community using data. Through an interview study with 16 trans activists working in trans-led and trans-serving organizations in the United States, we document how they use restorative/transformative data science processes of resolving, researching, recording, refusing, and using data. We incorporate their data practices with trans technology and trans competent interaction design approaches to propose a research agenda for trans data: materially improve trans lives, cross data boundaries, and constantly engage in power analysis. We expound on how a trans data research agenda can benefit data advocacy and CSCW research and design.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164196</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-economic analysis and life cycle assessment for catalytic fast pyrolysis of mixed plastic waste</title>
<link>https://hdl.handle.net/1721.1/164195</link>
<description>Techno-economic analysis and life cycle assessment for catalytic fast pyrolysis of mixed plastic waste
Yadav, Geetanjali; Singh, Avantika; Dutta, Abhijit; Uekert, Taylor; DesVeaux, Jason S; Nicholson, Scott R; Tan, Eric CD; Mukarakate, Calvin; Schaidle, Joshua A; Wrasman, Cody J; Carpenter, Alberta C; Baldwin, Robert M; Román-Leshkov, Yuriy; Beckham, Gregg T
Pyrolysis of waste plastics has gained interest as a candidate chemical recycling technology. To examine the potential of this approach, we conducted a techno-economic analysis (TEA) and life cycle assessment (LCA) of a conceptual catalytic fast pyrolysis (CFP) facility that converts 240 metric tons/day of mixed plastic waste. The modeled base case predicts the minimum selling price (MSP) of a benzene, toluene, and xylenes (BTX) mixture at $1.07 per kg when co-products are sold at their average market prices. We predict that the aromatic product stream can be cost-competitive with virgin BTX mixtures ($0.68/kg) if the mixed waste plastics are available for less than $0.10/kg or if crude oil prices exceed $60/barrel. Moreover, we estimate that CFP-based conversion of waste plastics can reduce the total supply chain energy use by 24% but with a 2.4-fold increase in greenhouse gas (GHG) emissions per kilogram of BTX, relative to the incumbent manufacturing process. Sensitivity analysis highlights that feedstock cost, co-product selling prices, capital cost for product separations, and operating costs are key cost drivers. Further, we examine three additional CFP processes that differ in product composition, namely naphtha, and a case where the products are rich in either C2–C4 olefins or BTX aromatic hydrocarbons. Whereas the MSP of naphtha ($2.18/kg) is ∼4-fold higher than virgin naphtha, both the olefin-rich and aromatics-rich product cases exhibit a potential reduction in MSP up to 40%, with a 21%–45% reduction in total supply chain energy and 2.2–3.8-fold increase in GHG emissions relative to incumbent manufacturing processes. LCA predicts that the CFP process exhibits lower fossil fuel depletion than virgin manufacturing across all cases as well as lower acidification, ozone depletion, and smog formation for select cases, but high utility and feedstock preparation requirements result in poorer performance across other metrics. Overall, this study highlights important process parameters for improving CFP of mixed waste plastics from economic and environmental perspectives.
</description>
<pubDate>Mon, 05 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164195</guid>
<dc:date>2023-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability</title>
<link>https://hdl.handle.net/1721.1/164194</link>
<description>Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability
Song, Jaeyoon; Ashktorab, Zahra; Malone, Thomas
Scheduling is a perennial, and often challenging, problem for many groups. Existing tools are mostly static, showing an identical set of choices to everyone, regardless of the current status of attendees' inputs and preferences. In this paper, we propose Togedule, an adaptive scheduling tool that uses large language models to dynamically adjust the pool of choices and their presentation format. With the initial prototype, we conducted a formative study (N=10) and identified the potential benefits and risks of such an adaptive scheduling tool. Then, after enhancing the system, we conducted two controlled experiments, one each for attendees and organizers (total N=66). For each experiment, we compared scheduling with verbal messages, shared calendars, or Togedule. Results show that Togedule significantly reduces the cognitive load of attendees indicating their availability and improves the speed and quality of the decisions made by organizers.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164194</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Core‐passivation: A concept for stable core‐shell nanoparticles in aqueous electrocatalysis</title>
<link>https://hdl.handle.net/1721.1/164193</link>
<description>Core‐passivation: A concept for stable core‐shell nanoparticles in aqueous electrocatalysis
Göhl, Daniel; Paciok, Paul; Wang, Zhenshu; Kang, Jin Soo; Heggen, Marc; Mayrhofer, Karl JJ; Román‐Leshkov, Yuriy; Ledendecker, Marc
The stability of nanoparticles is a major challenge in thermal and electrocatalysis. This is especially true for core‐shell nanoparticles where only a few monolayers of noble metal protect the usually non‐noble core material. In this work, we utilize the practical nobility concept to engineer stable core‐shell nanoparticles with a self‐passivating core material. Specifically, tantalum carbide as core material in combination with a 1–3 monolayer thick platinum shell exhibits exceptional stability in aqueous media. The core‐shell catalyst shows no sign of structural changes after 10,000 degradation cycles up to 1.0 V_RHE. Due to the efficient passivation of tantalum carbide at the solid/liquid interface, dissolution is reduced by a factor of eight compared to bare Pt. Our findings confirm that passivating core materials are highly beneficial for the stabilization of core‐shell nanomaterials in aqueous media. They open up new ways for the rational design of cost‐efficient but stable non‐noble core–platinum shell nanoparticles where harsh, oxidizing conditions are employed.
</description>
<pubDate>Thu, 19 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164193</guid>
<dc:date>2023-01-19T00:00:00Z</dc:date>
</item>
<item>
<title>Interdependence of Solvent and Catalyst Selection on Low Pressure Hydrogen-Free Reductive Catalytic Fractionation</title>
<link>https://hdl.handle.net/1721.1/164192</link>
<description>Interdependence of Solvent and Catalyst Selection on Low Pressure Hydrogen-Free Reductive Catalytic Fractionation
Facas, Gregory G; Brandner, David G; Bussard, Jeremy R; Román-Leshkov, Yuriy; Beckham, Gregg T
Hydrogen-free reductive catalytic fractionation (RCF) is a promising method to produce aromatic compounds directly from native biomass without the use of external hydrogen gas. In this work, we show that by using high boiling point diols as a solvent in hydrogen-free RCF, reaction pressures can be reduced by an order of magnitude compared to conventional RCF with methanol and hydrogen gas, while still producing appreciable aromatic monomer yields. Importantly, the use of diols with secondary alcohol functional groups increases hydrogenation activity on Ru/C, Pt/C, and Ni/C, measured by the yield of aromatic compounds with saturated propyl side chains, compared to processing in ethylene glycol, indicating that the choice of solvent and catalyst together can be tuned to control product selectivity of aromatic monomers in RCF.
</description>
<pubDate>Mon, 13 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164192</guid>
<dc:date>2023-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Propylene Metathesis over Molybdenum Silicate Microspheres with Dispersed Active Sites</title>
<link>https://hdl.handle.net/1721.1/164191</link>
<description>Propylene Metathesis over Molybdenum Silicate Microspheres with Dispersed Active Sites
Skoda, David; Zhu, Ran; Hanulikova, Barbora; Styskalik, Ales; Vykoukal, Vit; Machac, Petr; Simonikova, Lucie; Kuritka, Ivo; Poleunis, Claude; Debecker, Damien P; Román-Leshkov, Yuriy
In this work, we demonstrate that amorphous and porous molybdenum silicate microspheres are highly active catalysts for heterogeneous propylene metathesis. Homogeneous molybdenum silicate microspheres and aluminum-doped molybdenum silicate microspheres were synthesized via a nonaqueous condensation of a hybrid molybdenum biphenyldicarboxylate-based precursor solution with (3-aminopropyl)triethoxysilane. The as-prepared hybrid metallosilicate products were calcined at 500 °C to obtain amorphous and porous molybdenum silicate and aluminum-doped molybdenum silicate microspheres with highly dispersed molybdate species inserted into the silicate matrix. These catalysts contain mainly highly dispersed MoOx species, which possess high catalytic activity in heterogeneous propylene metathesis to ethylene and butene. Compared to conventional silica-supported MoOx catalysts prepared via incipient wetness impregnation (MoIWI), the microspheres with low Mo content (1.5–3.6 wt %) exhibited nearly 2 orders of magnitude higher steady-state propylene metathesis rates at 200 °C, approaching site time yields of 0.11 s⁻¹.
</description>
<pubDate>Wed, 20 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164191</guid>
<dc:date>2023-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>Accessing monomers from lignin through carbon–carbon bond cleavage</title>
<link>https://hdl.handle.net/1721.1/164190</link>
<description>Accessing monomers from lignin through carbon–carbon bond cleavage
Palumbo, Chad T; Ouellette, Erik T; Zhu, Jie; Román-Leshkov, Yuriy; Stahl, Shannon S; Beckham, Gregg T
Lignin, the heterogeneous aromatic macromolecule found in the cell walls of vascular plants, is an abundant feedstock for the production of biochemicals and biofuels. Many valorization schemes rely on lignin depolymerization, with decades of research focused on accessing monomers through C–O bond cleavage, given the abundance of β–O–4 bonds in lignin and the large number of available C–O bond cleavage strategies. Monomer yields are, however, invariably lower than desired, owing to the presence of recalcitrant C–C bonds whose selective cleavage remains a major challenge in catalysis. In this Review, we highlight lignin C–C cleavage reactions, including those of linkages arising from biosynthesis (β–1, β–5, β–β and 5–5) and industrial processing (5–CH2–5 and α–5). We examine multiple approaches to C–C cleavage, including homogeneous and heterogeneous catalysis, photocatalysis and biocatalysis, to identify promising strategies for further research and provide guidelines for definitive measurements of lignin C–C bond cleavage.
</description>
<pubDate>Fri, 04 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164190</guid>
<dc:date>2024-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>Direct propylene epoxidation via water activation over Pd-Pt electrocatalysts</title>
<link>https://hdl.handle.net/1721.1/164189</link>
<description>Direct propylene epoxidation via water activation over Pd-Pt electrocatalysts
Chung, Minju; Maalouf, Joseph H; Adams, Jason S; Jiang, Chenyu; Román-Leshkov, Yuriy; Manthiram, Karthish
Direct electrochemical propylene epoxidation by means of water-oxidation intermediates presents a sustainable alternative to existing routes that involve hazardous chlorine or peroxide reagents. We report an oxidized palladium-platinum alloy catalyst (PdPtOx/C), which reaches a Faradaic efficiency of 66 ± 5% toward propylene epoxidation at 50 milliamperes per square centimeter at ambient temperature and pressure. Embedding platinum into the palladium oxide crystal structure stabilized oxidized platinum species, resulting in improved catalyst performance. The reaction kinetics suggest that epoxidation on PdPtOx/C proceeds through electrophilic attack by metal-bound peroxo intermediates. This work demonstrates an effective strategy for selective electrochemical oxygen-atom transfer from water, without mediators, for diverse oxygenation reactions.
</description>
<pubDate>Thu, 04 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164189</guid>
<dc:date>2024-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams</title>
<link>https://hdl.handle.net/1721.1/164188</link>
<description>Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams
Song, Jaeyoon; Ashktorab, Zahra; Pan, Qian; Dugan, Casey; Geyer, Werner; Malone, Thomas
Understanding the dynamics of human-AI interaction in question answering is crucial for enhancing collaborative efficiency. Extending from our initial formative study, which revealed challenges in human utilization of conversational AI support, we designed two configurations for prompt guidance: a Nudging approach, where the AI suggests potential responses for human agents, and a Highlight strategy, emphasizing crucial parts of reference documents to aid human responses. Through two controlled experiments, the first involving 31 participants and the second involving 106 participants, we compared these configurations against traditional human-only approaches, both with and without AI assistance. Our findings suggest that effective human-AI collaboration can enhance response quality, though merely combining human and AI efforts does not ensure improved outcomes. In particular, the Nudging configuration was shown to help improve the quality of the output when compared to AI alone. This paper delves into the development of these prompt guidance paradigms, offering insights for refining human-AI collaborations in conversational question-answering contexts and contributing to a broader understanding of human perceptions and expectations in AI partnerships.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164188</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Pushing on an Open Door: Japan’s Evolutionary Security Posture</title>
<link>https://hdl.handle.net/1721.1/164187</link>
<description>Pushing on an Open Door: Japan’s Evolutionary Security Posture
Heginbotham, Eric; Leiter, Samuel; Samuels, Richard J
At the 2022 Shangri-La Dialogue, Japan’s Prime Minister Fumio Kishida warned defense ministers from across the Indo-Pacific region that “Ukraine today may be East Asia tomorrow.” Russia’s war of aggression and China’s tacit support for the invasion have amplified the urgency of the threat posed by China’s economic and military rise and have informed material changes to Japanese defense policy.
</description>
<pubDate>Thu, 13 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164187</guid>
<dc:date>2023-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>Dissecting User Experience of Social Virtual Reality: A Tale of Five Platforms</title>
<link>https://hdl.handle.net/1721.1/164186</link>
<description>Dissecting User Experience of Social Virtual Reality: A Tale of Five Platforms
Cheng, Ruizhi; Li, Jie; Chen, Songqing; Han, Bo
Social virtual reality (VR) has the potential to replace conventional online social media by offering quasi-real-world social experiences. As such, it has been extensively examined by the research community. However, existing studies fall short of providing a comprehensive understanding of how different aspects of social VR platforms interact to affect user experience. Motivated by this limitation, we conduct a user study with Oculus Quest 2 headsets and dissect the user experience on five social VR platforms. We evenly and randomly divide 42 participants into short-term (spending 10–30 minutes/platform) and long-term (spending at least 120 minutes/platform) groups. Besides employing surveys and interviews, we measure the frame rate and resolution of these platforms and explore how various factors interplay to influence the user experience of social VR. Our findings reveal that the frame rate, resolution, and interactive events of social VR platforms have a more significant impact on the experience of long-term users compared to short-term users. The scalability limitations of these platforms, as evidenced by decreased frame rates with the increasing number of concurrent users, result in an increased prevalence of motion sickness among long-term users, negatively impacting their overall experience. Moreover, the absence of highly interactive events also deteriorates their overall experience, and the low resolution combined with the lack of interactive events further decreases their sense of social presence. Additionally, our study demonstrates several common limitations negatively affecting the experience of both long-term and short-term users. For example, the harassment prevention mechanisms on all five platforms are inadequate, and being harassed has a detrimental effect on users’ overall experience and sense of social presence. The avatar embodiment of investigated platforms has limited contribution to users’ sense of social presence, mainly due to the lack of realism and full-body tracking. Our findings call for more research in scalability support, motion sickness relief, interactive event design, harassment prevention, and avatar development for improving social VR platforms in the future.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164186</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Investigation of critical heat flux enhancement on nanoengineered surfaces in pressurized subcooled flow boiling using infrared thermometry</title>
<link>https://hdl.handle.net/1721.1/164185</link>
<description>Investigation of critical heat flux enhancement on nanoengineered surfaces in pressurized subcooled flow boiling using infrared thermometry
Wang, Chi; Su, Guanyu; Akinsulire, Olorunsola; Zhang, Limiao; Rahman, Md Mahamudur; Bucci, Matteo
Enhancing the flow boiling critical heat flux (CHF) is beneficial to the economics and safety margins of many industrial applications cooled by boiling heat transfer. While many studies have shown that surfaces with hydrophilic nanoscale and micro-scale features can enhance CHF in pool boiling, it is still not clear how these engineered surfaces affect the CHF in subcooled flow boiling at ambient pressure, let alone high-pressure conditions. Here, two nano-engineered surfaces, i.e., a surface coated with a porous layer of hydrophilic silica nanoparticles and a surface coated with zinc oxide nanowires, were tested. Flow boiling tests with a 10 K subcooling and a mass flux of 1000 kg/(m²·s) were conducted at 1 bar and 4 bars using infrared thermometry diagnostics. At 1 bar, the CHF enhancement is around 15% for both coatings. At 4 bars, the CHF enhancement is around 17% for the nanowire surface, and around 25% for the nano-porous surface. Infrared thermometry measurements reveal that the CHF enhancement comes from an increase of both two-phase heat transfer and single-phase heat transfer mechanisms, which is due to a change of bubble dynamics on the nanoengineered surfaces. It is also shown that the boiling crisis can be predicted using a percolation model based on Monte Carlo (MC) simulations.
</description>
<pubDate>Tue, 28 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164185</guid>
<dc:date>2023-03-28T00:00:00Z</dc:date>
</item>
<item>
<title>Connecting Digitalization and Sustainability: Proptech in the Real Estate Operations and Management</title>
<link>https://hdl.handle.net/1721.1/164184</link>
<description>Connecting Digitalization and Sustainability: Proptech in the Real Estate Operations and Management
Tan, Zhengzhen; Miller, Norm G.
Digitalization of building operations and maintenance enables real-time monitoring, optimization, and automation for environmental sustainability. Proptech startups are important change agents in accelerating building digitalization. While many researchers analyze economic and environmental savings from deployment of digital technology, far less attention has been devoted to the challenges proptech startups face in transforming efficiency gains into viable businesses. We analyze the Unissu global proptech startup database to reveal the scope and competitive landscape of proptech solutions. We conduct interviews with building owners/operators to understand what impedes the adoption of proptech solutions. Despite rapid growth, sustainability-focused proptech firms face three ongoing adoption barriers: (1) integration of the technology stacks; (2) integration of technology stacks with business processes; and (3) integration of owner/operator and occupant solutions. Proptech firms whose applications work with existing infrastructure, or that provide more complete holistic solutions backed by extensive capital reserves, are more likely to survive. Other pathways include having data standardization and security protocols in place; technology partnerships with technology incumbents; and effective communication with owners/operators to fill the knowledge gap. These findings can provide insights to emerging digital proptech startups as they spearhead market adoption in the real estate sector and monetize the sustainability value they create.
</description>
<pubDate>Thu, 27 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164184</guid>
<dc:date>2023-04-27T00:00:00Z</dc:date>
</item>
<item>
<title>Driver response and recovery following automation initiated disengagement in real-world hands-free driving</title>
<link>https://hdl.handle.net/1721.1/164183</link>
<description>Driver response and recovery following automation initiated disengagement in real-world hands-free driving
Gershon, Pnina; Mehler, Bruce; Reimer, Bryan
Objective
Advanced driver assistance systems are increasingly available in consumer vehicles, making the study of drivers’ behavioral adaptation and the impact of automation beneficial for driving safety. Concerns over drivers being out of the loop, coupled with known limitations of automation, have led research to focus on time-critical, system-initiated disengagements. This study used real-world data to assess drivers’ response to, and recovery from, automation-initiated disengagements by quantifying changes in visual attention, vehicle control, and time to steady-state behaviors.

Methods
Fourteen drivers each drove, for one month, a Cadillac CT6 equipped with Super Cruise (SC), a partial automation system that, when engaged, enables hands-free driving. The vehicles were instrumented with data acquisition systems recording driving kinematics, automation use, GPS, and video. The dataset included 265 SC-initiated disengagements identified across 5,514 miles driven with SC.

Results
Linear quantile mixed-effects models of glance behavior indicated that following SC-initiated disengagement, the proportions of glances to the Road decreased (Q50Before=0.91, Q50After=0.69; Q85Before=1.0, Q85After=0.79), the proportions of glances to the Instrument Cluster increased (Q50Before=0.14, Q50After=0.25; Q85Before=0.34, Q85After=0.45), and mean glance duration to the Road decreased by 4.86 sec in Q85. Multinomial logistic regression mixed models of glance distributions indicated that the number of transitions between glance locations following disengagement increased by 43% and that glances were distributed across fewer locations. When driving hands-free, takeover time was significantly longer (2.4 sec) compared to when driving with at least one hand on the steering wheel (1.8 sec). Analysis of moment-to-moment distributional properties of visual attention and steering wheel control following disengagement indicated that on average it took drivers 6.1 sec to start the recovery of glance behavior to the Road and 1.5 sec for trend-stationary proportions of at least one hand on the steering wheel.

Conclusions
Automation-initiated disengagements triggered substantial changes in driver glance behavior including shorter on-road glances and frequent transitions between Road and Instrument Cluster glance locations. This information seeking behavior may capture drivers’ search for information related to the disengagement or the automation state and is likely shaped by the automation design. The study findings can inform the design of more effective driver-centric information displays for smoother transitions and faster recovery.
</description>
<pubDate>Wed, 29 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164183</guid>
<dc:date>2023-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Lessons in Sanctions-Proofing from Russia</title>
<link>https://hdl.handle.net/1721.1/164182</link>
<description>Lessons in Sanctions-Proofing from Russia
Glenn, Caileigh
Government actors and other observers across Europe and the United States called the multilateral sanctions imposed on Russia in early 2022 “unprecedented.” Even Russian President Vladimir Putin acknowledged their severity when he stressed “the need to counter economic restrictions that were imposed on us, which are truly unprecedented without any exaggeration.” Part of the response to the Russian invasion of Ukraine, these financial and trade sanctions—imposed on Russia by Western governments—target key firms in the financial and energy sectors, debt financing, technology, Russia’s foreign currency reserves, and more recently, most Russian oil and transportation insurers.
</description>
<pubDate>Tue, 04 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164182</guid>
<dc:date>2023-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Why Haven’t We Applied the Lessons from Lean to Innovation?</title>
<link>https://hdl.handle.net/1721.1/164181</link>
<description>Why Haven’t We Applied the Lessons from Lean to Innovation?
Wright, Randall S.
Yes, I know. People have been doing Lean innovation—increasing efficiency by capturing customer feedback early and often and minimizing waste in the product development cycle—for the last 10 years. I’m not talking about applying Lean principles to innovation. I’m talking about how American business leaders had the humility to admit their firms needed to learn Lean from Japanese culture to master globally competitive operations, and why they now need to learn innovation from the culture of universities to master globally competitive innovation.
</description>
<pubDate>Thu, 20 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164181</guid>
<dc:date>2023-04-20T00:00:00Z</dc:date>
</item>
<item>
<title>Becoming Infrastructure: A Critical Realist Account of the Evolution of DHIS2 as Digital Public Health Infrastructure in Sierra Leone</title>
<link>https://hdl.handle.net/1721.1/164180</link>
<description>Becoming Infrastructure: A Critical Realist Account of the Evolution of DHIS2 as Digital Public Health Infrastructure in Sierra Leone
Ndubuisi-Obi, Innocent; Chen, Nuole; Tsai, Lily
Today, the District Health Information System 2 (DHIS2) has become the de facto standard for open-source health management information systems, and Sierra Leone's status as the first country in sub-Saharan Africa to implement DHIS2 makes it a productive place for researchers interested in understanding the end-to-end process of infrastructuring in a low-resource bureaucratic setting. In this article, we examine its design, implementation, and maintenance in Sierra Leone over a period of 14 years, from 2008 to 2022. We present an intensive case study discretized by three morphogenetic cycles (decentralization, centralization, and fragmentation) and furnished with explanatory accounts of DHIS2's evolution, using a critical realist research methodology to describe the emergence of DHIS2 as digital public health infrastructure. These accounts highlight the structural and cultural systems of DHIS2, their elaborations, and their interaction with agents over successive periods of DHIS2's evolution. Our study finds that, despite its continued use in Sierra Leone, the increasing generativity in the structural and cultural systems of DHIS2 and Sierra Leone’s public health system engenders a persistent instability that requires continuous resolution. Though we find that extant literature aids in our understanding of DHIS2’s evolution, we proffer two mechanisms, infrastructural capture and socio-technical debt, which aid our explanation of events observed in our case study. Our work makes a case for more ontologically diverse theorizing of bureaucracy-aware computing systems.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164180</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Factorization in additive monoids of evaluation polynomial semirings</title>
<link>https://hdl.handle.net/1721.1/164179</link>
<description>Factorization in additive monoids of evaluation polynomial semirings
Ajran, Khalid; Bringas, Juliet; Li, Bangzheng; Singer, Easton; Tirador, Marcos
For a positive real α, we can consider the additive submonoid M of the real line that is generated by the nonnegative powers of α. When α is transcendental, M is a unique factorization monoid. However, when α is algebraic, M may not be atomic, and even when M is atomic, it may contain elements having more than one factorization (i.e., decomposition as a sum of irreducibles). The main purpose of this paper is to study the phenomenon of multiple factorizations inside M. When α is algebraic but not rational, the arithmetic of factorizations in M is highly interesting and complex. To arrive at that conclusion, we investigate various factorization invariants of M, including the sets of lengths, sets of Betti elements, and catenary degrees. Our investigation gives continuity to recent studies carried out by Chapman et al. in 2020 and by Correa-Morris and Gotti in 2022.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164179</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Argentella scandal: why French officials did not make Corsica a nuclear test site in 1960</title>
<link>https://hdl.handle.net/1721.1/164178</link>
<description>The Argentella scandal: why French officials did not make Corsica a nuclear test site in 1960
Cooper, Austin R.
Top French officials made plans in early 1960 to transform an abandoned silver mine in Corsica, called the Argentella Massif, into an underground site for nuclear explosions. By June 1960, they had canceled these plans. This article shows how a mass movement on the Mediterranean island forced their hand, and it explains why Corsicans of diverse political affiliations took to the streets. The Argentella project—and the health, environmental, and strategic risks that it entailed—looked in Corsica like evidence that Paris saw the islanders as second-class citizens, even residents of an internal colony. French police intelligence, which maintained surveillance on the Corsican anti-nuclear movement, feared that this movement might have drawn inspiration from the contemporaneous struggle for national liberation in Algeria, where French nuclear explosions began. The Argentella protests illustrated national disagreements about French nuclear ambitions that previous scholarship, proposing official consensus, has minimized. They show how, in a nuclear-armed democracy, local officials, political activists, and ordinary citizens can shape nuclear-weapons policy. But Corsican anti-nuclear action in 1960 did not demand disarmament. These protests also illuminate a longer trajectory in French nuclear history, which involved atmospheric explosions in colonized territories in Algeria and Polynesia until the 1970s, despite local and international resistance.
</description>
<pubDate>Mon, 17 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164178</guid>
<dc:date>2023-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Foundation Model for Spatiotemporal Data Analysis</title>
<link>https://hdl.handle.net/1721.1/164177</link>
<description>Towards Foundation Model for Spatiotemporal Data Analysis
Wu, Yuankai; Chen, Xinyu; Zhuang, Dingyi
Spatiotemporal data modeling has long been a fundamental task across disciplines such as climate &amp; environmental science and transportation engineering. A typical goal is to estimate unknown information at specific spatiotemporal points based on partially observed data—for example, interpolating weather conditions at unmeasured locations, reconstructing missing historical records, or forecasting the future trajectories of financial markets. These are all core tasks within the broader scope of spatiotemporal modeling. This tutorial (1 hour) introduces a cohesive view of spatiotemporal data modeling, tracing the evolution from traditional statistical approaches to modern deep learning paradigms. We begin by revisiting Kriging and time series decomposition to highlight the essential assumptions and strengths of these classical methods. Next, we explore low-rank matrix and tensor completion techniques, which leverage the structured patterns of spatiotemporal data. We then elaborate on spatiotemporal graph neural networks, which characterize complex dependencies by integrating graph structures with dynamic temporal features. Finally, we discuss recent advances in applying large foundation models to spatiotemporal tasks, including their capabilities and current limitations. Throughout the tutorial, we emphasize how lessons from traditional methods—such as the importance of locality, periodicity, and smoothness priors—can inspire new directions for developing and fine-tuning foundation models in the spatiotemporal domain. We conclude by outlining key challenges and opportunities in bridging classical wisdom with emerging AI capabilities.
SSTD ’25, Osaka, Japan
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164177</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Gender gaps in South Korea’s labour market: children explain most of the gender employment gap, but little of the gender wage gap</title>
<link>https://hdl.handle.net/1721.1/164176</link>
<description>Gender gaps in South Korea’s labour market: children explain most of the gender employment gap, but little of the gender wage gap
Stansbury, Anna; Kirkegaard, Jacob Funk; Dynan, Karen
South Korea’s gender wage and employment gaps are among the largest in the OECD. Using labour force survey data over 2010–19, we estimate gender wage and employment gaps, and child earnings penalties, for women aged 25–54. We show (i) that the large gender gaps in South Korea’s labour market are mostly not a function of differential sorting by gender along education, occupation, or industry lines, (ii) that caring for children (and, perhaps increasingly, for the elderly) is the major factor inhibiting women’s labour force participation, and (iii) that large gender wage gaps exist even for women without care responsibilities. These findings suggest that improving opportunities for work–family balance is crucial to helping increase women’s labour force participation, but may do little to close gender wage gaps: other major obstacles also appear to stand in the way of Korean women’s full inclusion in the labour force.
</description>
<pubDate>Wed, 03 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164176</guid>
<dc:date>2023-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>Madman or Mad Genius? The International Benefits and Domestic Costs of the Madman Strategy</title>
<link>https://hdl.handle.net/1721.1/164175</link>
<description>Madman or Mad Genius? The International Benefits and Domestic Costs of the Madman Strategy
Schwartz, Joshua A.
According to the “Madman Theory” outlined by Daniel Ellsberg and Thomas C. Schelling, and embraced by Presidents Richard Nixon and Donald Trump, being perceived as mad can help make seemingly incredible threats—such as starting a nuclear war—more credible. However, recent research has largely concluded that the Madman Theory does not work. In this study, I theorize that the international benefits of the Madman Theory have been underestimated, but also that there are significant domestic barriers associated with adopting such a strategy that undermine its effectiveness. Through a series of five novel survey experiments, I find evidence that perceived madness provides limited advantages in coercive bargaining vis-à-vis foreign adversaries, but it also entails significant domestic costs that potentially erode its efficacy. Overall, this study provides clearer support for the Madman Theory than most previous literature has found, but also breaks new theoretical ground by analyzing the domestic politics of perceived madness.
</description>
<pubDate>Thu, 04 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164175</guid>
<dc:date>2023-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>Purrfect Pitch: Exploring Pitch Interval Learning through an Audio-Haptic Interface</title>
<link>https://hdl.handle.net/1721.1/164174</link>
<description>Purrfect Pitch: Exploring Pitch Interval Learning through an Audio-Haptic Interface
Chin, Sam; Fang, Cathy Mengying; Singh, Nikhil; Ibrahim, Ibrahim; Paradiso, Joe; Maes, Pattie
We introduce Purrfect Pitch, a system consisting of a wearable haptic device and a custom-designed learning interface for musical ear training. We focus on the ability to identify pitch intervals (sequences of two musical notes), a perceptually ambiguous task that usually requires rote training. With our system, users hear two tones while simultaneously receiving two corresponding vibrotactile stimuli on the back. Providing haptic feedback on the back makes the auditory distance between tones salient, and the back-worn design is comfortable and unobtrusive. During training, users receive multi-sensory feedback from our system and input their guessed interval value on our web-based learning interface. Our study with 18 participants shows that our system enables novice learners to identify intervals more accurately and consistently than those who only received audio feedback, even after removing the haptic feedback. We also share further insights on designing a multisensory learning system.
AHs 2025, Masdar City, Abu Dhabi, United Arab Emirates
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164174</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Teaching AI to Feel: A Collaborative, Full-Body Exploration of Emotive Communication</title>
<link>https://hdl.handle.net/1721.1/164173</link>
<description>Teaching AI to Feel: A Collaborative, Full-Body Exploration of Emotive Communication
Lemus, Lissette; Pilcher, Kris; Sprengel, Holger; Sabater-Mir, Jordi; Tütüncü, Esen K.
Commonaiverse is an interactive installation exploring human emotions through full-body motion tracking and real-time AI feedback. Participants engage in three phases: Teaching, Exploration and the Cosmos Phase, collaboratively expressing and interpreting emotions with the system. The installation integrates MoveNet for precise motion tracking and a multi-recommender AI system to analyze emotional states dynamically, responding with adaptive audiovisual outputs. By shifting from top-down emotion classification to participant-driven, culturally diverse definitions, we highlight new pathways for inclusive, ethical affective computing. We discuss how this collaborative, out-of-the-box approach pushes multimedia research beyond single-user facial analysis toward a more embodied, co-created paradigm of emotional AI. Furthermore, we reflect on how this reimagined framework fosters user agency, reduces bias, and opens avenues for advanced interactive applications.
MM ’25, October 27–31, 2025, Dublin, Ireland
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164173</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance</title>
<link>https://hdl.handle.net/1721.1/164172</link>
<description>Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance
Scheirer, Jocelyn; Picard, Rosalind; Cantrell, Aubrey
Biofeedback interfaces traditionally rely on abstract visualizations, tones, or haptics to convey physiological states—but these often lack personal relevance, emotional salience, and engagement. In this paper, we present a novel system that bridges wearable sensing and generative AI to create real-time, personalized animated biofeedback experiences. Users describe emotionally meaningful objects or scenes to a language model in our system, which generates customized Processing animations. These animations are then dynamically driven by electrodermal activity (EDA) signals from a wrist sensor. We co-design and evaluate the system with autistic adults, many of whom have unique “special interests” that are likely to engage them more than a one-size-fits-all visualization. Many of these individuals also have difficulty with interoception: feeling or sensing their own internal and physiological state changes. We built this tool to transform passive physiological monitoring into an interactive multimedia experience, where the visual representation of the body is authored by the user. We introduce a prompt-engineered GPT-based interface that streamlines code generation, sensor mapping, and iterative refinement, requiring no prior coding expertise. The technical pipeline we built includes signal filtering, dynamic parameter mapping, and natural language-based customization—delivering a real-time, visually immersive feedback loop. We report on initial case studies with 12 autistic adults using the system, which highlight both the expressive potential and individual variability of user responses, reinforcing the need for adaptable multimedia frameworks in health technologies. By merging real-time physiological data with generative animation and natural language interaction, this work expands the creative frontier of personalized affective biofeedback. We also address ethical challenges arising from using AI with physiological sensors.
MRAC '25, October 27–28, 2025, Dublin, Ireland
</description>
<pubDate>Sun, 26 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164172</guid>
<dc:date>2025-10-26T00:00:00Z</dc:date>
</item>
<item>
<title>Hands-on Strategies for Teaching Social and Societal Impacts of Computing</title>
<link>https://hdl.handle.net/1721.1/164171</link>
<description>Hands-on Strategies for Teaching Social and Societal Impacts of Computing
Kurkovsky, Stan; Nnamani, Manee Ngozi; Hunter, Aaron; Sobomehin, Olatunde; Braught, Grant; Goldweber, Michael
The topic of hands-on strategies for teaching the social and societal impacts of computing is of growing interest to the computer science education community because it addresses a critical gap in traditional CS curricula [7]. While technical skills remain central, educators increasingly recognize the need to prepare students for the ethical, social, and human-centered challenges posed by modern computing technologies. From AI-driven decision-making to digital accessibility and data privacy, computing profoundly affects individuals and communities, making it essential for students to engage with these issues through experiential learning [12]. Different viewpoints on this topic emerge based on pedagogical approaches, disciplinary perspectives, and technological optimism or skepticism. Some educators advocate for integrating service-learning and community-based projects, arguing that real-world engagement fosters empathy and ethical awareness. Others emphasize case studies and simulations, providing structured exposure to societal challenges without the unpredictability of external partnerships. Additionally, viewpoints may diverge on the role of AI: while some see AI tools as an opportunity to enhance social good, others worry they may exacerbate biases and reduce human agency in computing. Despite these differences, there is broad agreement that computing education must go beyond technical training to include a deeper understanding of computing’s role in society.
CompEd 2025, October 21–25, 2025, Gaborone, Botswana
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164171</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Driver and Pedestrian Gesture Use in the Boston Area. Automated Vehicles May Need More Than Kinematics in Ambiguous Situations</title>
<link>https://hdl.handle.net/1721.1/164118</link>
<description>Analysis of Driver and Pedestrian Gesture Use in the Boston Area. Automated Vehicles May Need More Than Kinematics in Ambiguous Situations
Weibert, Alexander; Manstetten, Dietrich; Reimer, Bryan; Gershon, Pnina; Mehler, Bruce; Abdenebaoui, Larbi; Şahin İppoliti, Hatice
Roadways, despite their formal regulations, are dynamic spaces where humans interact beyond formal rules to resolve conflicts. In ambiguous situations, the right of way is often unclear. Self-driving vehicles in urban traffic introduce challenges to their coexistence with humans, indicating a need for greater social awareness in these vehicles. To investigate social interactions among roadway users, we analyzed a naturalistic driving dataset focusing on instances where drivers yielded to pedestrians, by noting gestures. Video analysis showed that gestures were more common in ambiguous situations than in regulated scenarios. Drivers used gestures to navigate the right of way efficiently, while pedestrians used them to express gratitude. These findings highlight the importance of understanding social expressions in designing socially aware self-driving vehicles.
AutomotiveUI Adjunct ’25, Brisbane, QLD, Australia
</description>
<pubDate>Wed, 08 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164118</guid>
<dc:date>2025-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device</title>
<link>https://hdl.handle.net/1721.1/164117</link>
<description>Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device
Zhang, Niansong; Zhu, Wenbo; Golden, Courtney; Ilan, Dan; Chen, Hongzheng; Batten, Christopher; Zhang, Zhiru
Compute-in-SRAM architectures offer a promising approach to achieving higher performance and energy efficiency across a range of data-intensive applications. However, prior evaluations have largely relied on simulators or small prototypes, limiting the understanding of their real-world potential. In this work, we present a comprehensive performance and energy characterization of a commercial compute-in-SRAM device, the GSI APU, under realistic workloads. We compare the GSI APU against established architectures, including CPUs and GPUs, to quantify its energy efficiency and performance potential. We introduce an analytical framework for general-purpose compute-in-SRAM devices that reveals fundamental optimization principles by modeling performance trade-offs, thereby guiding program optimizations.
Exploiting the fine-grained parallelism of tightly integrated memory-compute architectures requires careful data management. We address this by proposing three optimizations: communication-aware reduction mapping, coalesced DMA, and broadcast-friendly data layouts. When applied to retrieval-augmented generation (RAG) over large corpora (10GB–200GB), these optimizations enable our compute-in-SRAM system to accelerate retrieval by 4.8×–6.6× over an optimized CPU baseline, improving end-to-end RAG latency by 1.1×–1.8×. The shared off-chip memory bandwidth is modeled using a simulated HBM, while all other components are measured on the real compute-in-SRAM device. Critically, this system matches the performance of an NVIDIA A6000 GPU for RAG while being significantly more energy-efficient (54.4×–117.9× reduction). These findings validate the viability of compute-in-SRAM for complex, real-world applications and provide guidance for advancing the technology.
MICRO ’25, Seoul, Republic of Korea
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164117</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Voice to Vision: A Sociotechnical System for Transparent Civic Decision-Making</title>
<link>https://hdl.handle.net/1721.1/164116</link>
<description>Voice to Vision: A Sociotechnical System for Transparent Civic Decision-Making
Hughes, Margaret; Overney, Cassandra; Kamra, Ashima; Tepale, Jasmin; Hamby, Elizabeth; Jasim, Mahmood; Roy, Deb
Communities frequently report sending feedback “into a void” during community engagement processes like neighborhood planning, creating a critical disconnect between public input and decision-making. Voice to Vision addresses this gap with a sociotechnical system that comprises three integrated components: a flexible data architecture linking community input to planning outputs, a sensemaking interface for planners to analyze and synthesize feedback, and a community-facing platform that makes the entire engagement process transparent. By creating a shared information space between stakeholders, our system demonstrates how structured data and specialized interfaces can foster cooperation across stakeholder groups, while addressing tensions in accessibility and trust formation. Our CSCW demonstration will showcase this system’s ability to transform opaque civic decision-making processes into collaborative exchanges, inviting feedback on its potential applications beyond urban planning.
CSCW Companion ’25, Bergen, Norway
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164116</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Augmenting Collaborative Problem-Solving: Exploring the Design and Use of GenAI for Groupwork</title>
<link>https://hdl.handle.net/1721.1/164115</link>
<description>Augmenting Collaborative Problem-Solving: Exploring the Design and Use of GenAI for Groupwork
Johnson, Janet; Rick, Steven; Grønbæk, Jens Emil; Wong, Emily; Yin, Ming; Nebeling, Michael; Klein, Mark; Ackerman, Mark; Malone, Thomas
Complex problem-solving and creative work in the real world are rarely individual endeavors and typically unfold within teams and group settings. While advancements in generative artificial intelligence (GenAI) have shown promise in augmenting creativity and productivity, these tools are primarily designed for individual use and overlook group dynamics and the collaborative aspects of teamwork. This workshop will provide a platform for researchers and practitioners to explore the design of future human-AI groups across four key themes: (1) the role of GenAI in group settings, (2) collaborative and multimodal interactions with GenAI, (3) evaluating GenAI’s influence within groups and designing for appropriate reliance, and (4) evolving group practices in the presence of GenAI. We hope to build a community and construct alignment across participants around how to pursue research that understands how GenAI can augment, undermine, or bring new practices to collaborative settings and groupwork.
CSCW Companion ’25, Bergen, Norway
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164115</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Development of systematic uncertainty-aware neural network trainings for binned-likelihood analyses at the LHC</title>
<link>https://hdl.handle.net/1721.1/164114</link>
<description>Development of systematic uncertainty-aware neural network trainings for binned-likelihood analyses at the LHC
CMS Collaboration
We propose a neural network training method capable of accounting for the effects of systematic variations of the data model in the training process and describe its extension towards neural network multiclass classification. The procedure is evaluated on the realistic case of the measurement of Higgs boson production via gluon fusion and vector boson fusion in the ττ decay channel at the CMS experiment. The neural network output functions are used to infer the signal strengths for inclusive production of Higgs bosons as well as for their production via gluon fusion and vector boson fusion. We observe improvements of 12% and 16% in the uncertainty in the signal strengths for gluon and vector-boson fusion, respectively, compared with a conventional neural network training based on cross-entropy.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164114</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Designs Related Through Projective and Hopf Maps</title>
<link>https://hdl.handle.net/1721.1/164113</link>
<description>Designs Related Through Projective and Hopf Maps
Lindblad, Ayodeji
We verify a construction which, for K the reals, complex numbers, quaternions, or octonions, builds a spherical t-design by placing a spherical t-design on each K-projective or K-Hopf fiber associated to the points of a ⌊t/2⌋-design on a quotient projective space KP^n ≠ OP^2 or sphere. This generalizes work of König and Kuperberg, who verified the K = C case of the projective settings, and of Okuda, who (inspired by independent observation of this construction by Cohn, Conway, Elkies, and Kumar) verified the K = C case of the generalized Hopf settings.
</description>
<pubDate>Fri, 28 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164113</guid>
<dc:date>2025-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>A generative deep learning approach to de novo antibiotic design</title>
<link>https://hdl.handle.net/1721.1/164112</link>
<description>A generative deep learning approach to de novo antibiotic design
Krishnan, Aarti; Anahtar, Melis N.; Valeri, Jacqueline A.; Jin, Wengong; Donghia, Nina M.; Sieben, Leif; Luttens, Andreas; Zhang, Yu; Modaresi, Seyed Majed; Hennes, Andrew; Fromer, Jenna; Bandyopadhyay, Parijat; Chen, Jonathan C.; Rehman, Danyal; Desai, Ronak; Edwards, Paige; Lach, Ryan S.; Aschtgen, Marie-Stéphanie; Gaborieau, Margaux; Gaetani, Massimiliano; Palace, Samantha G.; Omori, Satotaka; Khonde, Lutete; Moroz, Yurii S.; Blough, Bruce; Jin, Chunyang; Loh, Edmund; Grad, Yonatan H.; Saei, Amir Ata; Coley, Connor W.; Wong, Felix; Collins, James J.
The antimicrobial resistance crisis necessitates structurally distinct antibiotics. While deep learning approaches can identify antibacterial compounds from existing libraries, structural novelty remains limited. Here, we developed a generative artificial intelligence framework for designing de novo antibiotics through two approaches: a fragment-based method to comprehensively screen &gt;10^7 chemical fragments in silico against Neisseria gonorrhoeae or Staphylococcus aureus, subsequently expanding promising fragments, and an unconstrained de novo compound generation, each using genetic algorithms and variational autoencoders. Of 24 synthesized compounds, seven demonstrated selective antibacterial activity. Two lead compounds exhibited bactericidal efficacy against multidrug-resistant isolates with distinct mechanisms of action and reduced bacterial burden in vivo in mouse models of N. gonorrhoeae vaginal infection and methicillin-resistant S. aureus skin infection. We further validated structural analogs for both compound classes as antibacterial. Our approach enables the generative deep-learning-guided design of de novo antibiotics, providing a platform for mapping uncharted regions of chemical space.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164112</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Frontiers of biological material intelligence</title>
<link>https://hdl.handle.net/1721.1/164111</link>
<description>Frontiers of biological material intelligence
Marom, Lee; Buehler, Markus J.
Biological materials exhibit a form of intelligence that enables them to sense, adapt, and self-optimize in response to their environments. Unlike synthetic materials, which are often designed for singular, static functions, natural material systems integrate sensing, memory, and feedback directly into their architectures. As industries face increasing demands for resilience, sustainability, and efficiency, the development of intelligent materials has become a promising step toward the future of material innovation. Advances in artificial intelligence and machine learning, along with mathematical frameworks spanning graph theory and category theory, provide powerful tools to uncover the underlying design principles of intelligent biological materials. Simultaneously, digital fabrication methods, including additive manufacturing and biofabrication, allow the scalable realization of adaptive material systems. As the integration of deep biological insight, computational modeling, and advanced fabrication continues to evolve, it sets the stage for a profound shift in how we conceive, create, and deploy materials. Advancing this convergence will accelerate the development of intelligent systems that are capable of autonomous adaptation, long-term resilience, and embedded functionality across scales and environments.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164111</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging community engagement and human-centered design to develop multilevel implementation strategies to enhance adoption of a health equity intervention</title>
<link>https://hdl.handle.net/1721.1/164110</link>
<description>Leveraging community engagement and human-centered design to develop multilevel implementation strategies to enhance adoption of a health equity intervention
Price, Maggi A.; Mulkern, Patrick J.; Condon, Madelaine; Rakhilin, Marina; Johansen, Kara; Lyon, Aaron R.; Saldana, Lisa; Pachankis, John; Woodward, Sue A.; Roeder, Kathryn M.; Moran, Lyndsey R.; Jerskey, Beth A.
Background Health equity intervention implementation (which promotes positive health outcomes for populations experiencing disproportionately worse health) is often impeded by health-equity-specific barriers like provider bias; few studies demonstrate how to overcome these barriers through implementation strategies. An urgent health equity problem in the U.S. is the mental health of transgender youth. To address this, we developed Gender-Affirming Psychotherapy (GAP), a health equity intervention comprising best-practice mental health care for transgender youth. This paper details the identification of implementation determinants and the development of targeted strategies to promote provider adoption of GAP. Methods This study represents part of a larger study of mental health provider adoption of GAP. Here we describe the first 2 stages of the 3-stage community-engaged and human-centered design process – Discover, Design/Build, and Test – to identify implementation determinants of adoption and develop implementation strategies with transgender youth, their parents, and mental health providers. This process involved collecting data via focus groups, design meetings, usability testing, and champion meetings. Data were analyzed using rapid and conventional content analysis. Qualitative coding of implementation determinants was guided by the Health Equity Implementation Framework, and implementation strategy coding was facilitated by the ERIC Implementation Strategy Compilation. Results We identified 15 determinants of GAP adoption, and all were specific to the transgender population (e.g., inclusive record system, anti-transgender attitudes). Seventeen implementation strategies were recommended and 12 were developed, collectively addressing all identified determinants. Most strategies were packaged into an online self-paced mental health provider training (implementation intervention) with 6 training tools. 
Additional inner-setting strategies were designed to support training uptake (e.g., mandate training) and GAP adoption (e.g., change record system). Conclusions Community-engaged and human-centered design methods can identify health equity intervention implementation determinants and develop targeted strategies. We highlight five generalizable takeaways for health equity implementation scientists: (1) implementer bias may be a key barrier, (2) experience with the health equity population may be an important facilitator, (3) stakeholder stories may be an effective training tool, (4) inner-setting-level implementation strategies may be necessary, and (5) teaching implementers how to build implementation strategies can overcome resource constraints. Trial registration November 11, 2022; NCT05626231.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164110</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Breeding of microbiomes conferring salt tolerance to plants</title>
<link>https://hdl.handle.net/1721.1/164109</link>
<description>Breeding of microbiomes conferring salt tolerance to plants
Guilherme Pereira, Caio; Edwards, Joseph A.; Khasanova, Albina; Carlson, Alexis; Brisson, Vanessa; Schaefer, Estelle; Glavina del Rio, Tijana; Tringe, Susannah; Vogel, John P.; Des Marais, David L.; Juenger, Thomas E.; Mueller, Ulrich G.
Background Microbiome breeding through host-mediated selection is a technique to artificially select for microbiomes conferring beneficial properties to plants. Using a systematic selection protocol that maximises the heritability of microbiome effects, transmission fidelity, and microbiome stability through multiple selection cycles, we previously developed root-associated microbial communities conferring sodium and aluminium tolerance to Brachypodium distachyon, a model for cereal crops. Here, we explore the physiological mechanisms underlying our selected microbiomes’ effect on plant fitness and analyse how our selection protocol shaped the composition and structure of these microbiomes. We analysed the effects of our selected microbiomes on plant fitness and tissue-nutrient concentration, then used 16S rRNA amplicon sequencing to examine microbial community composition and co-occurrence network patterns. Results Our sodium-selected microbiomes reduced leaf sodium concentration by ~ 50%, whereas the aluminium-selected microbiomes had no effect on leaf-tissue nutrient concentration, suggesting different mechanisms underlying the microbiome-mediated stress tolerance. By testing the selected microbiomes in a cross-fostering experiment, we show that our artificially selected microbiomes attained (a) ecological robustness contributing to transplantability (i.e. inheritance) of microbiome-encoded effects between plants; and (b) network features identifying key bacteria promoting salt-stress tolerance. Conclusions Combined, these findings elucidate critical mechanisms underlying host-mediated artificial selection as a framework to breed microbiomes with targeted benefits for plants under salt stresses, with significant implications for sustainable agriculture.
</description>
<pubDate>Thu, 27 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164109</guid>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of ψ(2S) to J/ψ cross-section ratio as function of multiplicity in pPb collisions at √sNN = 8.16 TeV</title>
<link>https://hdl.handle.net/1721.1/164108</link>
<description>Measurement of ψ(2S) to J/ψ cross-section ratio as function of multiplicity in pPb collisions at √sNN = 8.16 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Alessio, F.; The LHCb collaboration
The production ratio of ψ(2S) to J/ψ charmonium states is presented as a function of multiplicity in proton-lead collisions at a centre-of-mass energy of √sNN = 8.16 TeV, for both prompt and nonprompt sources. The total luminosity recorded by the LHCb experiment corresponds to 13.6 nb⁻¹ for pPb collisions and 20.8 nb⁻¹ for Pbp collisions, where the first particle corresponds to the particle traveling towards the detector. Measurements are performed in the dimuon final state at forward (backward) centre-of-mass rapidity, with respect to the proton direction, 1.5 &lt; y* &lt; 4.0 (−5.0 &lt; y* &lt; −2.5) for pPb (Pbp) collisions. A multiplicity dependence of the prompt production ratio is observed in pPb collisions, whereas no dependence is found in nonprompt production, nor in either prompt or nonprompt production in Pbp collisions. These results suggest that in the Pb-going direction additional suppression mechanisms beyond comover effects may be present, possibly related to the formation of quark-gluon plasma. This highlights a transition from small to large collision systems and provides important insight into the suppression of charmonia in proton-nucleus collisions.
</description>
<pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164108</guid>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>CURENet: combining unified representations for efficient chronic disease prediction</title>
<link>https://hdl.handle.net/1721.1/164107</link>
<description>CURENet: combining unified representations for efficient chronic disease prediction
Dao, Cong-Tinh; Phan, Nguyen M. T.; Ding, Jun-En; Wu, Chenwei; Restrepo, David; Luo, Dongsheng; Zhao, Fanyi; Liao, Chun-Chieh; Peng, Wen-Chih; Wang, Chi-Te; Chen, Pei-Fu; Chen, Ling; Ju, Xinglong; Liu, Feng; Hung, Fang-Ming
Electronic health records (EHRs) are designed to synthesize diverse data types, including unstructured clinical notes, structured lab tests, and time-series visit data. Physicians draw on these multimodal and temporal sources of EHR data to form a comprehensive view of a patient’s health, which is crucial for informed therapeutic decision-making. Yet, most predictive models fail to fully capture the interactions, redundancies, and temporal patterns across multiple data modalities, often focusing on a single data type or overlooking these complexities. In this paper, we present CURENet, a multimodal model (Combining Unified Representations for Efficient chronic disease prediction) that integrates unstructured clinical notes, lab tests, and patients’ time-series data by utilizing large language models (LLMs) for clinical text processing and textual lab tests, as well as transformer encoders for longitudinal sequential visits. CURENet captures the intricate interactions between different forms of clinical data, creating a more reliable predictive model for chronic illnesses. We evaluated CURENet using the public MIMIC-III and private FEMH datasets, where it achieved over 94% accuracy in predicting the top 10 chronic conditions in a multi-label framework. Our findings highlight the potential of multimodal EHR integration to enhance clinical decision-making and improve patient outcomes.
</description>
<pubDate>Thu, 27 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164107</guid>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Effective field theory factorization for diffraction</title>
<link>https://hdl.handle.net/1721.1/164106</link>
<description>Effective field theory factorization for diffraction
Lee, Kyle; Schindler, Stella T.; Stewart, Iain W.
We derive a factorization formula for coherent and incoherent ep diffraction using the soft collinear effective theory, utilizing multiple power expansion parameters to handle different kinematic regions. This goes beyond the known hard-collinear diffractive factorization to address the small-x Regge dynamics and Pomeron exchange from first principles. The effective field theory analysis also uncovers and factorizes an important irreducible incoherent background generated by color-nonsinglet exchange, dubbed “quasi-diffraction”, for which we calculate the associated Sudakov suppression. For unpolarized scattering we show that there are four diffractive structure functions at leading power, and point out the importance of studying F_{3,4}^D through asymmetries, in addition to F_{2,L}^D. For the quasi-diffractive background, we make model-independent predictions for ratios of the corresponding structure functions in a perturbative kinematic region. Our analysis also makes predictions for six leading-power spin-dependent structure functions. Finally, we provide connections to diffractive parton distributions, and assess the Ingelman-Schlein model. Our work lays a path for further QCD-based studies of diffraction.
</description>
<pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164106</guid>
<dc:date>2025-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent photoproduction of ρ0, ω and excited vector mesons in ultraperipheral PbPb collisions</title>
<link>https://hdl.handle.net/1721.1/164105</link>
<description>Coherent photoproduction of ρ0, ω and excited vector mesons in ultraperipheral PbPb collisions
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The invariant-mass distribution for the coherent photoproduction of dipions in ultraperipheral PbPb collisions is measured using data, corresponding to an integrated luminosity of 224.6 ± 9.6 μb⁻¹, collected by the LHCb experiment in 2018 at a nucleon-nucleon centre-of-mass energy √sNN = 5.02 TeV. In the mass range from 400 to 1200 MeV, the results are consistent with previous experiments, with the spectrum dominated by the ρ0 meson, which interferes with a nonresonant component, together with a smaller ω meson contribution. In an extended mass range up to 2300 MeV, models previously used do not fit the data, and a consistent description requires the introduction of two resonances at masses of 1350 ± 20 MeV and 1790 ± 20 MeV with widths of about 300 MeV. The cross-section for each meson is measured differentially in twelve bins of rapidity from 2.05 to 4.90. The ρ0 cross-section increases with rapidity from about 400 to 600 mb and is measured with a typical precision of 8%, while the cross-section times branching fraction for the ω, ρ′ and ρ′′, with the statistical precision of the data, do not have a pronounced rapidity dependence and are between 0.5 and 1.5 mb, with uncertainties up to 30%. A large nuclear suppression is observed for the ρ0 meson compared to expectations based on photoproduction on the proton that use the impulse approximation. Significant suppression is also observed compared to that predicted by elastic scattering described in the Glauber approach, or with the addition of inelastic scattering in a Gribov-Glauber model.
</description>
<pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164105</guid>
<dc:date>2025-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Forcing with Invariant Measures</title>
<link>https://hdl.handle.net/1721.1/164104</link>
<description>Forcing with Invariant Measures
Ackerman, Nathanael; Freer, Cameron; Golshani, Mohammad; Mirabi, Mostafa; Patel, Rehana
This paper introduces a model-theoretic generalization of the notion of forcing with random reals, in which forcing gives rise to random generic structures. Specifically, we consider forcing with κ-Borel probability measures on the space of L-structures with a (possibly uncountable) infinite set X, focusing on those that are invariant under the action of the symmetric group Sym(X). We demonstrate how any Sym(X)-invariant measure where X is countable can be uniquely extended to a Sym(Y)-invariant measure where Y is uncountable, and prove that forcing with such measures satisfies the countable chain condition. We also show that we can uniformly distinguish between these random generic structures and the Cohen generic structures that arise from forcing with a strong Fraïssé class: there is a κ-Borel set of low complexity that contains every Cohen generic structure that is not highly homogeneous but contains no random generic structure, implying that a structure that is not highly homogeneous cannot be both Cohen generic and random generic. Finally, we answer an open question of Kostana in the case of ω₁, by establishing a connection between forcing with a strong Fraïssé class and Cohen forcing.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164104</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Gnotobiotic growth and phosphorus limitation of Arabidopsis thaliana and co-occurring microbes on phosphated iron oxides</title>
<link>https://hdl.handle.net/1721.1/164103</link>
<description>Gnotobiotic growth and phosphorus limitation of Arabidopsis thaliana and co-occurring microbes on phosphated iron oxides
Mackie, Amanda M.; Schuler, Christopher J.; McRose, Darcy L.
The macronutrient phosphorus is vital for sustaining cellular processes in all life forms. Due to its frequent adsorption on iron minerals, phosphorus bioavailability is low in many soils. While the abiotic adsorption of phosphate on iron minerals has been well studied, the direct effects of this process on bioavailability to plants and microbes has not been thoroughly investigated in a simplified laboratory system. We developed a hydroponic growth system that uses hydrous ferric oxide (HFO) to induce phosphorus limitation and can enable both plant and microbial cultivation as well as gnotobiotic co-culture. We demonstrate that this system can be used for phosphorus-limited growth of the model plant Arabidopsis thaliana as well as two root-associated bacterial isolates (from the genera Rhizobium and Pseudomonas). Elemental analysis of phosphorus and iron concentration in A. thaliana shoots reveals that the addition of increasing amounts of HFO leads to a progressive decrease in phosphorus concentration but does not affect iron quotas. We also report that phosphorus concentrations in both bacterial isolates decrease when cultivated in media supplemented with HFO. We further show that A. thaliana can be co-cultured with a Rhizobium isolate in our phosphorus-limited hydroponic system with bacteria relying on plant photosynthate as their sole carbon source. Our work provides a controlled demonstration of the effects of mineral adsorption on phosphorus bioavailability and a tool for further investigation of how plants and microbes access phosphorus in the environment.
</description>
<pubDate>Thu, 27 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164103</guid>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond submodular maximization via one-sided smoothness</title>
<link>https://hdl.handle.net/1721.1/164102</link>
<description>Beyond submodular maximization via one-sided smoothness
Ghadiri, Mehrdad; Santiago, Richard; Shepherd, Bruce
The multilinear framework for submodular maximization was developed to achieve a tight 1 − 1/e approximation for maximizing a monotone submodular function subject to a matroid constraint, including as a special case the submodular welfare problem. The framework has a continuous optimization step (solving the multilinear extension of a submodular function) and a rounding part (rounding a fractional solution to an integral one). We extend both parts to provide a framework for a wider array of applications. The continuous part works for a more general class of continuous functions parameterized by a new smoothness parameter σ. A twice differentiable function F is called σ-one-sided-smooth (σ-OSS) if its second derivatives are bounded as follows: (1/2)·uᵀ∇²F(x)u ≤ σ·(‖u‖₁/‖x‖₁)·uᵀ∇F(x) for all u, x ≥ 0, x ≠ 0. For σ = 0 this includes previously studied continuous DR-submodular functions as well as quadratics defined by copositive matrices. We give a modification of the continuous greedy algorithm which finds a solution for maximizing a monotone σ-OSS function F over a polytope in the non-negative orthant; the solution approximates the optimum to within factors that depend on σ and on additional properties of F. Interestingly, σ-OSS functions arise as the multilinear extensions of set functions associated with several well-studied diversity maximization problems: max f(S) = ∑_{i,j ∈ S} A_ij subject to |S| ≤ k. For instance, when A_ij defines a σ-semi-metric, its extension is σ-OSS. In these settings, we also develop rounding schemes to approximate the discrete problem.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164102</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Shotgun Metagenomics of Gastric Biopsies Reveals Compositional and Functional Microbiome Shifts in High- and Low-Gastric-Cancer-Risk Populations from Colombia, South America</title>
<link>https://hdl.handle.net/1721.1/164098</link>
<description>Shotgun Metagenomics of Gastric Biopsies Reveals Compositional and Functional Microbiome Shifts in High- and Low-Gastric-Cancer-Risk Populations from Colombia, South America
Mannion, Anthony; Sheh, Alexander; Shen, Zeli; Dzink-Fox, JoAnn; Piazuelo, M Blanca; Wilson, Keith T; Peek, Richard; Fox, James G
Along with Helicobacter pylori infection, the gastric microbiota is hypothesized to modulate stomach cancer risk in susceptible individuals. Whole metagenomic shotgun sequencing (WMS) is a sequencing approach to characterize the microbiome with advantages over traditional culture and 16S rRNA sequencing, including identification of bacterial and non-bacterial taxa, species/strain resolution, and functional characterization of the microbiota. In this study, we used WMS to survey the microbiome in extracted DNA from antral gastric biopsy samples from Colombian patients residing in the high-risk gastric cancer town of Túquerres (n = 10, H. pylori-positive = 7) and the low-risk town of Tumaco (n = 10, H. pylori-positive = 6). Kraken2/Bracken was used for taxonomic classification and abundance estimation. Functional gene profiles were inferred by InterProScan and KEGG analysis of assembled contigs and gene annotation. The most abundant taxa represented bacteria, non-human eukaryota, and viral genera found in skin, oral, food, and plant/soil environments, including Staphylococcus, Streptococcus, Bacillus, Aspergillus, and Siphoviridae. H. pylori was the predominant taxon present in H. pylori-positive samples. Beta diversity was significantly different based on H. pylori status, risk group, and sex. WMS detected more bacterial taxa than 16S rRNA sequencing and aerobic, anaerobic, and microaerobic culture performed on the same gastric biopsy samples. WMS identified significant differences in functional profiles between H. pylori status groups, but not between risk or sex groups. H. pylori-positive samples were significantly enriched for H. pylori-specific genes, including virulence factors such as vacA, cagA, and urease, while carbohydrate and amino acid metabolism genes were enriched in H. pylori-negative samples. This study shows WMS has the potential to characterize the taxonomy and function of the gastric microbiome as risk factors for H. pylori-associated gastric disease. Future studies will be needed to compare and validate WMS versus traditional culture and 16S rRNA sequencing approaches for characterization of the gastric microbiome.
</description>
<pubDate>Mon, 27 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164098</guid>
<dc:date>2023-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>Resonance Scattering Treatment with the Windowed Multipole Formalism</title>
<link>https://hdl.handle.net/1721.1/164097</link>
<description>Resonance Scattering Treatment with the Windowed Multipole Formalism
Ridley, Gavin; Forget, Benoit; Burke, Timothy
A new method for directly sampling the resonance upscattering effect is presented. Alternatives have relied on inefficient rejection sampling techniques or large tabular storage of relative velocities. None of these approaches, which require pointwise energy data, are particularly well suited to the windowed multipole cross-section representation. The new method, called multipole analytic resonance scattering, overcomes these limitations by inverse transform sampling from the target relative velocity distribution where the cross section is expressed in the multipole formalism. The closed-form relative speed distribution contains a novel special function that we term the incomplete Faddeeva function, and we present the first results on its efficient numerical evaluation.
</description>
<pubDate>Sun, 03 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164097</guid>
<dc:date>2024-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing the Structure-Based Turbulence Model Performance for Thermal Striping Applications Using Symmetric Jet Experiments</title>
<link>https://hdl.handle.net/1721.1/164096</link>
<description>Assessing the Structure-Based Turbulence Model Performance for Thermal Striping Applications Using Symmetric Jet Experiments
Pham, Monica; Petrov, Victor; Manera, Annalisa; Baglietto, Emilio
Turbulent mixing of coolant streams can result in an oscillatory mixing phenomenon called thermal striping. These fluctuations have the potential to lead to anticipated thermal fatigue failures in advanced nuclear reactors. To predict thermal striping, robust and computationally affordable modeling tools that are capable of accurately representing complex turbulence are needed. Hybrid turbulence approaches, such as detached-eddy simulation and scale-adaptive simulation, have shown some success in resolving complex unsteady turbulence for massively separated flows; however, the applicability of these models to internal flows is limited. A STRUCTure-based (STRUCT) second-generation Unsteady Reynolds-Averaged Navier–Stokes turbulence model was recently proposed at the Massachusetts Institute of Technology to robustly extend the applicability of hybrid closures. In this work, the STRUCT model is evaluated using experimental data taken at the Reactor Cavity Cooling System separate-effects test facility at the University of Michigan. The experiments observed the interaction of parallel symmetric rectangular jets and include measurements of mean profiles of velocity and Reynolds stresses. The simulation results are assessed against these mean profiles, demonstrating the ability to reproduce the unsteadiness of the jets in close agreement with the measurements at considerably reduced computational cost.
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164096</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Deep-learning models for forecasting financial risk premia and their interpretations</title>
<link>https://hdl.handle.net/1721.1/164095</link>
<description>Deep-learning models for forecasting financial risk premia and their interpretations
Lo, Andrew W; Singh, Manish
The measurement of financial risk premia, the amount by which a risky asset is expected to outperform a risk-free one, is an important problem in asset pricing. The noisiness and non-stationarity of asset returns make the estimation of risk premia using machine learning (ML) techniques challenging. In this work, we develop ML models that address the challenges of risk premia forecasting by separating the prediction into two independent tasks, handled by a time-series model and a cross-sectional model, and by using neural networks with skip connections to enable deep network training. These models are tested robustly with different metrics, and we observe that our models outperform several existing standard ML models. A known issue with ML models is their ‘black box’ nature, i.e., their lack of interpretability. We interpret these deep neural networks using local approximation-based techniques that provide explanations for our models’ predictions.
</description>
<pubDate>Fri, 12 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164095</guid>
<dc:date>2023-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Distinguishes Plant Bioelectric Recordings with and Without Nearby Human Movement</title>
<link>https://hdl.handle.net/1721.1/164094</link>
<description>Machine Learning Distinguishes Plant Bioelectric Recordings with and Without Nearby Human Movement
Gloor, Peter A.; Weinbeer, Moritz
Background: Quantitatively detecting whether plants exhibit measurable bioelectric differences in the presence of nearby human movement remains challenging, in part because plant signals are low-amplitude, slow, and easily confounded by environmental factors. Methods: We recorded bioelectric activity from 2978 plant samples across three species (basil, salad, tomato) using differential electrode pairs (leaf and soil electrodes) sampling at 142 Hz. Two trained performers executed three specific eurythmic gestures near experimental plants while control plants remained isolated. Random Forest and Convolutional Neural Network classifiers were applied to distinguish control from treatment conditions using engineered features including spectral, temporal, wavelet, and frequency-domain characteristics. Results: Random Forest classification achieved 62.7% accuracy (AUC = 0.67) in distinguishing recordings collected near a moving human from control recordings, a statistically significant 12.7-percentage-point improvement over chance. Individual performer signatures were detectable with 68.2% accuracy, while plant species classification achieved only 44.5% accuracy, indicating minimal species-specific artifacts. Temporal analysis revealed that plants with repeated exposure exhibited consistently less negative bioelectric amplitudes compared to single-exposure plants. Innovation: We introduce a data-driven approach that pairs standardized, short-window bioelectric recordings with machine-learning classifiers (Random Forest, CNN) to test, in an exploratory manner, whether plant signals differ between human-moving-nearby and isolation conditions. Conclusions: Plants exhibit modest but statistically detectable bioelectric differences in the presence of nearby human movement.
Rather than attributing these differences to eurythmic movement itself, the present design can only demonstrate that plant recordings collected within ~1 m of a moving human differ, modestly but statistically, from recordings taken ≥3 m away. The underlying biophysical pathways and specific contributing factors (airflow, VOCs, thermal plumes, vibration, electromagnetic fields) remain unknown. These results should therefore be interpreted as exploratory correlations, not mechanistic evidence of gesture-specific plant sensing.
</description>
<pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164094</guid>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Regional Surface CO2 Fluxes Using the MEGA Satellite Data Assimilation System</title>
<link>https://hdl.handle.net/1721.1/164093</link>
<description>Analysis of Regional Surface CO2 Fluxes Using the MEGA Satellite Data Assimilation System
Hu, Liting; Hu, Xiaoyi; Jiang, Fei; He, Wei; Deng, Zhu; Fang, Shuangxi; Fang, Xuekun
Understanding the dynamics of terrestrial carbon sources and sinks is crucial for addressing climate change, yet significant uncertainties remain at regional scales. We developed the Monitoring and Evaluation of Greenhouse gAs Flux (MEGA) inversion system with satellite data assimilation and applied it to China using OCO-2 V11.1r XCO2 retrievals. Our results show that China’s terrestrial ecosystems acted as a carbon sink of 0.28 ± 0.15 PgC yr−1 during 2018–2023, consistent with other inversion estimates. Validation against surface CO2 flask measurements demonstrated significant improvement, with RMSE and MAE reduced by 30%–46% and 24%–44%, respectively. Six sets of prior sensitivity experiments conclusively demonstrated the robustness of MEGA. In addition, this study is the first to systematically compare model-derived and observation-based background fields in satellite data assimilation. Ten sets of background sensitivity experiments revealed that model-based background fields exhibit superior capability in resolving seasonal flux dynamics, though their performance remains contingent on three key factors: (1) initial fields, (2) flux fields, and (3) flux masks (used to control regional flux switches). These findings highlight the potential for further refinement of the atmospheric inversion system.
</description>
<pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164093</guid>
<dc:date>2025-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>ZeoSyn: A Comprehensive Zeolite Synthesis Dataset Enabling Machine-Learning Rationalization of Hydrothermal Parameters</title>
<link>https://hdl.handle.net/1721.1/164092</link>
<description>ZeoSyn: A Comprehensive Zeolite Synthesis Dataset Enabling Machine-Learning Rationalization of Hydrothermal Parameters
Pan, Elton; Kwon, Soonhyoung; Jensen, Zach; Xie, Mingrou; Gómez-Bombarelli, Rafael; Moliner, Manuel; Román-Leshkov, Yuriy; Olivetti, Elsa
Zeolites, nanoporous aluminosilicates with well-defined porous structures, are versatile materials with applications in catalysis, gas separation, and ion exchange. Hydrothermal synthesis is widely used for zeolite production, offering control over composition, crystallinity, and pore size. However, the intricate interplay of synthesis parameters necessitates a comprehensive understanding of synthesis-structure relationships to optimize the synthesis process. Hitherto, public zeolite synthesis databases only contain a subset of parameters and are small in scale, comprising up to a few thousand synthesis routes. We present ZeoSyn, a dataset of 23,961 zeolite hydrothermal synthesis routes, encompassing 233 zeolite topologies and 921 organic structure-directing agents (OSDAs). Each synthesis route comprises comprehensive synthesis parameters: 1) gel composition, 2) reaction conditions, 3) OSDAs, and 4) zeolite products. Using ZeoSyn, we develop a machine learning classifier to predict the resultant zeolite given a synthesis route with &gt;70% accuracy. We employ SHapley Additive exPlanations (SHAP) to uncover key synthesis parameters for &gt;200 zeolite frameworks. We introduce an aggregation approach to extend SHAP to all building units. We demonstrate applications of this approach to phase-selective and intergrowth synthesis. This comprehensive analysis illuminates the synthesis parameters pivotal in driving zeolite crystallization, offering the potential to guide the synthesis of desired zeolites. The dataset is available at https://github.com/eltonpan/zeosyn_dataset.
</description>
<pubDate>Wed, 06 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164092</guid>
<dc:date>2024-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>One-Pot Synthesis of CHA/ERI-Type Zeolite Intergrowth from a Single Multiselective Organic Structure-Directing Agent</title>
<link>https://hdl.handle.net/1721.1/164091</link>
<description>One-Pot Synthesis of CHA/ERI-Type Zeolite Intergrowth from a Single Multiselective Organic Structure-Directing Agent
Kwon, Soonhyoung; Bello-Jurado, Estefanía; Ikonnikova, Evgeniia; Lee, Hwajun; Schwalbe-Koda, Daniel; Corma, Avelino; Willhammar, Tom; Olivetti, Elsa A; Gomez-Bombarelli, Rafael; Moliner, Manuel; Román-Leshkov, Yuriy
We report the one-pot synthesis of a chabazite (CHA)/erionite (ERI)-type zeolite intergrowth structure characterized by adjustable extents of intergrowth enrichment and Si/Al molar ratios. This method utilizes readily synthesizable 6-azaspiro[5.6]dodecan-6-ium as the exclusive organic structure-directing agent (OSDA) within a potassium-dominant environment. High-throughput simulations were used to accurately determine the templating energy and molecular shape, facilitating the selection of an optimally biselective OSDA from among thousands of prospective candidates. The coexistence of the crystal phases, forming a distinct structure comprising disk-like CHA regions bridged by ERI-rich pillars, was corroborated via rigorous powder X-ray diffraction and integrated differential-phase contrast scanning transmission electron microscopy (iDPC S/TEM) analyses. iDPC S/TEM imaging further revealed the presence of single offretite layers dispersed within the ERI phase. The ratio of crystal phases between CHA and ERI in this type of intergrowth could be varied systematically by changing both the OSDA/Si and K/Si ratios. Two intergrown zeolite samples with different Si/Al molar ratios were tested for the selective catalytic reduction (SCR) of NO&lt;sub&gt;&lt;i&gt;x&lt;/i&gt;&lt;/sub&gt; with NH&lt;sub&gt;3&lt;/sub&gt;, showing competitive catalytic performance and hydrothermal stability compared to that of the industry-standard commercial NH&lt;sub&gt;3&lt;/sub&gt;-SCR catalyst, Cu-SSZ-13, prevalent in automotive applications. Collectively, this work underscores the potential of our approach for the synthesis and optimization of adjustable intergrown zeolite structures, offering competitive alternatives for key industrial processes.
</description>
<pubDate>Wed, 13 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164091</guid>
<dc:date>2024-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Recommendations for improving rigor and reproducibility in site specific characterization</title>
<link>https://hdl.handle.net/1721.1/164090</link>
<description>Recommendations for improving rigor and reproducibility in site specific characterization
Wrasman, Cody J; Bell, Alexis T; Chandler, Bert D; Harris, James W; Kwon, Stephanie; Ball, Madelyn R; Krishna, Siddarth H; Khatib, Sheima J; Bollini, Praveen; Román-Leshkov, Yuriy; “Bean” Getsoian, Andrew; Weber, Robert S; Lercher, Johannes A; Liu, Dongxia; Resasco, Daniel E; Bates, Jason S; Hall, Jacklyn N; Lebrón-Rodríguez, Edgard A; Paz Herrera, Laura; Notestein, Justin M; Schaidle, Joshua A
Heterogeneous catalysis is driven by the interaction of reactant molecules and the catalyst surface. The locus of this interaction as well as the surrounding ensemble of atoms is referred to as the catalyst active site. Active site characterization attempts to distinguish active catalytic sites from inactive surface sites, to elucidate the structural and chemical nature of active sites, and to quantify active site concentration. Numerous techniques have been demonstrated to provide compositional and structural information about the active sites within a catalyst. However, each technique has its own limitations and experimental pitfalls that can lead to data misinterpretation or irreproducible results. This work aims to provide an overview of the types of data that can be collected, to outline common experimental challenges and how to avoid them, and to assemble relevant references for the most commonly used active site characterization techniques. More broadly, we aim to outline best practices for researchers to collect, interpret, and report active site characterization data in a way that provides the most benefit to the broader catalysis community. Increasing the rigor and reproducibility of active site characterization offers a strategy to better link properties with catalytic performance and to enable the community to develop consensus concerning these relationships.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164090</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Validation of a High-Throughput Reductive Catalytic Fractionation Method</title>
<link>https://hdl.handle.net/1721.1/164089</link>
<description>Design and Validation of a High-Throughput Reductive Catalytic Fractionation Method
Kenny, Jacob K; Neefe, Sasha R; Brandner, David G; Stone, Michael L; Happs, Renee M; Kumaniaev, Ivan; Mounfield, William P; Harman-Ware, Anne E; Devos, Katrien M; Pendergast, Thomas H; Medlin, J Will; Román-Leshkov, Yuriy; Beckham, Gregg T
Reductive catalytic fractionation (RCF) is a promising method to extract and depolymerize lignin from biomass, and bench-scale studies have enabled considerable progress in the past decade. RCF experiments are typically conducted in pressurized batch reactors with volumes ranging between 50 and 1000 mL, limiting the throughput of these experiments to one to six reactions per day for an individual researcher. Here, we report a high-throughput RCF (HTP-RCF) method in which batch RCF reactions are conducted in 1 mL wells machined directly into Hastelloy reactor plates. The plate reactors can seal high pressures produced by organic solvents by vertically stacking multiple reactor plates, leading to a compact and modular system capable of performing 240 reactions per experiment. Using this setup, we screened solvent mixtures and catalyst loadings for hydrogen-free RCF using 50 mg poplar and 0.5 mL reaction solvent. The system of 1:1 isopropanol/methanol showed optimal monomer yields and selectivity to 4-propyl substituted monomers, and validation reactions using 75 mL batch reactors produced identical monomer yields. To accommodate the low material loadings, we then developed a workup procedure for parallel filtration, washing, and drying of samples and a &lt;sup&gt;1&lt;/sup&gt;H nuclear magnetic resonance spectroscopy method to measure the RCF oil yield without performing liquid-liquid extraction. As a demonstration of this experimental pipeline, 50 unique switchgrass samples were screened in RCF reactions in the HTP-RCF system, revealing a wide range of monomer yields (21-36%), S/G ratios (0.41-0.93), and oil yields (40-75%). These results were successfully validated by repeating RCF reactions in 75 mL batch reactors for a subset of samples. We anticipate that this approach can be used to rapidly screen substrates, catalysts, and reaction conditions in high-pressure batch reactions with higher throughput than standard batch reactors.
</description>
<pubDate>Wed, 05 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164089</guid>
<dc:date>2024-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Electrifying Hydroformylation Catalysts Exposes Voltage-Driven C–C Bond Formation</title>
<link>https://hdl.handle.net/1721.1/164088</link>
<description>Electrifying Hydroformylation Catalysts Exposes Voltage-Driven C–C Bond Formation
Zeng, Joy S; Cosner, Emma L; Delgado-Kukuczka, Spencer P; Jiang, Chenyu; Adams, Jason S; Román-Leshkov, Yuriy; Manthiram, Karthish
Electrochemical reactions can access a significant range of driving forces under operationally mild conditions and are thus envisioned to play a key role in decarbonizing chemical manufacturing. However, many reactions with well-established thermochemical precedents remain difficult to achieve electrochemically. For example, hydroformylation (thermo-HFN) is an industrially important reaction that couples olefins and carbon monoxide (CO) to make aldehydes. However, the electrochemical analogue of hydroformylation (electro-HFN), which uses protons and electrons instead of hydrogen gas, represents a complex C-C bond-forming reaction that is difficult to achieve at heterogeneous electrocatalysts. In this work, we import Rh-based thermo-HFN catalysts onto electrode surfaces to unlock electro-HFN reactivity. At mild conditions of room temperature and 5 bar CO, we achieve Faradaic efficiencies of up to 15% and turnover frequencies of up to 0.7 h&lt;sup&gt;-1&lt;/sup&gt;. This electro-HFN rate is an order of magnitude greater than the corresponding thermo-HFN rate at the same catalyst, temperature, and pressure. Reaction kinetics and &lt;i&gt;operando&lt;/i&gt; X-ray absorption spectroscopy provide evidence for an electro-HFN mechanism that involves distinct elementary steps relative to thermo-HFN. This work demonstrates a step-by-step experimental strategy for electrifying a well-studied thermochemical reaction to unveil a new electrocatalyst for a complex and underexplored electrochemical reaction.
</description>
<pubDate>Wed, 19 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164088</guid>
<dc:date>2024-06-19T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Lignin Valorization Through Integrated Advances in Plant Biology and Biorefining</title>
<link>https://hdl.handle.net/1721.1/164087</link>
<description>Enabling Lignin Valorization Through Integrated Advances in Plant Biology and Biorefining
Dixon, Richard A; Puente-Urbina, Allen; Beckham, Gregg T; Román-Leshkov, Yuriy
Despite lignin having long been viewed as an impediment to the processing of biomass for the production of paper, biofuels, and high-value chemicals, the valorization of lignin to fuels, chemicals, and materials is now clearly recognized as a critical element for the lignocellulosic bioeconomy. However, the intended application for lignin will likely require a preferred lignin composition and form. To that end, effective lignin valorization will require the integration of plant biology, providing optimal feedstocks, with chemical process engineering, providing efficient lignin transformations. Recent advances in our understanding of lignin biosynthesis have shown that lignin structure is extremely diverse and potentially tunable, while simultaneous developments in lignin refining have resulted in the development of several processes that are more agnostic to lignin composition. Here, we review the interface between in planta lignin design and lignin processing and discuss the advances necessary for lignin valorization to become a feature of advanced biorefining.
</description>
<pubDate>Mon, 22 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164087</guid>
<dc:date>2024-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Solvent Consumption in Reductive Catalytic Fractionation through Lignin Oil Recycling</title>
<link>https://hdl.handle.net/1721.1/164086</link>
<description>Reducing Solvent Consumption in Reductive Catalytic Fractionation through Lignin Oil Recycling
Jang, Jun Hee; Callejón Álvarez, Júlia; Neuendorf, Quinn S; Román-Leshkov, Yuriy; Beckham, Gregg T
Reductive catalytic fractionation (RCF) enables the simultaneous valorization of lignin and carbohydrates in lignocellulosic biomass through solvent-based lignin extraction, followed by depolymerization and catalytic stabilization of the extracted lignin. Process modeling has shown that the use of exogenous organic solvent in RCF is a challenge for economic and environmental feasibility, and previous works proposed that lignin oil, a mixture of lignin-derived monomers and oligomers produced by RCF, can be used as a cosolvent in RCF. Here, we further explore the potential of RCF solvent recycling with lignin oil, extending the feasible lignin oil concentration in the solvent to 100 wt %, relative to the previously demonstrated 0-19 wt % range. Solvents containing up to 80 wt % lignin oil exhibited 83-93% delignification, comparable to 83% delignification with a methanol-water mixture, and notably, using lignin oil solely as a solvent achieved 67% delignification in the absence of water. In additional experiments, applying the RCF solvent recycling approach to ten consecutive RCF reactions resulted in a final lignin oil concentration of 11 wt %, without detrimental impacts on lignin extraction, lignin oil molar mass distribution, aromatic monomer selectivity, and cellulose retention. Overall, this work further demonstrates the potential for using lignin oil as an effective cosolvent in RCF, which can reduce the burden on downstream solvent recovery.
</description>
<pubDate>Wed, 14 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164086</guid>
<dc:date>2024-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Career in Catalysis: Mark E. Davis</title>
<link>https://hdl.handle.net/1721.1/164085</link>
<description>A Career in Catalysis: Mark E. Davis
Arhancet, Juan P; Chen, Cong-Yan; Cybulskis, Viktor J; Gounder, Rajamani; Hong, Suk Bong; Jones, Christopher W; Kang, Jong Hun; Kubota, Yoshihiro; Lee, Hyunjoo; Orazov, Marat; Román-Leshkov, Yuriy; Schmidt, Joel E
Mark E. Davis led an independent research program from 1981 to 2023, beginning at the Virginia Polytechnic Institute and State University (VPI) and then transitioning to the California Institute of Technology (Caltech). His research program was marked by exceptional creativity, breadth, and depth. With classical training in reaction engineering, Davis developed expertise in experimental heterogeneous catalysis and led work in this discipline for more than 40 years. His name is synonymous with zeolites, and today, he is one of the most widely recognized experts in zeolite synthesis, characterization, and catalysis in the world. Early work at VPI focused on zeolites and catalysis with supported metal coordination complexes. His creativity was evident at the earliest stages of his career, with the development of supported aqueous phase catalysts and the world’s first crystalline, extra-large pore molecular sieve, both reported in the late 1980s. A move to Caltech saw a significant expansion of his zeolite synthesis program and the rapid acceleration of a multidecade collaboration with Dr. Stacey I. Zones of Chevron. At Caltech, his work expanded to include studies of molecular recognition and catalysis with organic/inorganic hybrid materials, and he developed a large, parallel program in drug delivery. His work on catalysis heavily emphasized zeolite catalysis, including major thrusts on the conversion of sugars in the liquid phase and methanol in the gas phase. Numerous new zeolites and molecular sieves were discovered throughout the four decades of the Davis laboratory, highlighted by a successful, multidecade quest to prepare a chiral zeolite with enantioselective catalytic properties. Davis is one of the most decorated researchers of the last four decades. He is one of only 21 living people currently elected to all of the US National Academies (Engineering, Science, Medicine) and is an elected Fellow of the National Academy of Inventors.
He was the first engineer to win the NSF’s Alan T. Waterman Award and is one of only two researchers (to date) to win the International Zeolite Association’s Donald Breck Award twice (1989, 2019). Awards from the ACS (Ipatieff, Murphree, and Somorjai Awards), AIChE (Colburn, Professional Progress Awards), and North American Catalysis Society (Emmett Award) are among his accolades.
</description>
<pubDate>Fri, 23 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164085</guid>
<dc:date>2024-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>Plant Bioelectrical Signals for Environmental and Emotional State Classification</title>
<link>https://hdl.handle.net/1721.1/164084</link>
<description>Plant Bioelectrical Signals for Environmental and Emotional State Classification
Gloor, Peter A.
In this study, we present a pilot investigation using a single Purple Heart plant (Tradescantia pallida) to explore whether bioelectrical signals can support dual-purpose classification tasks: environmental state detection and human emotion recognition. Using an AD8232 ECG sensor at a 400 Hz sampling rate, we recorded 3 s bioelectrical signal segments with 1 s overlap, converting them to mel-spectrograms for ResNet18 CNN (Convolutional Neural Network) classification. For lamp on/off detection, we achieved 85.4% accuracy with balanced precision (0.85–0.86) and recall (0.84–0.86) metrics across 2767 spectrogram samples. For human emotion classification, our system achieved optimal performance at 73% accuracy with 1 s lag, distinguishing between happy and sad emotional states across 1619 samples. These results should be viewed as preliminary and exploratory, demonstrating feasibility rather than definitive evidence of plant-based emotion sensing. Replication across plants, days, and experimental sites will be essential to establish robustness. The current study is limited by a single-plant setup, modest sample size, and reliance on human face-tracking labels, which together preclude strong claims about generalizability.
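As a minimal sketch of the windowing arithmetic above (3 s segments, 1 s overlap, 400 Hz sampling), with illustrative names rather than the study's actual code:

```python
# Windowing sketch for the segmentation described above: 3 s windows
# with 1 s overlap at 400 Hz means 1200-sample windows advanced by an
# 800-sample hop. Function and variable names are illustrative.

FS = 400            # sampling rate (Hz)
WIN = 3 * FS        # 1200 samples per 3 s segment
HOP = WIN - FS      # 800-sample hop, leaving a 1 s overlap

def segment(signal):
    """Split a 1-D signal into overlapping WIN-sample windows."""
    out = []
    start = 0
    while len(signal) >= start + WIN:
        out.append(signal[start:start + WIN])
        start += HOP
    return out

# One minute of signal yields 29 overlapping 3 s segments.
segs = segment([0.0] * (60 * FS))
```

Each window would then be converted to a mel-spectrogram image for the CNN; that step is omitted here.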
</description>
<pubDate>Wed, 05 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164084</guid>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Within-Subtype HIV-1 Polymorphisms and Their Impacts on Intact Proviral DNA Assay (IPDA) for Viral Reservoir Quantification</title>
<link>https://hdl.handle.net/1721.1/164083</link>
<description>Within-Subtype HIV-1 Polymorphisms and Their Impacts on Intact Proviral DNA Assay (IPDA) for Viral Reservoir Quantification
Arikatla, Mohith Reddy; Mathad, Jyoti S.; Reddy, Kavidha; Reddy, Nicole; Ndung’u, Thumbi; Dupnik, Kathryn M.; Lee, Guinevere Q.
The Intact Proviral DNA Assay (IPDA) is widely used to quantify genome-intact HIV proviruses in people living with HIV, but viral sequence diversity has been observed to cause assay failures due to primer/probe mismatches. Adapted for subtype C, IPDA-BC is a modified version of the IPDA validated on South African HIV-1 subtype C. India is also impacted by subtype C, but IPDA performance within-subtype across geographical regions is not well studied. We analyzed Indian (IN) and South African (ZA) subtype C sequences in silico, hypothesizing that IPDA-BC may underperform with IN viruses. Primer/probe binding was predicted using three increasingly stringent nucleotide mismatch criteria, whose sensitivity and specificity were evaluated against experimental IPDA outcomes. Phylogenetic analyses confirmed that IN and ZA subtype C sequences form distinct clusters with significant compartmentalization (p &lt; 0.003). Across criteria, up to 6–10% decreases in primer/probe binding were observed in IN versus ZA, with the env forward primer being the most affected. These criteria showed low sensitivity (18–53%) and variable specificity (67–100%) in predicting experimental outcomes. In conclusion, even within subtype, HIV-1 variation across geographical regions may impact IPDA performance, underscoring the need for improved predictive models to guide assay design for global HIV cure research.
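The mismatch criteria can be illustrated with a toy binding predictor; the threshold logic below is a sketch under assumed names and sequences, not the paper's actual criteria:

```python
# Toy mismatch-based binding criterion in the spirit of the in-silico
# analysis above: a primer is predicted to bind when its position-wise
# nucleotide mismatches against the target do not exceed a threshold.
# Sequences and function names here are invented for illustration.

def mismatches(primer, target):
    """Count position-wise mismatches (equal-length sequences assumed)."""
    return sum(1 for p, t in zip(primer, target) if p != t)

def predicted_to_bind(primer, target, max_mismatches):
    """Stricter criteria use a smaller max_mismatches."""
    return max_mismatches >= mismatches(primer, target)

# One terminal mismatch passes a 1-mismatch criterion but fails a
# perfect-match criterion:
assert predicted_to_bind("ACGTACGT", "ACGTACGA", 1)
assert not predicted_to_bind("ACGTACGT", "ACGTACGA", 0)
```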
</description>
<pubDate>Fri, 31 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164083</guid>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>An Investigation of the Modulating Effects of Sensory Stimulation and Transcranial Magnetic Stimulation on Memory-Related Brain Activity</title>
<link>https://hdl.handle.net/1721.1/164082</link>
<description>An Investigation of the Modulating Effects of Sensory Stimulation and Transcranial Magnetic Stimulation on Memory-Related Brain Activity
Nikolin, Stevan; Wang, Matthew; Moffa, Adriano; Huang, Haijing; Xu, Mei; Pande, Siddhartha Raj; Martin, Donel
Background/Objectives: As the global population ages, the prevalence of disorders associated with memory dysfunction (e.g., Alzheimer’s disease) continues to increase. There is a need for novel interventions that can enhance memory and support affected individuals. Non-invasive brain stimulation provides a promising approach to engage circuits within the hippocampal network, a group of brain regions critical for episodic memory, and thereby improve cognition. Methods: Twenty healthy participants completed a single-blind, within-subject crossover study over four sessions. In each session, they received one of four interventions whilst viewing pictures of real-world objects: 40 Hz synchronised audiovisual stimulation (AVS), theta burst stimulation (TBS), a combination of synchronised 5 Hz repetitive transcranial magnetic stimulation with AVS (rTMS + AVS), or sham rTMS. Electroencephalography (EEG) was recorded to measure associated brain activity changes. Following each intervention, participants completed a recognition memory task. Results: Mixed-effect repeated measure models (MRMMs) revealed no significant differences in recognition memory performance or theta (5 Hz) activity across conditions. However, both TBS and rTMS + AVS significantly increased gamma (40 Hz) activity compared to sham rTMS, and TBS induced a widespread increase in theta-gamma phase-amplitude coupling during picture viewing. Conclusions: While the neuromodulatory interventions did not enhance memory performance, the observed increase in gamma activity, particularly following rTMS-based stimulation, suggests potential engagement of neural processes associated with memory. These findings warrant further investigation into the role of gamma oscillations in memory and cognitive enhancement.
</description>
<pubDate>Fri, 31 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164082</guid>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Shaping In-Vehicle Behaviours through Activity-Centered Design</title>
<link>https://hdl.handle.net/1721.1/164081</link>
<description>Shaping In-Vehicle Behaviours through Activity-Centered Design
Patel, Ankit; Gershon, Pnina; Habibovic, Azra; Novakazi, Fjollë; Akahoshi, Sakura; Alsaid, Areen; Cha, Kyungjoo
In today’s fast-paced society, most individuals commute either by personal vehicle or public transportation. User preferences and requirements are crucial, with design playing a significant role. Design should be inclusive and assimilative; its purpose is to propel innovation and progress while improving the quality of life of the user. For this reason, vehicle development, and cabin (cockpit) design in particular, has generally followed a user-centered design approach. When user activities are prioritized instead, it is interesting to explore how users’ experience and behavior vary under different design approaches. Nevertheless, the existing literature has largely overlooked the impact of design approaches on “human activity”. Therefore, the main objective of this workshop is to examine the relationship between activity-centered design and user behavior.
AutomotiveUI Adjunct ’25, Brisbane, QLD, Australia
</description>
<pubDate>Wed, 08 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164081</guid>
<dc:date>2025-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Scale, Engage, or Both?: Potential and Perils of Applying Large Language Models in Interview and Conversation-Based Research</title>
<link>https://hdl.handle.net/1721.1/164080</link>
<description>Scale, Engage, or Both?: Potential and Perils of Applying Large Language Models in Interview and Conversation-Based Research
Hwang, Angel Hsing-Chi; Aubin Le Quéré, Marianne; Schroeder, Hope; Cuevas, Alejandro; Dow, Steven; Kapania, Shivani; Rho, Eugenia
An increasing number of studies apply tools powered by large language models (LLMs) to interview and conversation-based research, one of the most commonly used research methods in CSCW. This panel invites the CSCW community to critically debate the role of LLMs in reshaping interview-based methods. We aim to explore how these tools might (1) address persistent challenges in conversation-based research, such as limited scalability and participant engagement, (2) introduce novel methodological possibilities, and (3) surface additional practical, technical, and ethical concerns. The panel discussion will be grounded on the panelists’ prior experience applying LLMs to their own interview and conversation-based research. We ask whether LLMs offer unique advantages to enhance interview research, beyond automating certain aspects of the research process. Through this discussion, we encourage researchers to reflect on how applying LLM tools may require rethinking research design, conversational protocols, and ethical practices.
CSCW Companion ’25, Bergen, Norway
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164080</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Yeast Display Reveals Plentiful Mutations That Improve Fusion Peptide Vaccine-Elicited Antibodies Beyond 59% HIV-1 Neutralization Breadth</title>
<link>https://hdl.handle.net/1721.1/164079</link>
<description>Yeast Display Reveals Plentiful Mutations That Improve Fusion Peptide Vaccine-Elicited Antibodies Beyond 59% HIV-1 Neutralization Breadth
França, Camila T; Pletnev, Sergei; Madan, Bharat; Katsamba, Phinikoula S; McKee, Krisha; Morano, Nicholas C; Zhang, Baoshan; Bahna, Fabiana; Bylund, Tatsiana; Lin, Bob C; Louder, Mark K; Mannepalli, Seetha; Nimrania, Rajani; O’Dell, Sijy; Doria-Rose, Nicole A; Kwong, Peter D; Shapiro, Lawrence; Sheng, Zizhang; Zhou, Tongqing; DeKosky, Brandon J
Background/Objectives: Vaccine elicitation of antibodies with high HIV-1 neutralization breadth is a long-standing goal. Recently, the induction of such antibodies has been achieved at the fusion peptide site of vulnerability. Questions remain, however, as to how much anti-fusion peptide antibodies can be improved and whether their neutralization breadth and potency are sufficient to prevent HIV-1 infection. Methods: Here, we use yeast display coupled with deep mutational screening and biochemical and structural analyses to study the improvement of the best fusion peptide-directed, vaccine-elicited antibody, DFPH_a.01, with an initial 59% breadth. Results: Yeast display identified both single and double mutations that improved recognition of HIV-1 envelope trimers. We characterized two paratope-distal light chain (LC) mutations, S10R and S59P, which together increased breadth to 63%. Biochemical analysis demonstrated DFPH-a.01_10R59P-LC, and its component mutations, to have increased affinity and stability. Cryo-EM structural analysis revealed elbow-angle influencing by S10R-LC and isosteric positioning by S59P-LC as explanations for enhanced breadth, affinity, and stability. Conclusions: These results, along with another antibody with enhanced performance (DFPH-a.01_1G10A56K-LC with 64% breadth), suggest that mutations improving DFPH_a.01 are plentiful, an important vaccine insight.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164079</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Probabilistic Perspective on Tiling Sparse Tensor Algebra</title>
<link>https://hdl.handle.net/1721.1/164078</link>
<description>A Probabilistic Perspective on Tiling Sparse Tensor Algebra
Sharma, Ritvik; Xue, Zi Yu; Zhang, Nathan; Lacouture, Rubens; Kjolstad, Fredrik; Achour, Sara; Horowitz, Mark
Sparse tensor algebra computations are often memory-bound due to irregular access patterns and low arithmetic intensity. We present D2T2 (Data-Driven Tensor Tiling), a framework that optimizes static coordinate-space tiling schemes to minimize memory traffic by identifying and leveraging relevant high-level statistics from input operands. For a given tensor algebra computation, D2T2 collects statistics from input tensors, builds a probability distribution-based model of the tensor computation, and uses it to predict traffic for various tiling configurations. It searches over tile shape and size configurations to minimize total traffic. We evaluate D2T2 against Tailors and DRT, two state-of-the-art tiling schemes for sparse tensor algebra. We find that D2T2 achieves, on average, a 2.54× speedup over Tailors and 1.13× lower memory bandwidth compared to DRT for sparse-sparse matrix multiplication (SpMSpM). We also achieve 1.22–48.94× lower bandwidth for SpMSpM and up to 34.31× lower bandwidth for tensor operations (TTM and MTTKRP) than conservative static tiling schemes. Unlike prior tiling techniques, D2T2 is deployable without specialized hardware support. On Opal, a 16 nm sparse tensor algebra accelerator, D2T2-generated tiling configurations achieve 1.23–3.34× speedups compared to their original hand-tuned configurations.
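The predict-then-minimize search that D2T2 performs can be caricatured as follows; the cost model here is an invented stand-in for the paper's probabilistic model, kept only to show the overall structure:

```python
# Caricature of the statistics-driven search described above: score
# candidate tile sizes with a traffic model built from a simple operand
# statistic (density), then pick the minimizer. This cost model is an
# invented illustration; D2T2's probabilistic model is far richer.

def predicted_traffic(n, tile, density, line=8):
    """Crude model: every tile is fetched once, and tiles whose rows
    hold few expected nonzeros waste more of each fetched line."""
    tiles = -(-n // tile)                 # ceil(n / tile)
    useful = max(1.0, density * tile)     # expected nonzeros per tile row
    waste = line / min(line, useful)      # fetch amplification factor
    return tiles * tile * waste

def best_tile(n, density, candidates):
    """Return the candidate tile size with the lowest predicted traffic."""
    return min(candidates, key=lambda t: predicted_traffic(n, t, density))

# For a very sparse operand, larger tiles amortize wasted fetches:
choice = best_tile(4096, 0.01, [64, 256, 1024])
```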
MICRO ’25, Seoul, Republic of Korea
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164078</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>HapticHearing: A Haptic Feedback System for Complementing Auditory Speech Perception for Mild-to-Moderate Hearing Loss</title>
<link>https://hdl.handle.net/1721.1/164077</link>
<description>HapticHearing: A Haptic Feedback System for Complementing Auditory Speech Perception for Mild-to-Moderate Hearing Loss
Chin, Sam; Fitz-Gibbon, Emmie; Huang, Bingjian; Paradiso, Joseph
Age-related hearing loss is often caused by cochlear hair cell degradation. This creates a challenge for hearing aids, which rely on sound amplification. Once hearing ability in a specific frequency is lost, amplification alone provides little benefit. Previous haptic systems have tried to solve this with complete sensory substitution, converting audio signals like phonemes to tactile patterns. However, these systems require a significant amount of time to learn and induce high cognitive load during haptic perception. Our system, HapticHearing, takes an alternative approach: leveraging a user’s residual hearing and complementing it with tactile feedback. We present a custom multi-actuator haptic device designed to translate phonemic information from speech into tactile patterns that are customized to a user’s hearing loss and speech perception abilities. The system consists of a microphone for speech capture, four-band energy envelope extraction with vowel embedding, a custom USB-to-haptic driver PCB, and wearable devices containing eight vibrotactile actuators that deliver personalized tactile feedback based on the user’s audiogram. Psychophysical validation (n=9) showed that neck-worn devices achieved better spatial localization (67% vs 53%), while bracelet and necklace devices had lower detection thresholds than over-ear devices (thresholds 0.09 vs 0.18).
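The audiogram-weighted mapping from band energies to actuator intensities might look like the following sketch; the weighting scheme and all names are assumptions for illustration, not the paper's design:

```python
# Sketch of a band-energy-to-actuator mapping in the spirit of the
# system above: the energy envelope of each of four frequency bands
# drives one vibrotactile intensity, weighted by the user's per-band
# hearing loss so that weaker bands get stronger tactile emphasis.
# The normalization and names are illustrative assumptions.

def rms(frame):
    """Root-mean-square energy of one per-band signal frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def actuator_levels(band_frames, hearing_loss_db):
    """Map four per-band frames plus a four-band audiogram to four
    tactile drive levels (more loss in a band, more tactile gain)."""
    levels = []
    for frame, loss in zip(band_frames, hearing_loss_db):
        gain = min(1.0, loss / 60.0)   # cap at 60 dB of loss
        levels.append(rms(frame) * gain)
    return levels

levels = actuator_levels([[1.0, 1.0]] * 4, [0.0, 30.0, 60.0, 90.0])
```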
ASSETS ’25, Denver, CO, USA
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164077</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Resonance: Drawing from Memories to Imagine Positive Futures through AI-Augmented Journaling</title>
<link>https://hdl.handle.net/1721.1/164076</link>
<description>Resonance: Drawing from Memories to Imagine Positive Futures through AI-Augmented Journaling
Zulfikar, Wazeer; Chiaravalloti, Treyden; Shen, Jocelyn; Picard, Rosalind; Maes, Pattie
People inherently use experiences of their past while imagining their future, a capability that plays a crucial role in mental health. Resonance is an AI-powered journaling tool designed to augment this ability by offering AI-generated, action-oriented suggestions for future activities based on the user’s own past memories. Suggestions are offered when a new memory is logged and are followed by a prompt for the user to imagine carrying out the suggestion. In a two-week randomized controlled study (N=55), we found that using Resonance significantly improved mental health outcomes, reducing the users’ PHQ8 scores, a measure of current depression, and increasing their daily positive affect, particularly when they would likely act on the suggestion. Notably, the effectiveness of the suggestions was higher when they were personal, novel, and referenced the user’s logged memories. Finally, through open-ended feedback, we discuss the factors that encouraged or hindered the use of the tool.
AHs 2025, Masdar City, Abu Dhabi, United Arab Emirates
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164076</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers</title>
<link>https://hdl.handle.net/1721.1/164075</link>
<description>A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers
Zhang, Zhuohao (Jerry); Li, Haichang; Yu, Chun Meng; Faruqi, Faraz; Xie, Junan; Kim, Gene; Fan, Mingming; Forbes, Angus; Wobbrock, Jacob; Guo, Anhong; He, Liang
Building 3-D models is challenging for blind and low-vision (BLV) users due to the inherent complexity of 3-D models and the lack of support for non-visual interaction in existing tools. To address this issue, we introduce A11yShape, a novel system designed to help BLV users who possess basic programming skills understand, modify, and iterate on 3-D models. A11yShape leverages LLMs and integrates with OpenSCAD, a popular open-source editor that generates 3-D models from code. Key functionalities of A11yShape include accessible descriptions of 3-D models, version control to track changes in models and code, and a hierarchical representation of model components. Most importantly, A11yShape employs a cross-representation highlighting mechanism to synchronize semantic selections across all model representations—code, semantic hierarchy, AI description, and 3-D rendering. We conducted a multi-session user study with four BLV programmers in which, after an initial tutorial session, participants independently completed 12 distinct models across two testing sessions, achieving results they were satisfied with. The results demonstrate that participants were able to comprehend provided 3-D models, as well as independently create and modify 3-D models—tasks that were previously impossible without assistance from sighted individuals.
ASSETS ’25, Denver, CO, USA
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164075</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams</title>
<link>https://hdl.handle.net/1721.1/164074</link>
<description>Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams
Mei, Catherine; Pollock, Josh; Hajas, Daniel; Zong, Jonathan; Satyanarayan, Arvind
Graphical representations — such as charts and diagrams — have a visual structure that communicates the relationship between visual elements. For instance, we might consider two elements to be connected when there is a line or arrow between them, or infer a part-to-whole relationship when one element is contained within the other. Yet, existing screen reader solutions rarely surface this structure for blind and low-vision readers. Recent approaches explore hierarchical trees or adjacency graphs, but these structures capture only parts of the visual structure — containment or direct connections, respectively. In response, we present Benthic, a system that supports perceptually congruent screen reader structures, which align screen reader navigation with a graphic’s visual structure. Benthic models graphical representations as hypergraphs: a relaxed tree structure that allows a single hyperedge to connect a parent to a set of child nodes. In doing so, Benthic is able to capture both hierarchical and adjacent visual relationships in a manner that is domain-agnostic and enables fluid (i.e., concise and reversible) traversal. To evaluate Benthic, we conducted a study with 15 blind participants who were asked to explore two kinds of graphical representations that have previously been studied with sighted readers. We find that Benthic’s perceptual congruence enabled flexible, goal-driven exploration and supported participants in building a clear understanding of each diagram’s structure.
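The parent-plus-children hyperedge can be sketched as a small data structure; the field and class names below are illustrative, not Benthic's actual API:

```python
# Minimal sketch of the relaxed-tree hypergraph described above: each
# hyperedge links one parent node to a set of child nodes, so a single
# structure can express both containment (parent over children) and
# adjacency (children grouped in one hyperedge). Names are illustrative.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Hyperedge:
    parent: str
    children: frozenset

@dataclass
class Hypergraph:
    edges: list = field(default_factory=list)

    def children_of(self, node):
        """Union of children across every hyperedge rooted at node."""
        out = set()
        for e in self.edges:
            if e.parent == node:
                out |= e.children
        return out

g = Hypergraph()
g.edges.append(Hyperedge("chart", frozenset({"x-axis", "bars"})))
g.edges.append(Hyperedge("bars", frozenset({"bar-1", "bar-2"})))
```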
ASSETS ’25, Denver, CO, USA
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164074</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Quartz: A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link>https://hdl.handle.net/1721.1/164073</link>
<description>Quartz: A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney; Feldmann, Axel; Emer, Joel; Sanchez, Daniel
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures are structured as an array of tiles, each with a processing element (PE) and a small local memory, to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialized PEs or poor compute performance due to inefficient general-purpose PEs.&#13;
We propose Quartz, a new architecture that uses short dataflow tasks and reconfigurable PEs in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures each tile’s PE in response to inter-tile messages to execute tasks on local data. This execution model enables fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load in the face of both static-static and static-dynamic operand sparsity. To ensure programmability, we show how a wide range of Einsum-expressible computations and flexible data distributions can be systematically captured in small tasks for execution on Quartz.&#13;
Quartz’s architecture, data partitioning techniques, and programming model together achieve a gmean 21.4× speedup over a prior state-of-the-art system for six different iterative sparse applications from scientific computing and graph analytics.
MICRO ’25, Seoul, Republic of Korea
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164073</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Converting Spatial to Social: Using Persistent Homology to Understand Social Groups</title>
<link>https://hdl.handle.net/1721.1/164072</link>
<description>Converting Spatial to Social: Using Persistent Homology to Understand Social Groups
Chen, Valerie; Liang, Claire; Shah, Julie; Andrist, Sean
In social settings, people display sophisticated spatial behaviors—for example, one might naturally enter into a conversation by sidling up to a group. Artificial agents will need the ability to reason about spatial representations of social information to understand not only how social groups form, but also how to interact within and around them. Leveraging the insight that people reason about shared space topologically rather than geometrically, we employ techniques from applied topology to introduce a new method for social group analysis that improves quantifiability and enables rigorous analysis of social group structure. We present a novel topological mathematical formalism called the social simplicial complex that provides an equivalence relation for socially analogous configurations of people and is provably robust against small perturbations and noise. Moreover, this formalism suggests quantifiable metrics to assess the confidence of social group existence and the social closeness of people within groups. We further use this formalism to introduce an open-source toolkit for evaluating possible models of social relationships, which we name the Social Topological Analysis (SoTA) Toolkit. Finally, we explore algebraic topology’s potential to serve more generally as a powerful tool for multi-modal social data processing, and its possibilities for further applications in social-spatial analysis.
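A minimal flavor of the construction, assuming a plain Vietoris-Rips-style complex over positions (the paper's social simplicial complex encodes richer social information than raw distance):

```python
# Toy Vietoris-Rips-style construction: connect pairs of people within
# a distance threshold and fill triangles whose three edges are all
# present. Coordinates and threshold below are invented for illustration.

from itertools import combinations

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def social_complex(points, eps):
    """Return (edges, triangles) over point indices at scale eps."""
    edges = {frozenset(pair)
             for pair in combinations(range(len(points)), 2)
             if eps >= dist(points[pair[0]], points[pair[1]])}
    triangles = {frozenset(tri)
                 for tri in combinations(range(len(points)), 3)
                 if all(frozenset(e) in edges
                        for e in combinations(tri, 2))}
    return edges, triangles

# Three people standing close form one filled triangle (a cohesive
# group); a fourth person farther away remains disconnected.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9), (5.0, 5.0)]
edges, tris = social_complex(pts, eps=1.5)
```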
ICMI ’25, Canberra, ACT, Australia
</description>
<pubDate>Sun, 12 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164072</guid>
<dc:date>2025-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>LLMs in Citation Intent Classification: Progress, Precision, and Reproducibility Challenges</title>
<link>https://hdl.handle.net/1721.1/164071</link>
<description>LLMs in Citation Intent Classification: Progress, Precision, and Reproducibility Challenges
Fogelson, Alex; Thompson, Neil; Trišović, Ana
Understanding the intent behind scientific citations is critical for advancing scholarly search and knowledge mapping. This paper reflects on the methodological use of large language models (LLMs) for multi-class citation intent classification. Our experiments, evaluating a diverse range of models and approaches, reveal striking disagreement among state-of-the-art (SotA) systems. This inconsistency suggests that citation intent classification remains a challenging task for LLMs, raising questions about the robustness, reliability, and replicability of current methods. Moreover, our findings highlight a concerning dependency on proprietary LLMs, which were necessary to achieve sufficient accuracy even with access to compute resources. This introduces new challenges, as silent updates, lack of versioning, and opaque training pipelines pose threats to methodological transparency and long-term reproducibility in LLM-enabled research.
ACM REP ’25, Vancouver, BC, Canada
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164071</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Ancestral Technology: Inside Colombia’s Hidden Technological Landscape</title>
<link>https://hdl.handle.net/1721.1/164070</link>
<description>Ancestral Technology: Inside Colombia’s Hidden Technological Landscape
Reynolds-Cuellar, Pedro
Luz Marina Burgos’ fingers moved deliberately across the threads, constructing a tšombiach—a ceremonial sash commonly used to protect and strengthen the body. “This is the frog; to us, it represents fertility,” she explained, pointing to an emerging pattern. “This is the sun. Families weave it differently. This is how the tšombiach helps us tell our own story.” What I witnessed in this Colombian village was not simply craft—it was a technology for encoding and transmitting intergenerational knowledge.&#13;
&#13;
Passing most of my time between MIT and Harvard created a sense of technology as merely technical or socio-technical systems serving as a means to undetermined progress that only a few seem able to influence or have power over. A sense of relentless push towards the new, often at the expense of the old. Learning from Luz Marina, a traditional weaver from the Quillasinga Indigenous people, helped me make sense of radically different technological values, motivations and purposes. She is part of a centuries-long tradition of sustaining technologies designed for a different purpose entirely: cultural preservation. These technological systems solve immediate problems while maintaining the social fabric that makes problem-solving possible across generations.&#13;
&#13;
During five years of fieldwork in Colombia’s rural communities —ultimately leading to my doctoral dissertation— I encountered technologies that function according to entirely different logics than those driving “modern” narratives of innovation. I began —along with my collaborators in Colombia— conceptualizing these as “ancestral technologies”: forms of world-making —some of which take the form of artifacts— that primarily support cultural cohesion, remain rooted in specific geographies and carry their history through collective memory. Unlike modern technologies optimized for profit, efficiency or scale, these ancestral systems optimize for continuity and collective meaning. In an era when predictive technology sells the fantasy that unlimited computational power must be our goal as a society, perhaps the question is not whether we can build more powerful systems, but whether we can build systems that help us preserve what matters most.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164070</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Asymmetric linker generates intrinsically disordered metal–organic framework with local MOF-74 structure</title>
<link>https://hdl.handle.net/1721.1/164069</link>
<description>Asymmetric linker generates intrinsically disordered metal–organic framework with local MOF-74 structure
Dinakar, Bhavish; Oppenheim, Julius J; Vandone, Marco; Torres, Juan F; Iliescu, Andrei; Yang, Zhentao; Román-Leshkov, Yuriy; Dincă, Mircea
Here, we report an intrinsically disordered MOF in the MOF-74 family, Mg2(as-dobpdc) (as-dobpdc4− = 3′,4-dioxidobiphenyl-3,4′-dicarboxylate). Despite the absence of crystallinity, this material exhibits local ordering consistent with that of its crystalline isomers, maintains porosity, and exhibits a high density of open metal sites.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164069</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable aviation fuels from biomass and biowaste via bio- and chemo-catalytic conversion: Catalysis, process challenges, and opportunities</title>
<link>https://hdl.handle.net/1721.1/164068</link>
<description>Sustainable aviation fuels from biomass and biowaste via bio- and chemo-catalytic conversion: Catalysis, process challenges, and opportunities
Zhang, Junyan; Webber, Matthew S; Pu, Yunqiao; Li, Zhenglong; Meng, Xianzhi; Stone, Michael L; Wei, Bingqing; Wang, Xueqi; Yuan, Sainan; Klein, Bruno; Seemala, Bhogeswararao; Wyman, Charles E; Ramasamy, Karthikeyan K; Thorson, Mike; Langholtz, Matthew H; Heyne, Joshua S; Koishybay, Aibolat; Adhikari, Shiba; Cao, Sufeng; Sutton, Andrew D; Tuskan, Gerald A; Román-Leshkov, Yuriy; Ragauskas, Arthur J; Ling, Tao; Davison, Brian H
Sustainable aviation fuel (SAF) production from biomass and biowaste streams is an attractive option for decarbonizing the aviation sector, one of the most-difficult-to-electrify transportation sectors. Despite ongoing commercialization efforts using ASTM-certified pathways (e.g., lipid conversion, Fischer–Tropsch synthesis), production capacities are still inadequate due to limited feedstock supply and high production costs. New conversion technologies that utilize lignocellulosic feedstocks are needed to meet these challenges and satisfy the rapidly growing market. Combining bio- and chemo-catalytic approaches can leverage advantages from both methods, i.e., high product selectivity via biological conversion, and the capability to build C-C chains more efficiently via chemical catalysis. Herein, conversion routes, catalysis, and processes for such pathways are discussed, while key challenges and meaningful R&amp;D opportunities are identified to guide future research activities in the space. Bio- and chemo-catalytic conversion primarily utilize the carbohydrate fraction of lignocellulose, leaving lignin as a waste product. This makes lignin conversion to SAF critical in order to utilize whole biomass, thereby lowering overall production costs while maximizing carbon efficiencies. Thus, lignin valorization strategies are also reviewed herein with vital research areas identified, such as facile lignin depolymerization approaches, highly integrated conversion systems, novel process configurations, and catalysts for the selective cleavage of aryl C–O bonds. The potential efficiency improvements available via integrated conversion steps, such as combined biological and chemo-catalytic routes, along with the use of different parallel pathways, are identified as key to producing all components of a cost-effective, 100% SAF.
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164068</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Methods for Carbon Mass Closure in Polyolefin Hydrocracking</title>
<link>https://hdl.handle.net/1721.1/164067</link>
<description>Methods for Carbon Mass Closure in Polyolefin Hydrocracking
Brenner, Anna E; Drake, Griffin; Beckham, Gregg T; Román-Leshkov, Yuriy
Heterogeneous catalytic hydrocracking of polyolefins is a promising approach for the processing of postconsumer plastics, but product quantification methods remain inconsistent across the literature. In systems that generate a large fraction of vapor-phase products, typical product capture methods can result in large carbon balance deficits, exceeding 50%, compromising reported yields and selectivities. Here, we identify the major sources of product loss and develop enhanced capture methods to improve the quantification accuracy. Seven supplemental techniques were evaluated, targeting either increased vapor recovery (by increasing the volatility or system volume) or enhanced retention in the liquid phase (by decreasing volatility). Among these, a flow collection approach using a continuous helium sweep and downstream gas sampling bag capture yielded the highest recovery, achieving a 96 ± 9.2% carbon balance closure. We show that the efficacy of these methods is strongly dependent on product distribution. In general, solvent addition was most effective when condensable species dominate the product distribution, while flow collection was preferred when both condensable species and light gases are present in high concentrations. These results highlight the need for method-specific workup strategies and demonstrate that no single protocol is universally optimal. We provide general guidelines for selecting and implementing robust product capture techniques, enabling accurate yield and selectivity determinations in polyolefin hydrocracking systems.
</description>
<pubDate>Thu, 24 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164067</guid>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Lignin Extraction and Condensation as a Function of Temperature, Residence Time, and Solvent System in Flow-through Reactors</title>
<link>https://hdl.handle.net/1721.1/164066</link>
<description>Lignin Extraction and Condensation as a Function of Temperature, Residence Time, and Solvent System in Flow-through Reactors
Brandner, David G; Gracia Vitoria, Jaime; Kenny, Jacob K; Bussard, Jeremy R; Jang, Jun Hee; Woodworth, Sean P; Vanbroekhoven, Karolien; Román-Leshkov, Yuriy; Beckham, Gregg T
Solvolytic extraction of lignin from biomass is a critical step in lignin-first biorefining, including the reductive catalytic fractionation (RCF) process. Key to optimal RCF processing is the ability to rapidly extract lignin from biomass at high delignification extents and transfer the lignin molecules to a catalyst surface in a time frame that minimizes lignin condensation reactions. Here, we use a flow-through reactor to study the effects of temperature (175-250 °C), residence time (9 to 36 min), and solvent composition (methanol and methanol-water) on lignin extraction and condensation. We evaluated three metrics at each condition: total delignification, delignification rate, and extent of condensation, the latter measured by a decrease in monomer yield for batch hydrogenolysis reactions of solvolysis liquor compared to batch RCF reactions. We observe that delignification is predominantly determined by temperature, while residence time dictates the lignin condensation extent. Moreover, the extent of both extraction and condensation increased in the methanol-water solvent system compared to that in the methanol system. Lignin extracted in methanol is stable up to 18-min residence times at or below 225 °C, while a majority of the lignin extracted in methanol-water is condensed with a 9-min residence time at 200 °C. These results can inform reactor designs and solvent selection for lignin-first biorefining processes that aim to physically separate the biomass and catalyst.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164066</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The legacy of Rudolf Nieuwenhuys in perspective</title>
<link>https://hdl.handle.net/1721.1/164026</link>
<description>The legacy of Rudolf Nieuwenhuys in perspective
Pignatelli, Michele; Rockland, Kathleen S.
Professor Nieuwenhuys is among the great neuroanatomists and a historical figure of the later 20th and early 21st centuries. His legacy is manifold. There is the tangible legacy of the multiple scientific volumes, at once physical and conceptual entities. There is the generational legacy of handed-on scientific and intellectual traditions, and there is the legacy of specific scientific directions. In this brief Commentary, we highlight just two examples of his scientific contributions.
</description>
<pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164026</guid>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Some Advice on Sustainability: ‘I Would Never Get into a Business I Did Not Really Understand’</title>
<link>https://hdl.handle.net/1721.1/164025</link>
<description>Some Advice on Sustainability: ‘I Would Never Get into a Business I Did Not Really Understand’
Wright, Randall S.
My father, Chester S. Wright, was a business executive. He was president of two manufacturing companies and a member of the board of directors of four others.

As a young boy, I remember him coming home from work in a big, black Chrysler Imperial—a “company car”—fitted out with shining chromium bumpers and gleaming radiator grill. After a wonderful home-cooked dinner my mother always made for my father, my two sisters, and me, he and I would head out in the Imperial to Gray’s Drug Store so he could buy House of Windsor cigars, and we could pick up the latest copies of Popular Mechanics, Popular Science, and Mechanix Illustrated.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164025</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>The 5-methylcytosine DNA glycosylase ROS1 prevents paternal genome hypermethylation in Arabidopsis endosperm</title>
<link>https://hdl.handle.net/1721.1/164024</link>
<description>The 5-methylcytosine DNA glycosylase ROS1 prevents paternal genome hypermethylation in Arabidopsis endosperm
Hemenway, Elizabeth A.; Gehring, Mary
Background DNA methylation patterning is a consequence of opposing activities of DNA methyltransferases and DNA demethylases. In many plant and animal species, reproduction is a period of significant epigenome lability. In flowering plants, two distinct female gametes, the egg cell and the central cell, are fertilized to produce the embryo and the endosperm of the seed. The endosperm is an unusual tissue, exemplified by triploidy and reduced DNA methylation. In Arabidopsis thaliana, a 5-methylcytosine DNA glycosylase, DME, demethylates regions of the central cell genome, leading to methylation differences between maternally- and paternally-inherited endosperm genomes after fertilization. Expression of DME in the central cell is required for gene imprinting, or parent-of-origin specific gene expression, in endosperm. DME is part of a four member gene family in Arabidopsis that includes ROS1, DML2, and DML3. It is unknown whether any of the other DNA glycosylases are required for endosperm methylation patterning. Results Using whole-genome methylation profiling, we identify ROS1 target regions in the endosperm. We show that ROS1 prevents hypermethylation of paternally-inherited alleles in the endosperm at regions that lack maternal or paternal allele methylation in wild-type endosperm. Additionally, we demonstrate that at many ROS1 target regions the maternal alleles are demethylated by DME. Conclusions ROS1 promotes epigenetic symmetry between parental genomes in the endosperm by preventing CG methylation gain on the paternal genome. We conclude that ROS1 and DME act in a parent-of-origin-specific manner at shared endosperm targets, and consider possible implications for the evolution of imprinting mechanisms.
</description>
<pubDate>Thu, 18 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164024</guid>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Players chatter and dice clatter: exploring sonic power relations in posthuman game-based learning ecologies</title>
<link>https://hdl.handle.net/1721.1/164023</link>
<description>Players chatter and dice clatter: exploring sonic power relations in posthuman game-based learning ecologies
Woods, Peter J; Jones, Karis
Responding to both recent interest in sound within qualitative education research and sound studies literature that conceptualizes sound as a posthuman technology, we use this paper to explore the following research questions: How does sound both enact and unveil posthuman learning ecologies? And how can education scholars engage sound within posthuman research? Through a posthuman framework, we position noise as an analytical tool for exploring and unveiling more-than-human relations. We then draw parallels between posthuman qualitative research into sound (via noise) and the ideological foundation of experimental music, a musical tradition deeply invested in working with sound as an agentic actor. Within this alignment, we propose using graphic scores to transcribe sonic data without reinscribing humanist research aims. To illustrate, we provide a micro-analysis of preservice teachers engaged in a role-playing game activity and uncover the ways sound asserts its agency within learning ecologies.
</description>
<pubDate>Fri, 28 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164023</guid>
<dc:date>2022-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Palonosetron, a 5-HT3 Receptor Antagonist, Induces G1 Cell Cycle Arrest and Autophagy in Gastric Cancer Cells</title>
<link>https://hdl.handle.net/1721.1/164022</link>
<description>Palonosetron, a 5-HT3 Receptor Antagonist, Induces G1 Cell Cycle Arrest and Autophagy in Gastric Cancer Cells
Yoo, Young Chul; Lin, Lin; Lee, Sihak; Shin, Yeeun Rachel; Oh, Ju Eun; Kim, Na Young
Serotonin, or 5-hydroxytryptamine (5-HT), has been implicated in promoting cancer cell growth by acting on 5-HT receptors, such as 5-HT1 and 5-HT2 receptors. However, the role of 5-HT3 receptor antagonists in gastric cancer cell lines remains unclear. This study aimed to evaluate the effect of 5-HT3 receptor antagonists (ondansetron, palonosetron, and ramosetron) on cancer cell growth using AGS and MKN-1 cell lines, as well as a xenograft mouse model. All three antagonists inhibited cell proliferation, migration, and colony formation in AGS cells. Specifically, palonosetron induced G1 cell cycle arrest, autophagy, and phosphorylation of GSK3β, along with increased expression of p27, p53, and LC3B. In vivo studies demonstrated that palonosetron reduced tumor growth and modulated pro-inflammatory cytokines—tumor necrosis factor alpha, interleukin 6, and interleukin 1β. These findings suggest that 5-HT3 receptor antagonists, especially palonosetron, exert anti-tumor effects in gastric cancer through G1 cell cycle regulation and immunomodulation. The results position palonosetron as a promising lead for further preclinical development in gastric cancer.
</description>
<pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164022</guid>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>50 years of nanomechanics: Scale-bridging mechanistic insights through the looking glass</title>
<link>https://hdl.handle.net/1721.1/164021</link>
<description>50 years of nanomechanics: Scale-bridging mechanistic insights through the looking glass
Han, Seung M.; Gianola, Daniel S.; Portela, Carlos M.; Sebastiani, Marco; Kirchlechner, Christoph
This article covers historical and recent advances in the field of nanomechanics, from the early development of nanoindentation to artificial intelligence- and machine learning-based characterization and modeling. Early advances were motivated by thin-film mechanics challenges driven by the microelectronics industry. In the ensuing years, different methodologies for probing mechanical properties at length scales relevant to a myriad of applications and materials systems have been developed, coupled with a variety of in situ testing methods that shed insights into new mechanisms. Built upon the knowledge base from nanomechanics, new mechanical metamaterials with otherwise unachievable material properties have been discovered, and new methods in testing and analyzing properties for extreme conditions have been recently reported. This article discusses the journey that the nanomechanics community has gone through over the past 50 years and shares the scale-bridging mechanistic insights through the looking glass.
</description>
<pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164021</guid>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Sustainable Synthesis of CoFe2O4/Fe2O3 Catalyst for Hydrogen Generation from Sodium Borohydride Hydrolysis</title>
<link>https://hdl.handle.net/1721.1/164020</link>
<description>Sustainable Synthesis of CoFe2O4/Fe2O3 Catalyst for Hydrogen Generation from Sodium Borohydride Hydrolysis
Teixeira, Lucas Tonetti; Medeiros, Marcos; Liu, Liying; Park, Vinicius Novaes; Valente-Rodriguez, Célio; Letichevsky, Sonia; Fajardo, Humberto Vieira; de Siqueira, Rogério Navarro Correia; Maia da Costa, Marcelo Eduardo Huguenin; Botelho Junior, Amilton Barbosa
Hydrogen has been explored as a greener alternative for reducing greenhouse gas emissions. Sodium borohydride (NaBH4) is a favorable hydrogen carrier due to its high hydrogen content, safe handling, and rapid hydrogen release. This work presents a novel synthesis of the catalyst CoFe2O4/Fe2O3 using nanocellulose fibers (TCNF) as reactive templates for metal adsorption and subsequent calcination. The resulting material was tested for H2 production from basic NaBH4 aqueous solutions (10–55 °C). The catalyst’s composition is 74.8 wt% CoFe2O4, 25 wt% Fe2O3, and 0.2 wt% Fe2(SO4)3 with agglomerated spheroidal particles (15–20 nm) and homogeneous Fe and Co distribution. The catalyst produced 1785 mL of H2 in 15 min at 25 °C (50 mg catalyst, 4.0% NaBH4, and 2.5 wt% NaOH), close to the stoichiometric maximum (2086 mL). The maximum H2 generation rate (HGR) reached 3.55 L min−1 gcat−1 at 40 °C. Activation energies were determined using empirical (38.4 ± 5.3 kJ mol−1) and Langmuir–Hinshelwood (L–H) models (42.2 ± 5.8 kJ mol−1), consistent with values for other Co-ferrite catalysts. Kinetic data fitted better to the L–H model, suggesting that boron complex adsorption precedes H2 evolution.
</description>
<pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164020</guid>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the potential of microtubules for scalable quantum computation</title>
<link>https://hdl.handle.net/1721.1/164019</link>
<description>On the potential of microtubules for scalable quantum computation
Mavromatos, Nick E.; Mershin, Andreas; Nanopoulos, Dimitri V.
We examine the quantum coherence properties of tubulin heterodimers arranged into the protofilaments of cytoskeletal microtubules. In the physical model proposed by the authors, the microtubule interiors are treated as high-Q quantum electrodynamics (QED) cavities that can support decoherence-resistant entangled states under physiological conditions, with decoherence times of the order of O(10−6) s. We identify strong electric dipole interactions between tubulin dimers and ordered water dipole quanta within the microtubule interior as the mechanism responsible for the extended coherence times. Classical nonlinear (pseudospin) σ-models describing solitonic excitations are reinterpreted as emergent quantum-coherent—or possibly pointer—states, arising from incomplete collapse of dipole-aligned quantum states. These solitons mediate dissipation-free energy transfer along microtubule filaments. We discuss logic-gate-like behaviour facilitated by microtubule-associated proteins, and outline how such structures may enable scalable, ambient-temperature quantum computation, with the fundamental unit of information storage realized as a quDit encoded in the tubulin dipole state. We further describe a process akin to “decision-making” that emerges following an external stimulus, whereby optimal, energy-loss-free signal and information transport pathways are selected across the microtubular network. Finally, we propose experimental approaches—including Rabi-splitting spectroscopy and entangled surface plasmon probes—to validate the use of biomatter as a substrate for scalable quantum computation.
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164019</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Can Artificial Intelligence Improve the Appropriate Use and Decrease the Misuse of REBOA?</title>
<link>https://hdl.handle.net/1721.1/164018</link>
<description>Can Artificial Intelligence Improve the Appropriate Use and Decrease the Misuse of REBOA?
Bokenkamp, Mary; Ma, Yu; Dorken-Gallastegi, Ander; Proaño-Zamudio, Jefferson A; Gebran, Anthony; Velmahos, George C; Bertsimas, Dimitris; Kaafarani, Haytham MA
Background: The use of resuscitative endovascular balloon occlusion of the aorta (REBOA) for control of noncompressible torso hemorrhage remains controversial. We aimed to utilize a novel and transparent/interpretable artificial intelligence (AI) method called Optimal Policy Trees (OPTs) to improve the appropriate use and decrease the misuse of REBOA in hemodynamically unstable blunt trauma patients. Methods: We trained and then validated OPTs that “prescribe” REBOA in a 50:50 split on all hemorrhagic shock blunt trauma patients in the 2010–2019 ACS-TQIP database based on rates of survival. Hemorrhagic shock was defined as a systolic blood pressure ≤90 on arrival or a transfusion requirement of ≥4 units of blood in the first 4 h of presentation. The expected 24 h mortality rate following OPT prescription was compared to the observed 24 h mortality rate in patients who were or were not treated with REBOA. Results: Out of 4.5 million patients, 100,615 were included, and 803 underwent REBOA. REBOA patients had a higher rate of pelvic fracture, femur fracture, hemothorax, pneumothorax, and thoracic aorta injury (p &lt; 0.001). The 24 h mortality rate for the REBOA vs. non-REBOA group was 47% vs. 21%, respectively (p &lt; 0.001). OPTs resulted in an 18% reduction in 24 h mortality for REBOA and a 0.8% reduction in non-REBOA patients. We specifically avert the misuse of REBOA by recommending against it in cases where it leads to worse outcomes. Conclusions: This proof-of-concept study shows that interpretable AI models can improve mortality in unstable blunt trauma patients by optimizing the use and decreasing the misuse of REBOA. To date, these models have been used to predict outcomes, but their groundbreaking use will be in prescribing interventions and changing outcomes.
</description>
<pubDate>Thu, 25 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164018</guid>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Ultra-High Resolution 9.4T Brain MRI Segmentation via a Newly Engineered Multi-Scale Residual Nested U-Net with Gated Attention</title>
<link>https://hdl.handle.net/1721.1/164017</link>
<description>Ultra-High Resolution 9.4T Brain MRI Segmentation via a Newly Engineered Multi-Scale Residual Nested U-Net with Gated Attention
Kalluvila, Aryan; Patel, Jay B.; Johnson, Jason M.
The 9.4T brain MRI scanner is the highest-resolution scanner on the public market. It offers submillimeter brain imaging with exceptional anatomical detail, making it one of the most powerful tools for detecting subtle structural changes associated with neurological conditions. Current segmentation models are optimized for lower-field MRI (1.5T–3T) and struggle to perform well on 9.4T data. In this study, we present GA-MS-UNet++, the world’s first deep learning-based model specifically designed for 9.4T brain MRI segmentation. Our model integrates multi-scale residual blocks, gated skip connections, and spatial channel attention mechanisms to improve both local and global feature extraction. The model was trained and evaluated on 12 patients from the UltraCortex 9.4T dataset and benchmarked against four leading segmentation models (Attention U-Net, Nested U-Net, VDSR, and R2UNet). GA-MS-UNet++ achieved state-of-the-art performance across both evaluation sets. When tested against manual, radiologist-reviewed ground truth masks, the model achieved a Dice score of 0.93. On a separate test set using SynthSeg-generated masks as the ground truth, the Dice score was 0.89. Across both evaluations, the model achieved an overall accuracy of 97.29%, precision of 90.02%, and recall of 94.00%. Statistical validation using the Wilcoxon signed-rank test (p &lt; 1 × 10−5) and Kruskal–Wallis test (H = 26,281.98, p &lt; 1 × 10−5) confirmed the significance of these results. Qualitative comparisons also showed a near-exact alignment with ground truth masks, particularly in areas such as the ventricles and gray–white matter interfaces. Volumetric validation further demonstrated a high correlation (R2 = 0.90) between the predicted and ground truth brain volumes. Despite the limited annotated data, GA-MS-UNet++ maintained strong performance and has the potential for clinical use. This algorithm represents the first publicly available segmentation model for 9.4T imaging, providing a powerful tool for high-resolution brain segmentation and driving progress in automated neuroimaging analysis.
</description>
<pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164017</guid>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>De novo design of a two-step approach targeting Claudin-6 for enhanced drug delivery to solid tumors</title>
<link>https://hdl.handle.net/1721.1/164016</link>
<description>De novo design of a two-step approach targeting Claudin-6 for enhanced drug delivery to solid tumors
Yan, Jiayao; Zhong, Liqing; Chen, Xiaotong; Li, Lin; Liu, Fangcen; Lei, Lei; An, Mengchao; Wei, Xiao; Wang, Ying; Chen, Tianran; Guo, Jingyi; Shao, Jie; Yu, Xiaoxiao; Zhao, Yingjie; Li, Rutian; Liu, Qin
Background Although antibody-conjugated drugs have achieved success in clinical practice for cancer treatment, challenges remain in developing a highly efficient drug delivery system with specific accumulation in tumors and reduction in side effects. With improved pharmacokinetics, strong covalent bonding and quick binding reactions, a pre-targeting approach via molecular pairs represents an attractive platform for two-step delivery system construction. Methods Bioinformatics and immunohistochemistry assays were performed to assess Claudin-6 (CLDN6) as a highly specific tumor target in solid tumors. A phage-displayed library was used to screen and optimize anti-CLDN6 designed ankyrin repeat proteins (DARPins), which were incorporated into a two-step delivery system based on SpyTag/SpyCatcher. Fluorescent staining, flow cytometry and near-infrared imaging were performed to assess the tumor-targeting ability and biodistribution of this delivery system. The cytotoxic drug, Monomethyl auristatin E (MMAE), was conjugated with the delivery system to evaluate its anti-tumor efficacy and safety profile. Results Anti-CLDN6 DARPins exhibited specific binding to CLDN6+ cancer cells with high affinity, but not to CLDN6-negative cells, in vitro, ex vivo and in vivo. The DARPins-based two-step delivery system improved background clearance with a high signal-to-noise ratio, enhancing the specific accumulation of payloads in tumors. The cytotoxic drug delivered via the two-step system appeared superior to the one-step approach in IC50, biodistribution, and tumor growth inhibition. Conclusions Our study presented the de novo design of a two-step drug delivery system targeting Claudin-6 with enhanced anti-tumor efficacy and improved biosafety. These findings highlighted the potential of this approach to enhance the efficacy of tumor-targeting therapies and reduce adverse effects, paving the way for more effective cancer treatments.
</description>
<pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164016</guid>
<dc:date>2025-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>How Does AI Transform Cyber Risk Management?</title>
<link>https://hdl.handle.net/1721.1/164015</link>
<description>How Does AI Transform Cyber Risk Management?
Zeijlemaker, Sander; Lemiesa, Yaphet K; Schröer, Saskia Laura; Abhishta, Abhishta; Siegel, Michael
Digital transformation embeds smart cities, e-health, and Industry 4.0 into critical infrastructures, thereby increasing reliance on digital systems and exposure to cyber threats and boosting complexity and dependency. Research involving over 200 executives reveals that under rising complexity, only 15% of cyber risk investments are effective, leaving most organizations misaligned or vulnerable. In this context, the role of artificial intelligence (AI) in cybersecurity requires systemic scrutiny. This study analyzes how AI reshapes systemic structures in cyber risk management through a multi-method approach: literature review, expert workshops with practitioners and policymakers, and a structured kill chain analysis of the Colonial Pipeline attack. The findings reveal three new feedback loops: (1) deceptive defense structures that misdirect adversaries while protecting assets, (2) two-step success-to-success attacks that disable defenses before targeting infrastructure, and (3) autonomous proliferation when AI applications go rogue. These dynamics shift cyber risk from linear patterns to adaptive, compounding interactions. The principal conclusion is that AI both amplifies and mitigates systemic risk. The core recommendation is to institutionalize deception in security standards and address drifting AI-powered systems. Deliverables include validated systemic structures, policy options, and a foundation for creating future simulation models to support strategic cyber risk management investment.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164015</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Flex-Route Transit for Smart Cities: A Reinforcement Learning Approach to Balance Ridership and Performance</title>
<link>https://hdl.handle.net/1721.1/164014</link>
<description>Flex-Route Transit for Smart Cities: A Reinforcement Learning Approach to Balance Ridership and Performance
Rodriguez, Joseph; Koutsopoulos, Haris N.; Zhao, Jinhua
A major challenge for modern transit systems relying on traditional fixed-route designs is providing broad accessibility to users. Flex-route transit can enhance accessibility in low-density areas, since it combines the directness of fixed-route transit with the coverage of on-demand mobility. Although deviating for optional pickups can increase ridership and transit accessibility, it also deteriorates the service performance for fixed-route riders. To balance this inherent trade-off, this paper proposes a reinforcement learning approach for deviation decisions. The proposed model is used in a case study of a proposed flex-route service in the city of Boston. The performance on competing objectives is evaluated for reward configurations that adapt to peak and off-peak scenarios. The analysis shows a significant improvement of our method compared to a heuristic derived from industry practice as a baseline. To evaluate robustness, we assess performance across scenarios with varying demand compositions (fixed and requested riders). The results show that the method achieves greater improvements than the baseline in scenarios with increased request ridership, i.e., where decision-making is more complex. Our approach improves service performance under dynamic demand conditions and varying priorities, offering a valuable tool for smart cities to operate flex-route services.
</description>
<pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164014</guid>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Search for a cH signal in the associated production of at least one charm quark with a Higgs boson in the diphoton decay channel in pp collisions at $$\sqrt{s}=13$$ TeV</title>
<link>https://hdl.handle.net/1721.1/164013</link>
<description>Search for a cH signal in the associated production of at least one charm quark with a Higgs boson in the diphoton decay channel in pp collisions at $$\sqrt{s}=13$$ TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
This paper presents the first search for a cH signal sensitive to the coupling of the charm quark (c) to the Higgs boson (H) in the associated production of at least one charm quark with a Higgs boson decaying to two photons. The results are based on a data set of proton-proton collisions at a center-of-mass energy of 13 TeV collected with the CMS experiment at the LHC, corresponding to an integrated luminosity of 138 fb−1. Assuming the standard model (SM) rates for all other Higgs boson production processes, the observed (expected) upper limit at 95% confidence level on the cH signal strength is 243 (355) times the SM prediction. Under the same assumption, the observed (expected) allowed interval on the Higgs boson to charm quark coupling modifier, κc, is |κc| &lt; 38.1 (|κc| &lt; 72.5) at 95% confidence level.
</description>
<pubDate>Wed, 12 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164013</guid>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>A Bunch of Gaps: Factors Behind Service Reliability in Chicago’s High-Frequency Transit Network</title>
<link>https://hdl.handle.net/1721.1/164012</link>
<description>A Bunch of Gaps: Factors Behind Service Reliability in Chicago’s High-Frequency Transit Network
Rodriguez, Joseph; Koutsopoulos, Haris N.; Zhao, Jinhua
Frequent transit services in urban areas have the potential to increase accessibility for transit-dependent riders and reduce congestion by attracting new ridership through a modal shift. However, bus services operating in mixed traffic face operational challenges that reduce reliability and hinder their attractiveness. The sources of unreliability can range from local-level conditions, like the road infrastructure, to higher-level decisions, like the service plan. For the effective planning of improvement strategies, both scales of analysis must be considered. This paper uses a novel modeling framework to understand reliability by analyzing route and segment factors separately. The Chicago Transit Authority (CTA) bus network is used as a case study for the analysis. The data reflect the operational, demand, and urban conditions of 50 high-frequency bus routes. At the route level, we use the coefficient of headway variation as the dependent variable and diverse route characteristics as explanatory variables. The results indicate that the most significant contributors to the variability of headways are variability in schedules and dispatching at terminals. It is also found that driver experience impacts reliability and that east–west routes are more unreliable than north–south routes. At the segment level, we use data from trips involved in bunching and gaps. As the dependent variable, a novel measure is formulated to capture how quickly bunching or gaps are formed. Bunching and gap events are treated in separate regression models. Findings suggest that link and dwell time variability are the most significant contributors to gap and bunching formation. In terms of infrastructure, bus lane segments reduce gap formations, and left turns increase bunching and gap formations.
The insights presented can inform improvements in service and transit infrastructure planning to improve transit level of service (LOS) and support the future of sustainable, smart cities.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164012</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Oil Transport Simulation and Oil Consumption Prediction with a Physics-Based and Data-Driven Digital Twin Model for Internal Combustion Engines</title>
<link>https://hdl.handle.net/1721.1/164011</link>
<description>Oil Transport Simulation and Oil Consumption Prediction with a Physics-Based and Data-Driven Digital Twin Model for Internal Combustion Engines
Zhong, Xinlin; Tian, Tian
Lubrication oil consumption (LOC) is one of the major sources of emissions from internal combustion (IC) engines; yet, analyzing and predicting it through modeling is challenging due to its multi-physics nature, which spans different time and length scales. In this work, a digital twin model is developed to simulate oil transport in the piston ring pack of IC engines and predict the resulting oil consumption with all major physical mechanisms considered. Three main contributors to LOC, namely, top ring up-scraping, oil vaporization on the liner, and reverse gas flows through the top ring gap, are included in the model. It was found that their behaviors are heavily dependent on the arrangement of the piston ring gaps. Therefore, with the ring rotation behavior still not resolved, the current model can predict the LOC range of a given engine profile. Results show that the predicted range can well encapsulate the experimentally measured LOC value.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164011</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>LSM and CPT</title>
<link>https://hdl.handle.net/1721.1/164010</link>
<description>LSM and CPT
Seiberg, Nathan; Shao, Shu-Heng; Zhang, Wucheng
We study a number of 1+1d lattice models with anti-unitary symmetries that simultaneously reflect space and reverse time. Some of these symmetries are anomalous, leading to Lieb-Schultz-Mattis-type constraints, thus excluding a trivially gapped phase. Examples include a mod 8 anomaly in the Majorana chain and various mod 2 anomalies in the spin chain. In some cases, there is an exact, non-anomalous lattice symmetry that flows in the continuum to CPT. In some other cases, the CPT symmetry of the continuum theory is emergent or absent. Depending on the model, the anomaly of the lattice model is matched in the continuum in different ways. In particular, it can be mapped to an emergent anomaly of an emanant symmetry.
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/164010</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>A biomimetic chip to assess subcutaneous bioavailability of monoclonal antibodies in humans</title>
<link>https://hdl.handle.net/1721.1/163990</link>
<description>A biomimetic chip to assess subcutaneous bioavailability of monoclonal antibodies in humans
Chandran Suja, Vineeth; Qi, Qin M; Halloran, Kevin; Zhang, Jifeng; Shaha, Suyog; Prakash, Supriya; Kumbhojkar, Ninad; Deslandes, Antoine; Huille, Sylvain; Gokarn, Yatin R; Mitragotri, Samir
Subcutaneous (subQ) injection is a common route for delivering biotherapeutics, wherein pharmacokinetics is largely influenced by drug transport in a complex subQ tissue microenvironment. The selection of good drug candidates with beneficial pharmacokinetics for subQ injections is currently limited by a lack of reliable testing models. To address this limitation, we report here a Subcutaneous Co-Culture Tissue-on-a-chip for Injection Simulation (SubCuTIS). SubCuTIS possesses a 3D coculture tissue architecture, and it allows facile quantitative determination of relevant scale independent drug transport rate constants. SubCuTIS captures key in vivo physiological characteristics of the subQ tissues, and it differentiates the transport behavior of various chemically distinct molecules. We supplemented the transport measurements with theoretical modeling, which identified subtle differences in the local absorption rate constants of seven clinically available mAbs. Accounting for first-order proteolytic catabolism, we established a mathematical framework to assess clinical bioavailability using the local absorption rate constants obtained from SubCuTIS. Taken together, the technology described here broadens the applicability of organs-on-chips as a standardized and easy-to-use device for quantitative analysis of subQ drug transport.
</description>
<pubDate>Mon, 09 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163990</guid>
<dc:date>2023-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoparticle-induced lipid membrane deformation influences the design of biomedicine</title>
<link>https://hdl.handle.net/1721.1/163989</link>
<description>Nanoparticle-induced lipid membrane deformation influences the design of biomedicine
Pincus, Isaac; Qi, Qin M
Controlling the physicochemical properties of nanoparticles is important for their performance as drug carriers, pharmaceuticals, or imaging contrast agents in nanomedicine. Predictive models can accelerate experimental design at reduced time and cost compared with the conventional brute-force approach. However, the physical principles underlying particle-cell interactions are still poorly understood due to the large size contrast between particles and cells, hindering model development. In this work, we describe a model that examines the interaction between multiple particles and the membrane of a mammalian cell or an artificial vesicle, which influences the outcomes of surface adsorption, detachment, or uptake of particles. Compared to existing biophysical models of particle-membrane interactions accounting for membrane adhesion, stretching, and bending energies, we make several important updates that are essential to reaching quantitative agreement with existing experimental data. Particle-induced membrane tension changes are crucial to the membrane deformation even at very low surface concentrations (0.1%); we explain this surprising finding using a previously neglected length scale. Furthermore, a multi-step and non-equilibrium endocytosis mechanism is proposed in the absence of specific receptor-ligand interactions, inspired by recent experimental evidence on the dynamic regulation of membrane tension through the active transport of lipid molecules. We demonstrate the predictive power of our model in generating adsorption isotherms, shear-induced particle detachment from cell surfaces, and the size-dependent rate of particle uptake. Our research provides a framework to design tailor-made nanoparticles with controllable interaction outcomes for various cell types, based on a quantitative and fundamental understanding.
</description>
<pubDate>Tue, 21 Jul 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163989</guid>
<dc:date>2026-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Effective use of biosensors for high-throughput library screening for metabolite production</title>
<link>https://hdl.handle.net/1721.1/163988</link>
<description>Effective use of biosensors for high-throughput library screening for metabolite production
Kaczmarek, Jennifer A; Prather, Kristala LJ
The development of fast and affordable microbial production from recombinant pathways is a challenging endeavor, with targeted improvements difficult to predict due to the complex nature of living systems. To address the limitations in biosynthetic pathways, much work has been done to generate large libraries of various genetic parts (promoters, RBSs, enzymes, etc.) to discover library members that bring about significantly improved levels of metabolite production. To evaluate these large libraries, high throughput approaches are necessary, such as those that rely on biosensors. There are various modes of operation to apply biosensors to library screens that are available at different scales of throughput. The effectiveness of each biosensor-based method is dependent on the pathway or strain to which it is applied, and all approaches have strengths and weaknesses to be carefully considered for any high throughput library screen. In this review, we discuss the various approaches used in biosensor screening for improved metabolite production, focusing on transcription factor-based biosensors.
</description>
<pubDate>Wed, 04 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163988</guid>
<dc:date>2021-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>A method for correcting the substructure of multiprong jets using the Lund jet plane</title>
<link>https://hdl.handle.net/1721.1/163986</link>
<description>A method for correcting the substructure of multiprong jets using the Lund jet plane
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Damanakis, K.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
Many analyses at the CERN LHC exploit the substructure of jets to identify heavy resonances produced with high momenta that decay into multiple quarks and/or gluons. This paper presents a new technique for correcting the substructure of simulated large-radius jets from multiprong decays. The technique is based on reclustering the jet constituents into several subjets such that each subjet represents a single prong, and separately correcting the radiation pattern in the Lund jet plane of each subjet using a correction derived from data. The data presented here correspond to an integrated luminosity of 138 fb−1 collected by the CMS experiment from 2016 to 2018 at a center-of-mass energy of 13 TeV. The correction procedure improves the agreement between data and simulation for several different substructure observables of multiprong jets. This technique establishes, for the first time, a robust calibration for the substructure of jets with four or more prongs, enabling future measurements and searches for new phenomena containing these signatures.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163986</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Inclusionary and Exclusionary Preferences: A Test of Three Cognitive Mechanisms</title>
<link>https://hdl.handle.net/1721.1/163985</link>
<description>Inclusionary and Exclusionary Preferences: A Test of Three Cognitive Mechanisms
Landau-Wells, Marika; Lydic, Kirsten O.; Kennedy, Joachim; Mittman, Benjamin G.; Thompson, Todd W.; Gupta, Akhil; Saxe, Rebecca
Exclusionary social policies take a significant toll on the mental and physical health of targeted groups. Support for specific exclusionary policies does not always align with general antipathy towards the targeted group, however. Does support for specific exclusionary policies rely on particular thought processes (i.e., cognitive mechanisms)? Does opposition? We investigate these questions through the lens of “bathroom laws” across two studies. In Study 1, we use functional neuroimaging to test three candidate cognitive mechanisms from the literature: (1) threat-related emotions (e.g., fear, disgust) supporting exclusionary preferences; (2) mentalizing (e.g., empathy, perspective-taking) supporting inclusionary preferences; and (3) self-regulation (e.g., aligning one’s behavior with one’s goals) supporting inclusionary preferences. Consistent with the intergroup conflict and prejudice literatures, we find evidence of a motivated self-regulation mechanism in bathroom law opponents. In Study 2, we investigate a possible source of this motivation using text analysis of open-ended policy preference justifications. We find that bathroom law opponents link their policy preference to a small number of specific values, particularly autonomy of action. Taken together, these studies point to a value-driven, motivational account of inclusionary preferences that reconciles puzzling patterns of public opinion, offers new levers for tolerance interventions, and provides some insight into the brain-basis of political behavior.
</description>
<pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163985</guid>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated Bayesian Calibration and Uncertainty Quantification of RANS Turbulence Model Parameters for Stratified Atmospheric Boundary Layer Flows</title>
<link>https://hdl.handle.net/1721.1/163984</link>
<description>Accelerated Bayesian Calibration and Uncertainty Quantification of RANS Turbulence Model Parameters for Stratified Atmospheric Boundary Layer Flows
Shin, Ethan Y.; Howland, Michael F.
In operational weather models, the effects of turbulence in the atmospheric boundary layer (ABL) on the resolved flow are modeled using turbulence parameterizations. These parameterizations typically use a predetermined set of model parameters that are tuned to limited data from canonical flows. Using these fixed parameters results in deterministic predictions that neglect uncertainty in the unresolved turbulence processes. In this study, we perform a machine learning-accelerated Bayesian inversion of a single-column model of the ABL. This approach is used to calibrate and quantify uncertainty in model parameters of Reynolds-averaged Navier–Stokes turbulence models. To verify the data-driven uncertainty quantification methodology, we test in an idealized setup in which a prescribed but unobserved set of parameters is learned from noisy approximations of the model output. Following this verification, we learn the parameters and their uncertainties in two different turbulence models conditioned on scale-resolving large-eddy simulation data over a range of ABL stabilities. We show how Bayesian inversion of a numerical model improves flow predictions by investigating the underlying mean momentum budgets. Further, we show that uncertainty quantification based on neutral ABL surface layer data recovers the relationships between parameters that have been predicted using theoretical modeling, but that learning the parameters based on stable ABL data or data from outside the surface layer can lead to different parameter relationships than neutral surface layer theory. Efforts to systematically reduce parameter uncertainty reveal that (1) sampling wind speed up to the ABL height can reduce uncertainty in key model parameters by up to 84%, and (2) assimilating fluid flow quantities beyond first-order moment statistics can further reduce uncertainty in ways that baseline wind speed assimilation alone cannot achieve.
The parameters learned using Bayesian uncertainty quantification generally yield lower error than standard deterministic parameters in out-of-sample tests and also provide uncertainty intervals on predictions.
</description>
<pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163984</guid>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Europa Clipper Magnetometer Boom Deployment: A First Look at the Magnetometer Observations of the Spacecraft and the Interplanetary Magnetic Field</title>
<link>https://hdl.handle.net/1721.1/163983</link>
<description>Europa Clipper Magnetometer Boom Deployment: A First Look at the Magnetometer Observations of the Spacecraft and the Interplanetary Magnetic Field
Cochrane, Corey J.; Joy, Steven P.; Korth, Haje; Biersteker, John B.; Blacksberg, Jordana; Bouchard, Michael; Contreras, Jacob; Dawson, Olivia R.; Khurana, Krishan K.; Murphy, Neil; Palm, Derek; Perley, Mitch O.; Pierce, David R.; Richter, Ingo; Russell, Christopher T.
NASA’s Europa Clipper flagship mission is designed to investigate the habitability of Jupiter’s moon Europa. A key instrument aboard the spacecraft is the Europa Clipper Magnetometer (ECM), a suite of fluxgate magnetometer sensors deployed on a boom to minimize spacecraft-induced magnetic interference. The ECM investigation aims to characterize Europa’s induced magnetic field, offering constraints on the salinity, depth, and thickness of its subsurface ocean. This work presents the first in-flight ECM observations acquired during the magnetometer boom deployment and shortly thereafter. We show how these observations provide the requisite evidence needed to validate a successful deployment. We also demonstrate how these observations can be used to calibrate the sensor offsets and to develop new magnetic field models of the spacecraft of varying complexity, thus enabling the robust removal of the instrument’s zero-levels which is critical for achieving the mission’s science objectives. We finally share preliminary calibrated magnetometer observations acquired over a two-month period after deployment, revealing a very active interplanetary magnetic field characteristic of solar maximum.
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163983</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Deuteron identification via time of flight with LHCb</title>
<link>https://hdl.handle.net/1721.1/163981</link>
<description>Deuteron identification via time of flight with LHCb
LHCb Collaboration
It is shown that the timing capabilities of the LHCb detector operated during the LHC Run 2 can be used to identify light ion particles with momenta of a few GeV/c. This is achieved by estimating the particle time of flight through a newly developed technique. A dedicated reconstruction procedure and a neural-network-based estimator of the particle speed have been developed to enable deuteron identification by suppressing the abundant background from lighter particles. The performance of the identification procedure is demonstrated in a sample of proton-helium collisions at √s_NN = 110 GeV, where the production of deuteron and triton particles is observed. This novel approach opens the way to study deuteron and antideuteron production for different collision systems at different energy scales, exploiting the rich dataset collected by the LHCb experiment.
</description>
<pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163981</guid>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Confidently Comparing Estimates with the c-value</title>
<link>https://hdl.handle.net/1721.1/163980</link>
<description>Confidently Comparing Estimates with the c-value
Trippe, Brian L; Deshpande, Sameer K; Broderick, Tamara
Modern statistics provides an ever-expanding toolkit for estimating unknown parameters. Consequently, applied statisticians frequently face a difficult decision: retain a parameter estimate from a familiar method or replace it with an estimate from a newer or more complex one. While it is traditional to compare estimates using risk, such comparisons are rarely conclusive in realistic settings. In response, we propose the “c-value” as a measure of confidence that a new estimate achieves smaller loss than an old estimate on a given dataset. We show that it is unlikely that a large c-value coincides with a larger loss for the new estimate. Therefore, just as a small p-value supports rejecting a null hypothesis, a large c-value supports using a new estimate in place of the old. For a wide class of problems and estimates, we show how to compute a c-value by first constructing a data-dependent high-probability lower bound on the difference in loss. The c-value is frequentist in nature, but we show that it can provide validation of shrinkage estimates derived from Bayesian models in real data applications involving hierarchical models and Gaussian processes. Supplementary materials for this article are available online.
</description>
<pubDate>Fri, 24 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163980</guid>
<dc:date>2023-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Future circular collider feasibility study report</title>
<link>https://hdl.handle.net/1721.1/163978</link>
<description>Future circular collider feasibility study report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, A.; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
In response to the 2020 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) Feasibility Study was launched as an international collaboration hosted by CERN. This report describes the FCC integrated programme, which consists of two stages: an electron-positron collider (FCC-ee) in the first phase, serving as a high-luminosity Higgs, top, and electroweak factory; followed by a proton-proton collider (FCC-hh) at the energy frontier in the second phase. The FCC-ee is designed to operate at four key centre-of-mass energies: the Z pole, the WW pair production threshold, the ZH production peak, and the top/anti-top production threshold—each delivering the highest possible luminosities to four experiments. Over 15 years of operation, FCC-ee will produce more than 6 trillion Z bosons, 200 million WW pairs, nearly 3 million Higgs bosons, and 2 million top anti-top pairs. Precise energy calibration at the Z pole and WW threshold will be achieved through frequent resonant depolarisation of pilot bunches. The sequence of operation modes between the Z, WW, and ZH substages remains flexible. The FCC-hh will operate at a centre-of-mass energy of approximately 85 TeV—nearly an order of magnitude higher than the LHC—and is designed to deliver 5 to 10 times the integrated luminosity of the upcoming High-Luminosity LHC. Its mass reach for direct discovery extends to several tens of TeV. In addition to proton-proton collisions, the FCC-hh is capable of supporting ion-ion, ion-proton, and lepton-hadron collision modes. This second volume of the Feasibility Study Report presents the complete design of the FCC-ee collider, its operation and staging strategy, the full-energy booster and injector complex, required accelerator technologies, safety concepts, and technical infrastructure. It also includes the design of the FCC-hh hadron collider, development of high-field magnets, hadron injector options, and key technical systems for FCC-hh.
</description>
<pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163978</guid>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Nitrous Oxide Distributions in the Oxygenated Water Column of the Sargasso Sea</title>
<link>https://hdl.handle.net/1721.1/163977</link>
<description>Nitrous Oxide Distributions in the Oxygenated Water Column of the Sargasso Sea
Meyer, Annaliese C. S.; Cullen, Jay T.; Grundle, Damian S.
This study presents dissolved nitrous oxide (N2O) concentrations in the water column at the Bermuda Atlantic Time-series Study (BATS) station and uses a subset of these measurements to estimate air-to-sea flux for four specific time points between September 2018 and June 2019. N2O concentrations at BATS were in the range of 4.0 nmol L−1–16.9 nmol L−1, with vertical profiles that were the inverse of those of dissolved oxygen. Regardless of season, N2O concentration maxima were found within the oxygen minimum zone (OMZ). The highest maximum N2O values were observed in November and the lowest in October. As the water column at BATS remains consistently at dissolved oxygen concentrations greater than 140 µmol L−1, and therefore aerobic, we assume that the bulk of N2O production occurs through nitrification. A nitrification source is supported by a correlation between excess N2O (ΔN2O) below the mixed layer, apparent oxygen utilization (AOU), and nitrate concentrations. We estimate a pooled average yield of 0.027% to 0.038% N2O from nitrification at BATS. Finally, estimates of air–sea exchange of N2O using regional average monthly wind speeds indicated that this region acts as a weak source or sink of atmospheric N2O, varying between months.
</description>
<pubDate>Thu, 15 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163977</guid>
<dc:date>2022-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>A holistic model for understanding the dynamics of outsourcing</title>
<link>https://hdl.handle.net/1721.1/163975</link>
<description>A holistic model for understanding the dynamics of outsourcing
Uygun, Yilmaz; Gotsadze, Nikoloz; Schupp, Florian; Gzirishvili, Lizi; Tindjou Nana, Brigitte Stephanie
Outsourcing is a complex process: many external and internal factors that look convincing at first might nevertheless lead to failure in the long run. Motivated by this, we wanted to gain a holistic understanding of such outsourcing decisions. Thus, we created a comprehensive System Dynamics simulation model of more than 200 interrelated variables to examine the dynamic nature of outsourcing holistically and over time. Our results show, amongst others, that higher process specialisation requiring substantial investments by the supplier appears to be favourable for an outsourcing company, and that shifting a larger quantity to such a supplier achieves better cost savings and thus a better overall outsourcing result. On an operational level, we identified an innovation trap, a bargaining power shift, a plagiarism trap, and a knowledge trap. Based on these, we give specific managerial recommendations to tackle each of them. We conclude that, amongst others, it is important for innovative companies with rather complex processes and parts to carefully plan which and how many employees to release so as not to lose knowledge of the outsourced processes and parts.
</description>
<pubDate>Mon, 14 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163975</guid>
<dc:date>2022-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Mediation and ANCOVA Models to Study the Influence of Solvent Retting Traits and Plant Physique on Bast Fiber Yield and Retting Time</title>
<link>https://hdl.handle.net/1721.1/163973</link>
<description>Mediation and ANCOVA Models to Study the Influence of Solvent Retting Traits and Plant Physique on Bast Fiber Yield and Retting Time
Shuvo, Ikra Iftekhar; Hoque, Md. Saiful; Khandakar, Lovely K. M.
This study applies two statistical tools to analyze the retting behavior of plant stems for extracting bast fibers for industrial applications. First, a mediation model is employed to investigate the first hypothesis of this work, which involves studying the color response of the retted solvent as a function of retting time on the response variable, fiber yield (%). Statistically, there is a significant indirect effect of retting time on fiber yield (%) through the retting trait (β = −0.0142, 95% C.I. [−0.0274, −0.0011]), a statistical inference bolstered by the Sobel test result, confirming the mediation effect (p-value = 0.0329 &lt; 0.05; z-score = −2.1334; bootstrapping of 5000 resamples). Next, the second hypothesis of the current work involves analyzing the impact of stem form factors on retting time using ANCOVA. The partial η2 indicates that cultivar treatment accounts for 30% of the variance in retting time while controlling for the effects of two covariates, the diameter and length of the stems. By controlling the Type-I error, Bonferroni and similar post-hoc tests also confirm the statistical significance of cultivar categories with respect to their mean retting time. Future work could build on these hypotheses and study the impact of microorganisms, environmental factors, and cultivar treatment variables on retting time to optimize the overall fiber yield and production process.
</description>
<pubDate>Mon, 11 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163973</guid>
<dc:date>2022-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Autistic Experiences in the Workplace: Key Factors and Actionable Steps</title>
<link>https://hdl.handle.net/1721.1/163972</link>
<description>Improving Autistic Experiences in the Workplace: Key Factors and Actionable Steps
Nishith, Shruti; O’Brien, Amanda M.; Li, Cindy; Bungert, Lindsay; Oddis, Kyle; Riddle, Joseph; Gabrieli, John D. E.
Autistic adults have higher rates of unemployment and underemployment than non-autistic adults with and without disabilities. While previous work has highlighted factors specific to individuals and/or job sectors that serve as barriers or facilitators to autistic employment, the question of how to modify the workplace to best support autistic people remains under-researched. The present study utilized an ecological framework to investigate what workplace factors can be modified to improve autistic experiences and how these modifications may be enacted across different levels of the workplace ecosystem to promote autistic success. Autistic participants (N = 85) across employment sectors provided quantitative ratings and written descriptions of positive and negative factors related to their workplace experiences. Quantitative and qualitative analyses were used to examine which factors and overarching principles most impact employment. Actionable strategies to modify these factors were derived from participant responses and validated by autistic collaborators and neuroinclusion experts. On average, participants rated task training as having the most positive, and mental health as having the most negative, impact on their employment. Participants described four themes (acceptance, communication, autonomy, accommodations) that can be embedded in the work environment to improve experiences. Steps to improve autistic employment that can be enacted by stakeholders across levels of the workplace are provided. Autistic adults face multifaceted barriers to employment across levels of the workplace. Modifying the workplace itself, across multiple levels and stakeholders, may serve to improve autistic employment outcomes.
</description>
<pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163972</guid>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>The Connectivity of Friends-and-Strangers Graphs on Complete Multipartite Graphs</title>
<link>https://hdl.handle.net/1721.1/163971</link>
<description>The Connectivity of Friends-and-Strangers Graphs on Complete Multipartite Graphs
Zhu, Honglin
For simple graphs X and Y on n vertices, the friends-and-strangers graph FS(X, Y) is the graph whose vertex set consists of all bijections σ : V(X) → V(Y), where two bijections σ and σ′ are adjacent if and only if they agree on all but two adjacent vertices a, b ∈ V(X) such that σ(a), σ(b) ∈ V(Y) are adjacent in Y. Resolving a conjecture of Wang, Lu, and Chen, we completely characterize the connectedness of FS(X, Y) when Y is a complete bipartite graph. We further extend this result to when Y is a complete multipartite graph. We also determine when FS(X, Y) has exactly two connected components where X is bipartite and Y is a complete bipartite graph.
</description>
<pubDate>Tue, 31 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163971</guid>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Aerodynamic Interference Through Layout Optimization of Symmetrically Cambered Wingsails: A Comparative Study of In-Line and Parallel Configurations</title>
<link>https://hdl.handle.net/1721.1/163970</link>
<description>Reducing Aerodynamic Interference Through Layout Optimization of Symmetrically Cambered Wingsails: A Comparative Study of In-Line and Parallel Configurations
van Reen, Stephan; Lin, Jianfeng; Niu, Jiqiang; Sharpe, Peter; Li, Xiaodong; Yao, Hua-Dong
Rigid wingsails are increasingly adopted for wind-assisted ship propulsion, with Symmetrically Cambered (SC) profiles identified as highly efficient for thrust generation. This study investigates installation layouts for multiple SC wingsails, focusing on aerodynamic interference that limits their performance. A fast 2D potential-flow panel method is employed and benchmarked against wind tunnel and 3D IDDES data. Two representative layouts are analyzed: triple-in-line (TL) and quad-in-parallel (QP). Layout optimization is performed using a genetic algorithm with distances between sails as design variables, constrained by the total installation span, at apparent wind angles (AWAs) of 60°, 90°, and 120°. Results show that thrust generation decreases progressively from upstream to downstream sails due to interference effects, with penalties of about 4–6% in the TL and up to 28% in the QP layout. The optimization improves performance only for the TL layout at 60°, while the QP layout shows negligible gains. Analysis of pressure distributions confirms that downstream sails suffer from reduced suction on the leading edge caused by upstream wakes. Overall, the TL layout demonstrates significantly higher aerodynamic reliability than the QP layout. These findings provide new insights into multi-sail configurations and highlight the importance of layout optimization in maximizing thrust efficiency.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163970</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>A rapid experimental workflow for studying melt track scaling in laser powder bed fusion using high-precision metal template substrates</title>
<link>https://hdl.handle.net/1721.1/163969</link>
<description>A rapid experimental workflow for studying melt track scaling in laser powder bed fusion using high-precision metal template substrates
Weissbach, Reimar; Penny, Ryan W.; Hart, A. J.
Development and qualification of process parameters in laser powder bed fusion (LPBF) involves many variables. At the outset of development, whether transferring known parameters to a new machine, or exploring a new material, single-track and single-layer experiments are a convenient means of down-selecting key variables and exploring parameter scaling behavior. We present an experimental workflow for single-layer LPBF experiments using high-precision metal template substrates, overcoming challenges with precision single-layer alignment in LPBF systems and enabling efficient processing and cross-sectional analysis. Templates are fabricated using chemical etching and machining, and are characterized using optical profilometry and X-ray transmission imaging of powder layers. Using the etched templates, a single-track parameter study is performed in SS316 including three powder layer thicknesses, and spanning common laser melting modes (lack-of-fusion, conduction, and keyhole mode). Analysis of melt track geometries using automated image processing allows a scaling law to be applied to define the process window, quantifying the amount of material added with increasing powder layer thickness. Single-track results are verified with raster scanning experiments, showing the potential to transfer single-track results to full LPBF builds.
</description>
<pubDate>Thu, 26 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163969</guid>
<dc:date>2025-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Augmented intelligence should be good for medicine, if medicine is to remain good for us</title>
<link>https://hdl.handle.net/1721.1/163968</link>
<description>Augmented intelligence should be good for medicine, if medicine is to remain good for us
Idan, Daphna; Celi, Leo A.; Einav, Sharon; Frenkel, Amit
Throughout history, the medical community has failed to address health disparities. Augmented intelligence (AI) is poised to cement these structural inequities permanently. The need to establish a triage process that ensures fair and equitable access to medical care, and to consider all patient populations equally researchable, should not overshadow the need to learn how best to exploit AI for furthering medical fairness and equity despite resource limitations. Open discussion of the shortcomings of medical AI, approaching medical AI development, testing, and implementation from a critical ethical perspective, constant testing and analysis of AI outputs, and human oversight in the loop constitute only the first part of ensuring augmented intelligence tools are equitably robust and free of bias.
</description>
<pubDate>Mon, 29 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163968</guid>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment</title>
<link>https://hdl.handle.net/1721.1/163967</link>
<description>Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment
Borrella, Inma; Ponce-Cueto, Eva
Learning Analytics Dashboards (LADs) are increasingly deployed to support self-regulated learning in online courses. Yet many existing dashboards lack strong theoretical grounding, contextual alignment, or actionable feedback, and some designs have been shown to inadvertently discourage learners through excessive social comparison or high inference costs. In this study, we designed and evaluated a LAD grounded in the COPES model of self-regulated learning and tailored to a credit-bearing Massive Open Online Course (MOOC) using a data-driven approach. We conducted a randomized controlled trial with 8745 learners, comparing a control group, a dashboard without feedback, and a dashboard with ARCS-framed actionable feedback. The results showed that the dashboard with feedback significantly increased learners’ likelihood of verification (i.e., paying for the certification track), with mixed effects on engagement and no measurable impact on final grades. These findings suggest that dashboards are not uniformly beneficial: while feedback-supported LADs can enhance motivation and persistence, dashboards that lack interpretive support may impose cognitive burdens without improving outcomes. This study contributes to the literature on learning analytics by (1) articulating the design principles for theoretically and contextually grounded LADs and (2) providing experimental evidence on their impact in authentic MOOC settings.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163967</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>The Effect of IL-17A and Combined Mechanical Injury on Meniscal Tissue Integrity In Vitro</title>
<link>https://hdl.handle.net/1721.1/163966</link>
<description>The Effect of IL-17A and Combined Mechanical Injury on Meniscal Tissue Integrity In Vitro
Ahrens, Greta; Gellhaus, Florian; Weitkamp, Jan-Tobias; Behrendt, Peter; Cossais, François; Rolauffs, Bernd; Grodzinsky, Alan J; Kurz, Bodo
Objectives: Meniscal integrity is crucial for knee joint stability and the prevention of osteoarthritis (OA) development. Recent studies suggested that mechanical overload and interleukin (IL)-17A may be important intertwined players in meniscal degeneration, but a direct impact of IL-17A on the meniscus has not been investigated. Therefore, the aim of this study was to analyze the effect of IL-17A on meniscal tissue with and without combined mechanical injury (MI). Methods: Meniscal explant disks (1 mm height, 3 mm diameter) were isolated from bovine menisci (preserving the native tibial superficial zone) and exposed to IL-17A [0–100 ng/mL] and/or MI (single compression, 50% strain, strain rate 1 mm/sec). After three days of incubation in a serum-free medium, the proteoglycan release (sGAG; DMMB assay), mRNA level of matrix-degrading enzymes (qRT-PCR), aggrecan degradation (NITEGE immunostaining), and cell death (histomorphometry of nuclear blebbing/apoptosis and condensed nuclei/unspecified cell death) were determined. Statistics: one- and two-way ANOVA with Tukey’s multiple comparisons or Kruskal–Wallis with post hoc testing. Results: IL-17A significantly increased sGAG release in a dose-dependent manner. MI also significantly induced the release of sGAG, but the combination with IL-17A showed the highest levels. IL-17A and MI individually affected the mRNA levels for ADAMTS4 and MMP-13 only slightly, but their combination induced a significant increase in mRNA levels. Signals for the ADAMTS4-related aggrecan neoepitope NITEGE were elevated by IL-17A in superficial areas of the excised tissue and by MI in superficial and deeper areas. The combination of both stimuli intensified this signal further. MI significantly increased the number of cells with condensed nuclei and induced apoptosis in a small proportion of cells. IL-17A had no significant impact on the amount of condensed or apoptotic nuclei.
Conclusions: Our findings emphasize an interaction between inflammatory IL-17A cytokine signaling and mechanical stress, since IL-17A induced matrix degeneration in meniscal tissue that intensified in combination with trauma. The latter might create a post-traumatic environment that promotes meniscal degeneration and, subsequently, osteoarthritis progression.
</description>
<pubDate>Fri, 24 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163966</guid>
<dc:date>2025-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular hallmarks of excitatory and inhibitory neuronal resilience to Alzheimer’s disease</title>
<link>https://hdl.handle.net/1721.1/163965</link>
<description>Molecular hallmarks of excitatory and inhibitory neuronal resilience to Alzheimer’s disease
Castanho, Isabel; Naderi Yeganeh, Pourya; Boix, Carles A.; Morgan, Sarah L.; Mathys, Hansruedi; Prokopenko, Dmitry; White, Bartholomew; Soto, Larisa M.; Pegoraro, Giulia
Background: A significant proportion of individuals maintain cognition despite extensive Alzheimer’s disease (AD) pathology, a phenomenon known as cognitive resilience. Understanding the molecular mechanisms that protect these individuals could reveal therapeutic targets for AD. Methods: This study defines molecular and cellular signatures of cognitive resilience by integrating bulk RNA and single-cell transcriptomic data with genetics across multiple brain regions. We analyzed data from the Religious Orders Study and the Rush Memory and Aging Project (ROSMAP), including bulk RNA sequencing (n = 631 individuals) and multiregional single-nucleus RNA sequencing (n = 48 individuals). Subjects were categorized into AD, resilient, and control based on β-amyloid and tau pathology, and cognitive status. We identified and prioritized protected cell populations using whole-genome sequencing-derived genetic variants, transcriptomic profiling, and cellular composition. Results: Transcriptomics and polygenic risk analysis position resilience as an intermediate AD state. Only GFAP and KLF4 expression distinguished resilience from controls at tissue level, whereas differential expression of genes involved in nucleic acid metabolism and signaling differentiated AD and resilient brains. At the cellular level, resilience was characterized by broad downregulation of LINGO1 expression and reorganization of chaperone pathways, specifically downregulation of Hsp90 and upregulation of Hsp40, Hsp70, and Hsp110 families in excitatory neurons. MEF2C, ATP8B1, and RELN emerged as key markers of resilient neurons. Excitatory neuronal subtypes in the entorhinal cortex (ATP8B1+ and MEF2C-high) exhibited unique resilience signaling through activation of neurotrophin (BDNF-NTRK2, modulated by LINGO1) and angiopoietin (ANGPT2-TEK) pathways.
MEF2C+ inhibitory neurons were over-represented in resilient brains, and the expression of genes associated with rare genetic variants revealed vulnerable somatostatin (SST) cortical interneurons that survive in AD resilience. The maintenance of excitatory-inhibitory balance emerges as a key characteristic of resilience. Conclusions: We have defined molecular and cellular hallmarks of cognitive resilience, an intermediate state in the AD continuum. Resilience mechanisms include preserved neuronal function, balanced network activity, and activation of neurotrophic survival signaling. Specific excitatory neuronal populations appear to play a central role in mediating cognitive resilience, while a subset of vulnerable interneurons likely provides compensation against AD-associated hyperexcitability. This study offers a framework to leverage natural protective mechanisms to mitigate neurodegeneration and preserve cognition in AD.
</description>
<pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163965</guid>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nationwide Trends in Hospitalizations for Sudden Cardiac Arrest Before and During the COVID Outbreak</title>
<link>https://hdl.handle.net/1721.1/163964</link>
<description>Nationwide Trends in Hospitalizations for Sudden Cardiac Arrest Before and During the COVID Outbreak
Daoudi, Sarah; Furer, Ariel; John, Kevin; Chalhoub, Fadi; Chee, Jennifer; Infeld, Margaret; Elbaz-Greener, Gabby; Homoud, Munther; Udelson, James; Madias, Christopher; Rozen, Guy
Background/Objectives: Sudden cardiac arrest (SCA) accounts for ~50% of cardiovascular mortality in the U.S. Cardiovascular complications are common in acute and post-acute COVID-19 infection. We aimed to examine nationwide trends in SCA-related hospitalizations in the United States before and during the COVID-19 outbreak. Methods: Using data from the National Inpatient Sample, we conducted a retrospective analysis of hospitalizations for SCA in the U.S. between 2016 and 2020. Sociodemographic and clinical characteristics and in-hospital mortality were compared between the pre-COVID (2016–2019) and COVID (2020) eras. Multivariable analysis was performed to identify factors associated with mortality. Results: Among a weighted total of 153,100 SCA hospitalizations between 2016 and 2020, the median age was 65 years, 62.7% were male, and 66.6% were white. There was a trend towards fewer hospitalizations in 2020 compared to prior years (n = 28,585 vs. average n = 32,129, p = 0.07). In-hospital mortality remained unchanged between the pre-COVID and COVID eras (47.7% vs. 47.3%, p = 0.66). Increased mortality was associated with female sex (OR: 1.21; 95% CI: 1.15–1.28; p &lt; 0.001), non-white race (OR: 1.24; 95% CI: 1.15–1.28; p &lt; 0.001), history of renal failure (OR: 1.08; 95% CI: 1.02–1.15; p = 0.007), and diabetes (OR: 1.32; 95% CI: 1.25–1.39; p &lt; 0.001). In 2020, 1.5% of the study population was diagnosed with COVID-19 infection, which was found to be independently associated with increased in-hospital mortality (OR: 1.57; 95% CI: 1.27–1.95; p &lt; 0.001). Conclusions: In 2020, there was a trend towards a decrease in hospitalizations for SCA, while COVID-19 infection was independently associated with higher in-hospital mortality among patients admitted with SCA.
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163964</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Biosensor development for single-cell detection of glucuronate</title>
<link>https://hdl.handle.net/1721.1/163963</link>
<description>Biosensor development for single-cell detection of glucuronate
Nash, Jennifer Kaczmarek; Prather, Kristala LJ
Recent work in biosensors has shown promise for enabling high-throughput searches through large genetic libraries. However, just as physiological limitations and a lack of in-depth mechanistic knowledge can prevent us from achieving high titers in microbial systems, similar roadblocks can appear in the application of biosensors. Here, we characterized a previously developed transcription-factor (ExuR) based galacturonate biosensor for its other cognate ligand, glucuronate. Though the biosensor responded well to glucuronate under controlled experimental conditions, these results began to deviate from a well-behaved system when we explored the application of the sensor to different MIOX homologs. Through modifications to circuit architecture and culture conditions, we were able to decrease this variation and use these improved conditions to apply the biosensor to the separation of two closely related MIOX homologs.
</description>
<pubDate>Fri, 16 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163963</guid>
<dc:date>2023-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Strategies in engineering sustainable biochemical synthesis through microbial systems</title>
<link>https://hdl.handle.net/1721.1/163962</link>
<description>Strategies in engineering sustainable biochemical synthesis through microbial systems
Song, Yoseb; Prather, Kristala LJ
Growing environmental concerns and the urgency to address climate change have increased demand for the development of sustainable alternatives to fossil-derived fuels and chemicals. Microbial systems, possessing inherent biosynthetic capabilities, present a promising approach for achieving this goal. This review discusses the coupling of systems and synthetic biology to enable the elucidation and manipulation of microbial phenotypes for the production of chemicals that can substitute for petroleum-derived counterparts and contribute to advancing green biotechnology. The integration of artificial intelligence with metabolic engineering to facilitate precise and data-driven design of biosynthetic pathways is also discussed, along with the identification of current limitations and proposition of strategies for optimizing biosystems, thereby propelling the field of chemical biology towards sustainable chemical production.
</description>
<pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163962</guid>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Covert reciprocals: a scope-based analysis of reciprocal alternations</title>
<link>https://hdl.handle.net/1721.1/163960</link>
<description>Covert reciprocals: a scope-based analysis of reciprocal alternations
Wehbe, Jad
This paper argues that the class of predicates that participate in reciprocal alternations, like the seemingly 1-place predicate hug in Jane and Mary hugged, should in fact be analyzed as 2-place predicates with a covert reciprocal in object position. The main challenge for this analysis is that there are truth-conditional differences between covert reciprocals and their overt counterparts. Focusing on a few case studies, this paper will argue that these seemingly lexical differences can be reanalyzed in terms of scope, allowing the differences to be systematically predicted once appropriate scope restrictions on covert reciprocals are established. More specifically, I propose that covert reciprocals are simply reciprocals that have to be bound at the lowest possible scope position. I show that these seemingly 1-place predicates behave just like overt reciprocals, modulo the low-scope requirement, for example giving rise to homogeneity and non-maximality. I therefore conclude that in order to account systematically for these inferences, covert reciprocals (at least the case studies that the paper considers) must be treated as having the same LFs as low-scope overt reciprocals.
</description>
<pubDate>Tue, 30 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163960</guid>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>Robust resonant anomaly detection with NPLM</title>
<link>https://hdl.handle.net/1721.1/163959</link>
<description>Robust resonant anomaly detection with NPLM
Grosso, Gaia; Sengupta, Debajyoti; Golling, Tobias; Harris, Philip
In this study, we investigate the application of the New Physics Learning Machine (NPLM) algorithm as an alternative to the standard CWoLa method with Boosted Decision Trees (BDTs), particularly for scenarios with rare signal events. NPLM offers an end-to-end approach to anomaly detection and hypothesis testing by utilizing an in-sample evaluation of a binary classifier to estimate a log-density ratio, which can improve detection performance without prior assumptions on the signal model. We examine two approaches: (1) an end-to-end NPLM application in cases with reliable background modelling and (2) an NPLM-based classifier used for signal selection when accurate background modelling is unavailable, with subsequent performance enhancement through a hyper-test on multiple values of the selection threshold. Our findings show that NPLM-based methods outperform BDT-based approaches in detection performance, particularly in low signal injection scenarios, while significantly reducing epistemic variance due to hyperparameter choices. This work highlights the potential of NPLM for robust resonant anomaly detection in particle physics, setting a foundation for future methods that enhance sensitivity and consistency under signal variability.
</description>
<pubDate>Sun, 28 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163959</guid>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>Tackling the Cardio-Kidney-Metabolic Burden in Cancer</title>
<link>https://hdl.handle.net/1721.1/163958</link>
<description>Tackling the Cardio-Kidney-Metabolic Burden in Cancer
Nahle, Tarek; Shah, Viraj; Kunhiraman, Harikrishnan H.; Makram, Omar M.; Ahmed, Ola; Yerraguntla, Sandeep; Gopu, Gaurav; Vy, Jenny; Singh, Shivam; Borse, Tanvi; Kalinsky, Kevin; Deswal, Anita; Sadler, Diego; Chitalia, Vipul; Weintraub, Neal L.
Purpose of the Review This review aims to examine the clinical relevance of cardio-kidney-metabolic syndrome (CKMS) in oncology, highlighting its role as both a preexisting comorbidity and a consequence of cancer treatment. It aims to integrate CKMS staging into personalized cancer care. Recent Findings CKMS is a progressive syndrome marked by dysfunction across cardiovascular, renal, and metabolic systems. Cancer therapies—particularly hormonal agents, immune checkpoint inhibitors, and chemotherapeutics—can accelerate or reveal underlying CKMS through inflammatory and metabolic pathways. Early risk stratification based on CKMS stage enables more effective monitoring, referral, and therapeutic strategies. A stage-based, multidisciplinary approach tailored to cancer type and comorbidity burden is essential for optimizing outcomes. Summary With rising multimorbidity among cancer patients, recognizing and addressing CKMS is increasingly critical. Routine CKMS assessment in oncology offers a pathway for earlier intervention and potentially altering its course. A comprehensive, individualized care model based on CKMS stage is necessary to mitigate CKMS-related complications and deliver high-quality, integrated cancer care.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163958</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Initial checkout of the Psyche electric propulsion system</title>
<link>https://hdl.handle.net/1721.1/163957</link>
<description>Initial checkout of the Psyche electric propulsion system
Snyder, John S.; Kelly, Charles L.; Garner, Charles; Bradley, Nicholas; Johnson, Ian; Corey, Ron; Ream, Jodie B.; Weiss, Benjamin P.
NASA’s Psyche spacecraft launched on October 13, 2023, and soon afterward the mission operations team began spacecraft initial checkout activities. For the electric propulsion system, the feed system and thruster gimbals were first prepared and then the rest of the subsystem completed an initial operations test during thruster bakeout. Thrust for each thruster was measured across the full range of operating powers and was in good agreement with pre-flight expectations. A weeklong test of the spacecraft and mission operations plan during thrusting activities was successful, but a thruster burn-in phenomenon was observed during full power operation that was longer than expected based on previous flight history. Data accumulated during the initial checkout activities shows that this burn-in behavior is different for each thruster and suggests that it is a result of the thruster discharge transitioning between two different plasma modes that can be mitigated by reducing discharge power and by adjusting the thruster magnet current. At the conclusion of the checkout activities, the subsystem had accumulated 357 h of thrusting operations while consuming 18.5 kg of propellant and was fully ready to begin the cruise phase of the mission.
</description>
<pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163957</guid>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Derandomizing Logspace With a Small Shared Hard Drive</title>
<link>https://hdl.handle.net/1721.1/163956</link>
<description>Derandomizing Logspace With a Small Shared Hard Drive
Pyne, Edward
We obtain new catalytic algorithms for space-bounded derandomization. In the catalytic computation model introduced by (Buhrman, Cleve, Koucký, Loff, and Speelman STOC 2013), we are given a small worktape, and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. We prove that BPSPACE[S] ⊆ CSPACE[S, S^2], where BPSPACE[S] corresponds to randomized space-S computation, and CSPACE[S, C] corresponds to catalytic algorithms that use O(S) bits of workspace and O(C) bits of catalytic space. Previously, only BPSPACE[S] ⊆ CSPACE[S, 2^O(S)] was known. In fact, we prove a general tradeoff: for every α ∈ [1, 1.5], BPSPACE[S] ⊆ CSPACE[S^α, S^(3−α)]. We do not use the algebraic techniques of prior work on catalytic computation. Instead, we develop an algorithm that branches based on whether the catalytic tape is conditionally random, and instantiate this primitive in a recursive framework. Our result gives an alternate proof of the best known time-space tradeoff for BPSPACE[S], due to (Cai, Chakaravarthy, and van Melkebeek, Theory Comput. Sys. 2006). As a final application, we extend our results to solve search problems in CSPACE[S, S^2]. As far as we are aware, this constitutes the first study of search problems in the catalytic computing model.
</description>
<pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163956</guid>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Dopamine modulation of aggression</title>
<link>https://hdl.handle.net/1721.1/163820</link>
<description>Dopamine modulation of aggression
Dai, Bing; Lin, Dayu
Rationale Aggression is an innate social behavior prevalent across animal species. However, in modern human society, inter-personal aggression is considered disruptive and detrimental to both families and communities. Clinically, antipsychotics, which primarily target dopamine (DA) receptors, have been widely used to suppress hyper-aggression. However, the mechanisms underlying the effects of these antipsychotics remain incompletely understood. Objectives We reviewed key steps in brain DA synthesis and summarized genetic and pharmacological evidence supporting the role of the mesolimbic DA system in aggression. Next, we discussed recent circuit studies that elucidate the DA action in modulating aggression-related brain regions. These lines of evidence collectively suggest that DA acts on different brain regions to facilitate aggression and self-learning, and signals the valence of the fighting experience.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163820</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Singularities of Ricci flow and diffeomorphisms</title>
<link>https://hdl.handle.net/1721.1/163800</link>
<description>Singularities of Ricci flow and diffeomorphisms
Colding, Tobias H.; Minicozzi, William P.
We solve a well-known open problem in Ricci flow: Strong rigidity of cylinders. Strong rigidity is an illustration of a shrinker principle that uniqueness radiates out from a compact set. It implies that if one tangent flow at a future singular point is a cylinder, then all tangent flows are. At the heart of this problem in Ricci flow is comparing and recognizing metrics. This can be rather complicated because of the group of diffeomorphisms. Two metrics, that could even be the same, could look completely different in different coordinates. This is the gauge problem. Often it can be avoided if one uses some additional structure of the particular situation. The gauge problem is subtle for non-compact spaces without additional structure. We solve this gauge problem by solving a nonlinear system of PDEs. The PDE produces a diffeomorphism that fixes an appropriate gauge in the spirit of the slice theorem for group actions. We then show optimal bounds for the displacement function of the diffeomorphism. Strong rigidity relies on gauge fixing and several other new ideas. One of these is “propagation of almost splitting”, another is quadratic rigidity in the right gauge, and a third is an optimal polynomial growth bound for PDEs that holds in great generality.
</description>
<pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163800</guid>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>A Parametric, second-order cone representable model of fairness for decision-making problems</title>
<link>https://hdl.handle.net/1721.1/163799</link>
<description>A Parametric, second-order cone representable model of fairness for decision-making problems
Sundar, Kaarthik; Deka, Deepjyoti; Bent, Russell
The article develops a parametric model of fairness called “ε-fairness” that can be represented using a single second-order cone constraint and incorporated into existing decision-making problem formulations without impacting the complexity of solution techniques. We develop the model from the fundamental result of finite-dimensional norm equivalence in linear algebra and show that this model has a closed-form relationship to an existing metric for measuring fairness widely used in the literature. Finally, a simple case study on the optimal operation of a damaged power transmission network illustrates its effectiveness.
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163799</guid>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Semiclassical Measures for Complex Hyperbolic Quotients</title>
<link>https://hdl.handle.net/1721.1/163798</link>
<description>Semiclassical Measures for Complex Hyperbolic Quotients
Athreya, Jayadev; Dyatlov, Semyon; Miller, Nicholas
We study semiclassical measures for Laplacian eigenfunctions on compact complex hyperbolic quotients. Geodesic flows on these quotients are a model case of hyperbolic dynamical systems with different expansion/contraction rates in different directions. We show that the support of any semiclassical measure is either equal to the entire cosphere bundle or contains the cosphere bundle of a compact immersed totally geodesic complex submanifold. The proof uses the one-dimensional fractal uncertainty principle of Bourgain–Dyatlov (Ann. Math. (2) 187(3):825–867, 2018) along the fast expanding/contracting directions, in a way similar to the work of Dyatlov–Jézéquel (Ann. Henri Poincaré, 2023) in the toy model of quantum cat maps, together with a description of the closures of fast unstable/stable trajectories relying on Ratner theory.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163798</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>A divisor generating q-series and cumulants arising from random graphs</title>
<link>https://hdl.handle.net/1721.1/163797</link>
<description>A divisor generating q-series and cumulants arising from random graphs
Agarwal, Archit; Bhoria, Subhash C.; Eyyunni, Pramod; Maji, Bibekananda; Wakhare, Tanay
Uchimura, in 1987, introduced a probability generating function for a random variable X and using properties of this function, he discovered an interesting q-series identity. He further showed that the m-th cumulant with respect to the random variable X is nothing but the generating function for the generalized divisor function σ_{m−1}(n). Simon, Crippa, and Collenberg, in 1993, explored the G_{n,p}-model of a random acyclic digraph and defined a random variable γ_n^*(1). Quite interestingly, they found links between the limit of its mean and the generating function for the divisor function d(n). Later in 1997, Andrews, Crippa and Simon extended these results using q-series techniques. They calculated the limit of the mean and the variance of the random variable γ_n^*(1), which correspond to the first and second cumulants. In this paper, we generalize the result of Andrews, Crippa and Simon by calculating the limit of the t-th cumulant in terms of the generalized divisor function. Furthermore, we also discover limit forms for identities of Uchimura and Dilcher. This provides a fourth side to the Uchimura–Ramanujan–divisor-type three-way partition identities expounded by the first four authors recently.
</description>
<pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163797</guid>
<dc:date>2025-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Arithmetic properties encoded in undermonoids</title>
<link>https://hdl.handle.net/1721.1/163796</link>
<description>Arithmetic properties encoded in undermonoids
Gotti, Felix; Li, Bangzheng
Let M be a cancellative and commutative monoid. A submonoid N of M is called an undermonoid if the Grothendieck groups of M and N coincide. For a given property p, we are interested in providing an answer to the following main question: does it suffice to check that all undermonoids of M satisfy p to conclude that all submonoids of M satisfy p? In this paper, we give a positive answer to this question for the property of being atomic, and then we prove that if M is hereditarily atomic (i.e., every submonoid of M is atomic), then M must satisfy the ACCP, proving a recent conjecture posed by Vulakh and the first author. We also give positive answers to our main question for the following well-studied factorization properties: the bounded factorization property, half-factoriality, and length-factoriality. Finally, we determine all the monoids whose submonoids/undermonoids are half-factorial/length-factorial.
</description>
<pubDate>Fri, 19 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163796</guid>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>The rocky road to modernity: an assessment of Pakistan’s 75 years</title>
<link>https://hdl.handle.net/1721.1/163795</link>
<description>The rocky road to modernity: an assessment of Pakistan’s 75 years
Hoodbhoy, Pervez
To assess whether Pakistan is moving towards or away from modernity, I examine here the evolution of three key aspects: the overall idea system of society, the political system, and national culture. A meaningful analysis must begin with pre-colonial India, examine how British rule made fundamental changes, and trace the emergence of Pakistan as a result of Muslim religious identity. Although the beginnings of Pakistani modernity were shaky, the earlier inclination was to equalise with the developed world at large. In the mid-1980s this changed profoundly with the advent of political Islam, explicit repudiation of overt forms of western modernity, and a sharply increased tendency to seek exemplars in the Islamic past. That trend has since accelerated under the influence of social media. But most Pakistanis, I argue, still want to hedge their bets and seek the fruits of modernity within a framework that they perceive as not inimical to their faith in Islam.
</description>
<pubDate>Mon, 12 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163795</guid>
<dc:date>2022-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Precipitate Size in GRCop-42 and GRCop-84 Cu-Cr-Nb Alloy Gas Atomized Powder and L-PBF Additive Manufactured Material</title>
<link>https://hdl.handle.net/1721.1/163794</link>
<description>Precipitate Size in GRCop-42 and GRCop-84 Cu-Cr-Nb Alloy Gas Atomized Powder and L-PBF Additive Manufactured Material
Seltzman, AH; Wukitch, SJ
Laser powder bed fusion (L-PBF) of Glenn Research Copper 42 or 84 (GRCop-42 or GRCop-84) produces a Cr2Nb precipitation-hardened high-conductivity copper alloy with tensile strength superior to other competing copper alloys. Precipitate diameters within GRCop-42 gas-atomized powder increase with powder diameter due to slower cooling rates; however, unlike GRCop-84, no threshold diameter above which extensive precipitate agglomerations form was observed in GRCop-42. Large Cr2Nb crystals were observed in GRCop-42 powder particles, implying formation within the crucible melt. A consistent precipitate volume of ~7% over a range of powder particle diameters indicated a consistent atomization process. Occasional voids were observed in GRCop-42 powder. Precipitate size was refined in L-PBF GRCop-42 to a greater extent than in GRCop-84, improving Orowan strengthening; however, this benefit was lost after heat treatment due to greater coarsening of precipitates. Precipitates in GRCop-42 accumulated on grain boundaries during heat treatment to a greater extent than in GRCop-84.
</description>
<pubDate>Thu, 26 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163794</guid>
<dc:date>2023-01-26T00:00:00Z</dc:date>
</item>
<item>
<title>US-Russian partnerships in science: working with differences</title>
<link>https://hdl.handle.net/1721.1/163793</link>
<description>US-Russian partnerships in science: working with differences
Dezhina, Irina; Wood, Elizabeth A
In the early 1990s, Russian and US observers were pessimistic about Russian science and its global integration. Yet scientists from the two countries were actively collaborating in new ways nonetheless. In order to explore the nature of those collaborations, we conducted open-ended interviews with 13 scientists in the US and 13 in Russia who collaborated trans-nationally in 1995–2014. Our results suggest that recognizing and working with differences benefited these colleagues. Despite ongoing political tensions and differences in scientific cultures, respondents told us that understanding those differences – in funding, cultures of doing science, institutional structures, and treatment of graduate students – helped them avoid missteps. Respect for each other’s country’s scientific contributions, interpersonal diplomacy, and personal interconnections further strengthened their work together. Diaspora scientists, in particular, played a positive role as mediators and cultural interpreters.
</description>
<pubDate>Wed, 16 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163793</guid>
<dc:date>2022-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Agrammatic output in non-fluent, including Broca’s, aphasia as a rational behavior</title>
<link>https://hdl.handle.net/1721.1/163792</link>
<description>Agrammatic output in non-fluent, including Broca’s, aphasia as a rational behavior
Fedorenko, Evelina; Ryskin, Rachel; Gibson, Edward
Background: Speech of individuals with non-fluent, including Broca's, aphasia is often characterized as "agrammatic" because their output mostly consists of nouns and, to a lesser extent, verbs and lacks function words, like articles and prepositions, and correct morphological endings. Among the earliest accounts of agrammatic output in the early 1900s was the "economy of effort" idea whereby agrammatic output is construed as a way of coping with increases in the cost of language production. This idea resurfaced in the 1980s, but in general, the field of language research has largely focused on accounts of agrammatism that postulated core deficits in syntactic knowledge.&#13;
Aims: We here revisit the economy of effort hypothesis in light of increasing emphasis in cognitive science on rational and efficient behavior.&#13;
Main contribution: The critical idea is as follows: there is a cost per unit of linguistic output, and this cost is greater for patients with non-fluent aphasia. For a rational agent, this increase leads to shorter messages. Critically, the informative parts of the message should be preserved and the redundant ones (like the function words and inflectional markers) should be omitted. Although economy of effort is unlikely to provide a unifying account of agrammatic output in all patients (the relevant population is too heterogeneous and the empirical landscape too complex for any single-factor explanation), we argue that the idea of agrammatic output as a rational behavior was dismissed prematurely and appears to provide a plausible explanation for a large subset of the reported cases of expressive aphasia.&#13;
Conclusions: The rational account of expressive agrammatism should be evaluated more carefully and systematically. On the basic research side, pursuing this hypothesis may reveal how the human mind and brain optimize communicative efficiency in the presence of production difficulties. And on the applied side, this construal of expressive agrammatism emphasizes the strengths of some patients to flexibly adapt utterances in order to communicate in spite of grammatical difficulties; and focusing on these strengths may be more effective than trying to "fix" their grammar.
</description>
<pubDate>Fri, 18 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163792</guid>
<dc:date>2022-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Burns on Strauss’s Liberating Liberal Education</title>
<link>https://hdl.handle.net/1721.1/163791</link>
<description>Burns on Strauss’s Liberating Liberal Education
Rabieh, Linda R.
Leo Strauss on Democracy, Technology, and Liberal Education is an invaluable source of historical learning and philosophic guidance. Timothy W. Burns provides us with an in-depth and careful study of four important writings by Leo Strauss that examine the challenges faced by modern democracy and the ways in which liberal education can supply a modest remedy. According to Burns, Strauss understands the problems facing modern democracy to be rooted in the ascendancy of technology as the ultimate political aim, which prioritizes acquiring the means to pursue whatever ends we happen to desire rather than the good life itself (9). Subsequent developments in the service of this goal have led to our present situation, which Strauss characterizes as “hardly more than the interplay of mass taste with high grade but strictly speaking unprincipled efficiency” (13; see also 35, 69, 75–78). Burns sharpens his analysis of Strauss by comparing Strauss’s understanding of technology with that of Heidegger. In contrast to Heidegger’s argument for a “new thinking” to address modernity’s ills, Strauss looks to an older thinking from which he gleans an argument for liberal education, which he describes as the cultivation of “an aristocracy within democracy,” i.e., a class within society whose thinking is informed by both serious education in tradition and the study of the Great Books (15; see also 21, 84, 166). Although Burns’s book addresses many aspects of Strauss’s account of the way in which technology came to dominate politics and shape our modern world, I will focus on the thread throughout these essays that explains what Strauss means by liberal education and why it is needed today.
</description>
<pubDate>Wed, 18 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163791</guid>
<dc:date>2023-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Meta-UNet: enhancing skin-lesion segmentation with multimodal feature integration and uncertainty estimation</title>
<link>https://hdl.handle.net/1721.1/163790</link>
<description>Meta-UNet: enhancing skin-lesion segmentation with multimodal feature integration and uncertainty estimation
Sikha, O. K.; Stone, Alaysia L. B.; González Ballester, Miguel A.
Purpose Medical image segmentation plays a crucial role in diagnostic pipelines. This study investigates the integration of lesion-specific metadata with image data to enhance segmentation accuracy and reduce predictive uncertainty. Methods The standard U-Net architecture was modified to incorporate lesion-specific metadata (Meta-UNet). Various integration strategies, including addition, weighted addition, and embedding layers, were evaluated. Additionally, a Bayesian Meta-UNet with Monte Carlo Dropout (MCD) was developed to assess the impact of metadata integration on model uncertainty. Uncertainty was quantified using measures such as Confidence Maps, Entropy, Mutual Information, and Expected Pairwise Kullback–Leibler divergence (EPKL). An aggregation strategy was also introduced to provide a single comprehensive uncertainty score per image. Results Meta-UNet outperformed standard U-Net across PH2, ISIC 2018, and HAM10000 datasets. On PH2, it achieved 84.64% accuracy and 90.62% Intersection over Union (IoU), compared to 83.36% and 89.19%. On ISIC 2018, U-Net scored 71.02 ± 6.69 IoU and 79.89 ± 5.09 Dice. On HAM10000, Meta-UNet achieved 88.66 ± 6.09 IoU and 93.42 ± 5.19 Dice. Meta-UNet reduced uncertainty (e.g., 0.149 vs. 0.1745), highlighting the benefit of metadata integration in improving segmentation accuracy and model confidence. Conclusion Integrating lesion-specific metadata into the U-Net architecture significantly improves segmentation accuracy and reduces predictive uncertainty. The inclusion of metadata enhances model confidence and reliability, underscoring its potential to strengthen diagnostic segmentation pipelines.
</description>
<pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163790</guid>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>Increasing the quantum tunneling probability through a learned ancilla-assisted protocol</title>
<link>https://hdl.handle.net/1721.1/163789</link>
<description>Increasing the quantum tunneling probability through a learned ancilla-assisted protocol
Testa, Renzo; Rodriguez Garcia, Alejandro; d’Onofrio, Alberto; Trombettoni, Andrea; Benatti, Fabio; Anselmi, Fabio
Increasing the probability of quantum tunneling between two states, while keeping constant the resources of the underlying physical system, is a task of key importance in several physical contexts and platforms, including ultracold atoms confined by double-well potentials and superconducting qubits. We propose a novel ancilla-assisted protocol showing that when a quantum system—such as a qubit—is coupled to an ancilla, one can learn the optimal ancillary component and its coupling to increase the tunneling probability. As a case study, we consider a quantum system that, due to the presence of an energy detuning between two modes, cannot transfer particles by tunneling from one mode to the other. However, it can do so through a learned coupling with an ancillary system characterized by a detuning no smaller than that of the primary system. We provide several illustrative examples for the paradigmatic case of a two-mode system and a two-mode ancilla in the presence of interacting particles. This reduces to a qubit coupled to an ancillary qubit in the case of one particle in the system and one in the ancilla. Our proposal provides an effective method to increase the tunneling probability in all those physical situations where no direct improvement of the system parameters, such as the tunneling coefficient or energy detuning, is either possible or resource efficient. Finally, we also argue that the proposed strategy is not hampered by weak coupling to noisy environments.
</description>
<pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163789</guid>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Attitudes, aboutness, and indirect restriction</title>
<link>https://hdl.handle.net/1721.1/163788</link>
<description>Attitudes, aboutness, and indirect restriction
von Fintel, Kai; Pasternak, Robert
On its surface, a sentence like If Laura becomes a zombie, she wants you to shoot her looks like a plain conditional with the attitude want in its consequent. However, the most salient reading of this sentence is not about the desires of a hypothetical zombie-Laura. Rather, it asserts that the actual, non-zombie Laura has a certain restricted attitude: her present desires, when considering only possible states of affairs in which she becomes a zombie, are such that you shoot her. This can be contrasted with the shifted reading about zombie-desires that arises with conditional morphosyntax, e.g., If Laura became a zombie, she would want you to shoot her. Furthermore, as Blumberg and Holguín (J Semant 36(3):377–406, 2019) note, restricted attitude readings can also arise in disjunctive environments, as in Either a lot of people are on the deck outside, or I regret that I didn’t bring more friends. We provide a novel analysis of restricted and shifted readings in conditional and disjunctive environments, with a few crucial features. First, both restricted and shifted attitude conditionals are in fact “regular” conditionals with attitudes in their consequents, which accords with their surface-level appearance and contrasts with Pasternak’s (The mereology of attitudes, Ph.D. thesis, Stony Brook University, Stony Brook, NY, 2018) Kratzerian approach, in which the if-clause restricts the attitude directly. Second, whether the attitude is or is not shifted—i.e., zombie versus actual desires—is dependent on the presence or absence of conditional morphosyntax. And third, the restriction of the attitude is effected by means of aboutness, a concept for which we provide two potential implementations. We conclude by discussing our analysis’s prospective repercussions for the theory of conditionals more generally.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163788</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Sulfated dietary fiber protects gut microbiota from antibiotics</title>
<link>https://hdl.handle.net/1721.1/163787</link>
<description>Sulfated dietary fiber protects gut microbiota from antibiotics
Wu, Fuqing; Yu, Xiaoqian A.; Angeles-Albores, David; Erdman, Susan E.; Alm, Eric J.
Background Antibiotics, while essential for combating pathogens, also disrupt commensal bacteria, leading to gut microbiota imbalance and associated diseases. However, strategies to mitigate such collateral damage remain largely underexplored. Result In this study, we found that fucoidan, a marine polysaccharide derived from brown seaweed, provides broad-spectrum growth protection against multiple classes of antibiotics for human gut microbial isolates in vitro and for fecal communities ex vivo. This protective effect is dependent on the structural integrity, molecular weight, and sulfur content of the polysaccharide. Transcriptomic analysis showed that while fucoidan had minimal impact on baseline gene expression, it counteracted about 60% of the genes induced by kanamycin, suggesting a potential inhibition of kanamycin. Mass spectrometry results further showed that this inhibition may be due to the non-specific binding of fucoidan to kanamycin in solution. Finally, animal model experiments revealed that fucoidan facilitated the recovery of gut microbes following antibiotic treatment in vivo. Conclusion These findings suggest fucoidan could serve as a potential intervention to help protect gut microbiota during antibiotic therapy. Further studies are needed to evaluate its clinical potential and ensure it does not compromise antimicrobial efficacy.
</description>
<pubDate>Wed, 06 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163787</guid>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Additivity, Haag duality, and non-invertible symmetries</title>
<link>https://hdl.handle.net/1721.1/163786</link>
<description>Additivity, Haag duality, and non-invertible symmetries
Shao, Shu-Heng; Sorce, Jonathan; Srivastava, Manu
The algebraic approach to quantum field theory focuses on the properties of local algebras, whereas the study of (possibly non-invertible) global symmetries emphasizes global aspects of the theory and spacetime. We study connections between these two perspectives by examining how either of two core algebraic properties — “additivity” or “Haag duality” — is violated in a 1+1D CFT or lattice model restricted to the symmetric sector of a general global symmetry. For the Verlinde symmetry of a bosonic diagonal RCFT, we find that additivity is violated whenever the symmetry algebra contains an invertible element, while Haag duality is violated whenever it contains a non-invertible element. We find similar phenomena for the Kramers-Wannier and Rep(D8) non-invertible symmetries on spin chains.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163786</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of the Λb⁰ → J/ψ Ξ⁻K⁺ and Ξb⁰ → J/ψ Ξ⁻π⁺ decays</title>
<link>https://hdl.handle.net/1721.1/163785</link>
<description>Observation of the Λb⁰ → J/ψ Ξ⁻K⁺ and Ξb⁰ → J/ψ Ξ⁻π⁺ decays
The first observation of the Ξb⁰ → J/ψ Ξ⁻π⁺ decay and the most precise measurement of the branching fraction of the Λb⁰ → J/ψ Ξ⁻K⁺ decay are reported, using proton-proton collision data from the LHCb experiment collected in 2016–2018 at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb⁻¹. Using the Λb⁰ → J/ψ Λ and Ξb⁻ → J/ψ Ξ⁻ decays as normalisation channels, the ratios of branching fractions are measured to be B(Λb⁰ → J/ψ Ξ⁻K⁺)/B(Λb⁰ → J/ψ Λ) = (1.17 ± 0.14 ± 0.08) × 10⁻² and B(Ξb⁰ → J/ψ Ξ⁻π⁺)/B(Ξb⁻ → J/ψ Ξ⁻) = (11.9 ± 1.4 ± 0.6) × 10⁻², where the first uncertainty is statistical and the second systematic.
</description>
<pubDate>Mon, 28 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163785</guid>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporating teacher effect when modeling student engagement in smart STEM classrooms: a cluster analysis</title>
<link>https://hdl.handle.net/1721.1/163784</link>
<description>Incorporating teacher effect when modeling student engagement in smart STEM classrooms: a cluster analysis
Shreeve, Kelly; Perry, Anthony; Cassidy, Michael; Jessen Eller, Kathryn; Price, Beth; Jackson, Brandy; Celi, Leo; Lourentzou, Ismini; Hendrik, Luk
Student engagement during learning serves as a critical predictor of academic success and plays a pivotal role in nurturing interest and readiness for future careers. As digital platforms become increasingly important to learning, it is essential that we understand how the interactions that students have with them reflect their engagement with learning. Previous research has often modeled engagement in a fully online context, where students pursue lessons independently and outside the influence of the classroom, paced and structured by digital systems. However, in STEM (Science, Technology, Engineering, and Math) subjects—and many others—learning more frequently happens in a physical classroom setting, under the guidance of a teacher, and involves interactions with other students and tangible objects. Here digital materials are used to scaffold and support learning but are not typically the focus of where learning happens. To study how student interactions with digital materials in these settings might allow us to measure, evaluate and help teachers enhance engagement, we have developed and deployed a smart digital learning platform that guides instruction and captures real-time multimodal student learning events in the physical STEM classroom. Previously we have shown that a subset of student interactions measured with this platform can be used to model student learning and generate human-like insights into engagement. Here we report on the significant influence that teachers have on student interactions with our smart platform in the STEM classroom, and the impact that this has on evaluating their engagement with learning. In an analysis of 108 high school students that used the platform to complete a 19-lesson data science curriculum in 5 different classrooms, we found significant differences between teachers both in the measured time students spent on the lesson and the percentage of the lesson they completed. In this setting, taking teacher influence into account improves the outcomes of our machine learning clustering models that group students based on their level of engagement. These findings inform how we develop smart classroom technology and machine learning applications that are globally informed but locally relevant, and support teachers to enhance student engagement and learning outcomes in dynamic and highly variable STEM classroom learning environments.
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163784</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>On determining αs(mZ) from dijets in e+e− thrust</title>
<link>https://hdl.handle.net/1721.1/163783</link>
<description>On determining αs(mZ) from dijets in e+e− thrust
Benitez, Miguel A.; Hoang, André H.; Mateu, Vicent; Stewart, Iain W.; Vita, Gherardo
We update a previous N³LL′+O(αs³) determination of the strong coupling from a global fit to thrust data by including newly available perturbative ingredients, upgrading the renormalization scales to include a fully canonical scaling region, and implementing the log resummation in a way which ensures the integrated cross section is unaffected by the leading 1/Q hadronization power corrections. Detailed discussions are provided concerning the stability of the results under variations of the fit range and the importance of summing up higher-order logarithmic terms for convergence and stability. We show that high-precision results can be achieved even when carrying out a more conservative fit by restricting the dataset to a region which is more clearly dominated by dijet events. This leads to αs(mZ) = 0.1136 ± 0.0012 with χ²/dof = 0.86, fully compatible with earlier results using a larger fit range. We also demonstrate that a number of additional effects associated with power corrections have a small impact on this fit result, including modifications to the renormalon subtraction scheme for dijet power corrections and the inclusion of three-jet power correction models. The fit is also shown to provide very good agreement with data outside the fit range.
</description>
<pubDate>Fri, 25 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163783</guid>
<dc:date>2025-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the ψ(2S) to J/ψ cross-section ratio as a function of centrality in PbPb collisions at √sNN = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/163782</link>
<description>Measurement of the ψ(2S) to J/ψ cross-section ratio as a function of centrality in PbPb collisions at √sNN = 5.02 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; LHCb collaboration
The ratio of prompt production cross-sections of ψ(2S) and J/ψ mesons in their dimuon final state is measured as a function of centrality, using data collected by the LHCb detector in PbPb collisions at √sNN = 5.02 TeV, for the first time in the forward rapidity region. The measured ratio shows no dependence on the collision centrality, and is compared to the latest theory predictions and to recent measurements in the literature.
</description>
<pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163782</guid>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Legal causation*</title>
<link>https://hdl.handle.net/1721.1/163781</link>
<description>Legal causation*
Byrne, Thomas
I propose a new formalist account of legal (/proximate) causation – one that holds legal causation to be a matter of amoral, descriptive fact. The account starts with a metaphysical relation, akin to but distinct from common-sense causation, and it argues that legal causation aligns exactly with that relation; it is unified and principled.
</description>
<pubDate>Fri, 14 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163781</guid>
<dc:date>2022-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Space Architecture in Microgravity: TESSERAE Project for Large Scale Space Structures</title>
<link>https://hdl.handle.net/1721.1/163780</link>
<description>Space Architecture in Microgravity: TESSERAE Project for Large Scale Space Structures
Ekblaw, Ariel
NASA and international partners are planning a crewed return to the lunar surface in this decade, with the explicit long-term goal of establishing sustainable lunar habitat infrastructure. International space agencies and several space entrepreneurs have shared plans for human missions to Mars in the 2030s. A menagerie of “new space” start-up companies is poised to support extensive activity for in-space habitation. Space exploration is entering an age of burgeoning commercial movement, fueled not only by the unique science experiments performed in microgravity but also by space tourism and a need for inhabitable next-generation space architecture. If designers such as architects, engineers, and space structure practitioners are to democratize access to space and challenge the prevailing paradigm of space as an exclusive and inaccessible domain, they must build space architecture that can scale to welcome, safeguard, and inspire humankind. Our space structures research program applies biomimetic principles to design modular, reconfigurable, and self-assembling space architecture. Currently, the team includes electrical and mechanical engineers, designers, a university-trained architect, and a spaceflight mission integration specialist.
</description>
<pubDate>Mon, 21 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163780</guid>
<dc:date>2022-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Infrastructure, Revenue, and Services: Non-State Governance in Iraq’s Disputed Territories</title>
<link>https://hdl.handle.net/1721.1/163779</link>
<description>Infrastructure, Revenue, and Services: Non-State Governance in Iraq’s Disputed Territories
Cancian, Matthew; Greenwald, Diana B.
While states and non-state armed groups often engage in militarised conflict over contested territory, at other times they co-govern in a tenuous equilibrium. Using a survey of over 1,600 Kurdish soldiers (Peshmerga) and elite interviews, we investigate local variation in shared governance in one such context – the disputed territories of northern Iraq. Despite the area being under Kurdish military control, the Iraqi government continued to provide services in districts where it had pre-existing infrastructural capacity. However, in revenue-producing districts, Kurdish actors appropriated infrastructural power to provide services themselves. This illustrates that non-state governance strategies, and their outputs, can vary locally.
</description>
<pubDate>Wed, 05 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163779</guid>
<dc:date>2022-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Which Information Matters? Measuring Landlord Assessment of Tenant Screening Reports</title>
<link>https://hdl.handle.net/1721.1/163778</link>
<description>Which Information Matters? Measuring Landlord Assessment of Tenant Screening Reports
So, Wonyoung
This research studies how tenant screening services’ presentation of information influences landlord decisions. Tenant screening services utilize criminal records, eviction records, and credit score databases to produce reports that landlords use to inform their decisions about who to rent to. However, little is known about how landlords assess the information presented by tenant screening reports. Through a behavioral experiment with landlords using simulated tenant screening reports, this study shows that landlords use blanket screening policies, that they conflate the existence of tenant records with outcomes (e.g., eviction filings with executed evictions), and that they display, on average, tendencies toward automation bias that are influenced by the risk assessments and scores presented by tenant screening reports. I argue that maintaining blanket screening policies and automation bias, combined with the downstream effects of creating and using racially biased eviction and criminal records, means that people of color will inevitably experience disproportionate exclusion from rental housing due to perceived “risk” on the part of landlords.
</description>
<pubDate>Tue, 30 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163778</guid>
<dc:date>2022-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Learning to make noise: toward a process model of artistic practice within experimental music scenes</title>
<link>https://hdl.handle.net/1721.1/163777</link>
<description>Learning to make noise: toward a process model of artistic practice within experimental music scenes
Woods, Peter J
Emerging at the intersection of industrial, punk, electronic music, and avant-garde jazz, noise music represents a niche subgenre reliant on loud, discordant, and arrhythmic sounds to make music. Yet despite its place within the (broadly defined) experimental music tradition, research into experimental music education has largely overlooked the genre. In response, I explore noise music through the lens of situated learning theory by addressing the following research question: how do noise musicians develop their artistic practice? To do so, I present findings from a comparative case study centered on two intertwined experimental music concert and workshop series focused on noise music. I begin by analyzing interview data from seventeen featured artists to construct a process model of artistic practice shared between musicians. I then employ bidirectional artifact analysis to trace the development of one novice participant in the series through this model. In turn, these findings not only illuminate how experimental musicians learn within informal settings but provide a potential model of learning for informal education communities more broadly. This study also holds implications for situated learning theory by asserting the influence of non-anthropocentric actors within communities of practice.
</description>
<pubDate>Fri, 15 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163777</guid>
<dc:date>2022-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>Experiencer troubles: A reappraisal of the predicate-based asymmetry in child passives</title>
<link>https://hdl.handle.net/1721.1/163776</link>
<description>Experiencer troubles: A reappraisal of the predicate-based asymmetry in child passives
Aravind, Athulya; Koring, Loes
Children’s understanding of passives of certain mental state predicates appears to lag behind passives of so-called actional predicates, an asymmetry that has posed a major empirical challenge for theories of passive acquisition. This paper argues against the dominant view in the literature that treats the predicate-based asymmetry as theoretically irrelevant. We instead propose a novel account that locates the problem in the syntax of experiencer constructions. Synthesizing theoretical and developmental evidence, we build a case for an early misanalysis of transitive subject-experiencer constructions as unaccusatives – structures that, by design, cannot passivize.
</description>
<pubDate>Mon, 17 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163776</guid>
<dc:date>2022-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Challenges and Opportunities of Machine Learning on Neutron and X-ray Scattering</title>
<link>https://hdl.handle.net/1721.1/163775</link>
<description>Challenges and Opportunities of Machine Learning on Neutron and X-ray Scattering
Drucker, Nathan C; Liu, Tongtong; Chen, Zhantao; Okabe, Ryotaro; Chotrattanapituk, Abhijatmedhi; Nguyen, Thanh; Wang, Yao; Li, Mingda
Machine learning has been highly successful in boosting the research for neutron and X-ray scattering in the past few years [1, 2]. For diffraction, machine learning has shown great promise in phase mapping [3, 4] and crystallographic information determination [5, 6]. In small-angle scattering, machine learning shows the power in reaching super-resolution [7, 8], reconstructing structures for macromolecules [9], and building structure-property relations [10]. As for absorption spectroscopy, machine learning has enabled the rapid inverse search for optimized structures [11, 12] with improved spectral interpretability [13, 14]. Overall, as a data-driven approach, the success of the machine-learning-based scattering analysis depends on a few criteria, including: • Quantity of available experimental data, and feasibility to extract certain data labels; • Quality of experimental data that can separate the intrinsic effect (e.g., materials properties) from extrinsic influence (e.g., instrumental or data artifacts); • Feasibility to generate a high volume of computational data; • Accuracy of computational data that can simulate the experimental data.
</description>
<pubDate>Wed, 12 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163775</guid>
<dc:date>2022-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Scaled Process Priors for Bayesian Nonparametric Estimation of the Unseen Genetic Variation</title>
<link>https://hdl.handle.net/1721.1/163774</link>
<description>Scaled Process Priors for Bayesian Nonparametric Estimation of the Unseen Genetic Variation
Camerlenghi, Federico; Favaro, Stefano; Masoero, Lorenzo; Broderick, Tamara
There is a growing interest in the estimation of the number of unseen features, mostly driven by biological applications. A recent work brought out a peculiar property of the popular completely random measures (CRMs) as prior models in Bayesian nonparametric (BNP) inference for the unseen-features problem: for fixed prior parameters, they all lead to a Poisson posterior distribution for the number of unseen features, which depends on the sampling information only through the sample size. CRMs are thus not a flexible prior model for the unseen-features problem and, while the Poisson posterior distribution may be appealing for analytical tractability and ease of interpretability, its independence from the sampling information makes the BNP approach a questionable oversimplification, with posterior inferences being completely determined by the estimation of the unknown prior parameters. In this article, we introduce the stable-Beta scaled process (SB-SP) prior, and we show that it allows us to enrich the posterior distribution of the number of unseen features arising under CRM priors, while maintaining its analytical tractability and interpretability. That is, the SB-SP prior leads to a negative Binomial posterior distribution, which depends on the sampling information through the sample size and the number of distinct features, with corresponding estimates being simple, linear in the sampling information, and computationally efficient. We apply our BNP approach to synthetic data and to real cancer genomic data, showing that: (i) it outperforms the most popular parametric and nonparametric competitors in terms of estimation accuracy; (ii) it provides improved coverage for the estimation with respect to a BNP approach under CRM priors. Supplementary materials for this article are available online.
</description>
<pubDate>Thu, 29 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163774</guid>
<dc:date>2022-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Precision DIS thrust predictions for HERA and EIC</title>
<link>https://hdl.handle.net/1721.1/163773</link>
<description>Precision DIS thrust predictions for HERA and EIC
Ee, June-Haak; Kang, Daekyoung; Lee, Christopher; Stewart, Iain W.
We present predictions for the DIS 1-jettiness event shape τ1b, or DIS thrust, using the framework of Soft Collinear Effective Theory (SCET) for factorization, resummation of large logarithms, and rigorous treatment of nonperturbative power corrections, matched to fixed-order QCD away from the resummation region. Our predictions reach next-to-next-to-next-to-leading-logarithmic (N3LL) accuracy in resummed perturbation theory, matched to O(αs2) fixed-order QCD calculations obtained using the program NLOJet++. We include a rigorous treatment of hadronization corrections, which are universal across different event shapes and kinematic variables x and Q at leading power, and supplement them with a systematic scheme to remove O(ΛQCD) renormalon ambiguities in their definition. The framework of SCET allows us to connect smoothly the nonperturbative, resummation, and fixed-order regions, whose relative importance varies with x and Q, and to rigorously estimate theoretical uncertainties, across a broad range of x and Q covering existing experimental results from HERA as well as expected new measurements from the upcoming Electron-Ion Collider (EIC). Our predictions will serve as an important benchmark for the EIC program, enabling the precise determination of the QCD strong coupling αs and the universal nonperturbative first moment parameter Ω1.
</description>
<pubDate>Thu, 24 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163773</guid>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Semi-classical dilaton gravity and the very blunt defect expansion</title>
<link>https://hdl.handle.net/1721.1/163772</link>
<description>Semi-classical dilaton gravity and the very blunt defect expansion
Kruthoff, Jorrit; Levine, Adam
We explore dilaton gravity with general dilaton potentials in the semi-classical limit viewed both as a gas of blunt defects and also as a semi-classical theory in its own right. We compare the exact defect gas picture with that obtained by naively canonically quantizing the theory in geodesic gauge. We find a subtlety in the canonical approach due to a non-perturbative ambiguity in geodesic gauge. Unlike in JT gravity, this ambiguity arises already at the disk level. This leads to a distinct mechanism from that in JT gravity by which the semi-classical approximation breaks down at low temperatures. Along the way, we propose that new, previously un-studied saddles contribute to the density of states of dilaton gravity. This in particular leads to a re-interpretation of the disk-level density of states in JT gravity in terms of two saddles with fixed energy boundary conditions: the disk, which caps off on the outer horizon, and another, sub-leading complex saddle which caps off on the inner horizon. When the theory is studied using a defect expansion, we show how the smooth classical geometries of dilaton gravity arise from a dense gas of very blunt defects in the GN → 0 limit. The classical saddle points arise from a balance between the attractive force on the defects toward negative dilaton and a statistical pressure from the entropy of the configuration. We end with speculations on the nature of the space-like singularity present inside black holes described by certain dilaton potentials.
</description>
<pubDate>Tue, 22 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163772</guid>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the branching fraction ratio RK at large dilepton invariant mass</title>
<link>https://hdl.handle.net/1721.1/163771</link>
<description>Measurement of the branching fraction ratio RK at large dilepton invariant mass
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
A test of lepton universality between muons and electrons is performed using B+ → K+ℓ+ℓ− decays (where ℓ = e, μ), in the dilepton invariant-mass-squared region above 14.3 GeV2/c4. The data used for the measurement consist of beauty meson decays produced in proton-proton collisions, corresponding to an integrated luminosity of 9 fb−1, collected by the LHCb experiment between 2011 and 2018. The ratio of branching fractions for B+ → K+μ+μ− and B+ → K+e+e− decays is measured to be RK = 1.08 +0.11 −0.09 (stat) +0.04 −0.04 (syst), which is consistent with the Standard Model prediction of unity. This constitutes the most precise test of lepton flavour universality using B+ → K+ℓ+ℓ− decays with dilepton invariant-mass-squared above the ψ(2S) mass, whilst being the first of its kind at a hadron collider.
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163771</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>Iterating Sine, Equivalence Classes of Variable Changes, and Groups with Few Conjugacy Classes</title>
<link>https://hdl.handle.net/1721.1/163770</link>
<description>Iterating Sine, Equivalence Classes of Variable Changes, and Groups with Few Conjugacy Classes
Etingof, Pavel
This is an expository paper about iterations of a smooth real function f on [0, ∞) such that f(0) = 0, f′(0) = 1, and f(x) &lt; x for x &gt; 0, i.e., the sequence defined by xn+1 = f(xn). This sequence has interesting asymptotics, whose study leads to the question of classifying conjugacy classes in the group of formal changes of variable y = f(x), i.e., formal series f(x) = x + a2x2 + a3x3 + ⋯ with real coefficients (under composition). The same classification applies over a finite field Fp for suitably truncated series f, defining a family of p-groups that have the smallest number of conjugacy classes for a given order, i.e., are the “most noncommutative” finite groups currently known. The paper should be accessible to undergraduates and at least partially to advanced high school students.
</description>
<pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163770</guid>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum information meets high-energy physics: input to the update of the European strategy for particle physics</title>
<link>https://hdl.handle.net/1721.1/163768</link>
<description>Quantum information meets high-energy physics: input to the update of the European strategy for particle physics
Afik, Yoav; Fabbri, Federica; Low, Matthew; Marzola, Luca; Aguilar-Saavedra, Juan A.; Altakach, Mohammad M.; Asbah, Nedaa A.; Bai, Yang; Banks, Hannah; Barr, Alan J.; Bernal, Alexander; Browder, Thomas E.; Caban, Paweł; Casas, J. A.; Cheng, Kun; Déliot, Frédéric; Demina, Regina; Di Domenico, Antonio; Eckstein, Michał; Fabbrichesi, Marco
Some of the most astonishing and prominent properties of Quantum Mechanics, such as entanglement and Bell nonlocality, have only been studied extensively in dedicated low-energy laboratory setups. The feasibility of these studies in the high-energy regime explored by particle colliders was only recently shown and has gathered the attention of the scientific community. For the range of particles and fundamental interactions involved, particle colliders provide a novel environment where quantum information theory can be probed, with energies exceeding by about 12 orders of magnitude those employed in dedicated laboratory setups. Furthermore, collider detectors have inherent advantages in performing certain quantum information measurements and allow for the reconstruction of the state of the system under consideration via quantum state tomography. Here, we elaborate on the potential, challenges, and goals of this innovative and rapidly evolving line of research and discuss its expected impact on both quantum information theory and high-energy physics.
</description>
<pubDate>Tue, 09 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163768</guid>
<dc:date>2025-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>On approximability of Satisfiable k-CSPs: I</title>
<link>https://hdl.handle.net/1721.1/163767</link>
<description>On approximability of Satisfiable k-CSPs: I
Bhangale, Amey; Khot, Subhash; Minzer, Dor
We consider the P-CSP problem for 3-ary predicates P on satisfiable instances. We show that under certain conditions on P and a (1, s) integrality gap instance of the P-CSP problem, it can be translated into a dictatorship vs. quasirandomness test with perfect completeness and soundness s + ϵ, for every constant ϵ &gt; 0. Compared to Raghavendra (in: Proceedings of the fortieth annual ACM symposium on theory of computing (STOC), pp 245–254, 2008), we do not lose perfect completeness. This is particularly interesting as this test implies new hardness results on satisfiable constraint satisfaction problems, assuming the Rich 2-to-1 Games Conjecture by Braverman et al. (in: Lee JR (ed) Volume 185 of Leibniz international proceedings in informatics (LIPIcs), 27:1–27:20. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, 2021b. https://drops.dagstuhl.de/opus/volltexte/2021/13566). Our result can be seen as the first step of a potentially long-term challenging program of characterizing the optimal inapproximability of every satisfiable k-ary CSP. At the heart of the reduction is our main analytical lemma for a class of 3-ary predicates, which is a generalization of a lemma by Mossel (Geom Funct Anal 19(6):1713–1756, 2010). The lemma, and a further generalization of it that we conjecture, may be of independent interest.
</description>
<pubDate>Tue, 22 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163767</guid>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of light-by-light scattering and the Breit-Wheeler process, and search for axion-like particles in ultraperipheral PbPb collisions at √sNN = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/163765</link>
<description>Measurement of light-by-light scattering and the Breit-Wheeler process, and search for axion-like particles in ultraperipheral PbPb collisions at √sNN = 5.02 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; The CMS collaboration
Measurements of light-by-light scattering (LbL, γγ → γγ) and the Breit-Wheeler process (BW, γγ → e+e−) are reported in ultraperipheral PbPb collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV. The data sample, corresponding to an integrated luminosity of 1.7 nb−1, was collected by the CMS experiment at the CERN LHC in 2018. Events with an exclusively produced γγ or e+e− pair with invariant masses mγγ,ee &gt; 5 GeV, along with other fiducial criteria, are selected. The measured BW fiducial production cross section, σfid(γγ → e+e−) = 263.5 ± 1.8(stat) ± 17.8(syst) μb, as well as the differential distributions for various kinematic observables, are in agreement with leading-order quantum electrodynamics predictions complemented with final-state photon radiation. The measured differential BW cross sections allow discrimination between different theoretical descriptions of the photon flux of the lead ion. In the LbL final state, 26 exclusive diphoton candidate events are observed compared with 12.0 ± 2.9 expected for the background. Combined with previous results, the observed significance of the LbL signal with respect to the background-only hypothesis is above five standard deviations. The measured fiducial LbL scattering cross section, σfid(γγ → γγ) = 107 ± 24(stat) ± 13(syst) nb, is in agreement with next-to-leading-order predictions. Limits on the production of axion-like particles coupled to photons are set over the mass range 5–100 GeV, including the most stringent limits to date in the range of 5–10 GeV.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163765</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The PLATO mission</title>
<link>https://hdl.handle.net/1721.1/163764</link>
<description>The PLATO mission
Rauer, Heike; Aerts, Conny; Cabrera, Juan; Deleuil, Magali; Erikson, Anders; Gizon, Laurent; Goupil, Mariejo; Heras, Ana; Walloschek, Thomas; Lorenzo-Alvarez, Jose; Marliani, Filippo; Martin-Garcia, César; Mas-Hesse, J. M.; O’Rourke, Laurence; Osborn, Hugh; Pagano, Isabella; Piotto, Giampaolo
PLATO (PLAnetary Transits and Oscillations of stars) is ESA’s M3 mission designed to detect and characterise extrasolar planets and perform asteroseismic monitoring of a large number of stars. PLATO will detect small planets (down to &lt;2 REarth) around bright stars (&lt;11 mag), including terrestrial planets in the habitable zone of solar-like stars. With the complement of radial velocity observations from the ground, planets will be characterised for their radius, mass, and age with high accuracy (5%, 10%, and 10%, respectively, for an Earth-Sun combination). PLATO will provide us with a large-scale catalogue of well-characterised small planets up to intermediate orbital periods, relevant for a meaningful comparison to planet formation theories and to better understand planet evolution. It will make possible comparative exoplanetology to place our Solar System planets in a broader context. In parallel, PLATO will study (host) stars using asteroseismology, allowing us to determine the stellar properties with high accuracy, substantially enhancing our knowledge of stellar structure and evolution. The payload instrument consists of 26 cameras with a 12 cm aperture each. For at least four years, the mission will perform high-precision photometric measurements. Here we review the science objectives, present PLATO’s target samples and fields, provide an overview of expected core science performance as well as a description of the instrument and the mission profile towards the end of the serial production of the flight cameras. PLATO is scheduled for launch at the end of 2026. This overview therefore provides a summary of the mission to the community in preparation for the upcoming operational phases.
</description>
<pubDate>Mon, 21 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163764</guid>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction</title>
<link>https://hdl.handle.net/1721.1/163763</link>
<description>DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction
Spangher, Lucas; Bonotto, Matteo; Arnold, William; Chayapathy, Dhruva; Gallingani, Tommaso; Spangher, Alexander; Cannarile, Francesco; Bigoni, Daniele; de Marchi, Eliana; Rea, Cristina
Plasma disruptions remain a major obstacle to sustained commercial operation of tokamak-based fusion devices. Although machine learning (ML) methods have shown promise for predicting disruptions, their performance and generalizability suffer from a lack of common benchmarks and comprehensive multi-device evaluations. To address this, we present DisruptionBench, a new benchmarking platform designed to standardize how ML-driven disruption prediction systems are trained and evaluated on multi-machine data. DisruptionBench spans three devices - Alcator C-Mod, DIII-D, and EAST - and includes tasks of varying difficulty: zero-shot, few-shot, and many-shot training regimes to assess each model’s ability to transfer learned representations to new or data-limited machines. We evaluate four state-of-the-art ML architectures. Two are re-implementations of notable prior work: a random forest (Cristina Rea in PPCF 60:084008, 2018) and the Hybrid Deep Learner (HDL) (Zhu in NC 61: 026607, 2020). We also propose two new approaches tailored for disruption prediction: a transformer-based model inspired by GPT-2, capable of learning long-range temporal dependencies through self-attention, and a Continuous Convolutional Neural Network (CCNN) that leverages continuous kernels to capture subtle variations in plasma signals. Across the nine benchmarking tasks, the CCNN demonstrates consistently strong performance and achieves the highest overall Area Under the ROC Curve (AUC) in intra-machine tests (up to 0.97 on C-Mod). Nevertheless, the GPT-2-based approach and HDL can outperform CCNN in specific transfer scenarios, particularly when the test machine is underrepresented in training data. We further analyze the significance of memory length in capturing precursor phenomena, providing evidence that longer context windows can boost predictive accuracy.
</description>
<pubDate>Sat, 24 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163763</guid>
<dc:date>2025-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>From Crucible Steel to the Battlefield: Investigating a Unique Early Medieval Arrowhead from Anatolia</title>
<link>https://hdl.handle.net/1721.1/163762</link>
<description>From Crucible Steel to the Battlefield: Investigating a Unique Early Medieval Arrowhead from Anatolia
Güder, Ümit; Yavaş, Alptekin; Demirel Gökalp, Zeliha; Taşan, Cemal C.; Raabe, Dierk
An arrowhead that was recovered during the excavations of the lower city church of Byzantine Stronghold Amorium in central Anatolia has been subjected to archaeometric analysis. Coins discovered in the same context date the arrowhead to the Middle Byzantine period (ninth–tenth century CE). It is a three-bladed arrowhead with a needle-type tang. Metallography (OM, SEM), SEM–EDS and EBSD techniques were used to examine samples taken from the head and the tang sections of the arrowhead. The arrowhead was determined to be made of manganese-alloyed crucible steel (0.4–1% Mn), shaped through warm forging cycles and selectively quenched and tempered to enhance its mechanical properties. The hardened head, likely designed for armor penetration, along with the potential watered surface pattern (firind), suggests that the arrowhead functioned both as a weapon and a symbol of prestige. Historical sources and archaeometallurgical evidence link the arrowhead to mounted Turkic archers in the Abbasid army during the 838 CE Sack of Amorium. This study of the arrowhead revealed it to be the earliest crucible steel find and the only example of such an object manufactured from crucible steel discovered in medieval Anatolian excavations.
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163762</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Methods for Magnetic and Mechanical Optimization of Superconducting Magnets for Fusion</title>
<link>https://hdl.handle.net/1721.1/163761</link>
<description>Bayesian Methods for Magnetic and Mechanical Optimization of Superconducting Magnets for Fusion
Packman, Sam; Riva, Nicolò; Rodriguez-Fernandez, Pablo
Stellarators as compact fusion power sources have incredible potential to help combat climate change. However, the task of making that a reality faces many challenges. This work uses Bayesian optimization (BO), a method well suited to black-box optimization, to address the complicated optimization problem inherent in stellarator design. In particular, it focuses on the mechanical optimization necessary to withstand the Lorentz forces generated by the magnetic coils. This work leverages surrogate models that are constructed to integrate as much information as possible from the available data points, significantly reducing the number of required model evaluations. It showcases the efficacy of Bayesian optimization as a versatile tool for enhancing both magneto-static and mechanical properties within stellarator winding packs. Employing a suite of Bayesian optimization algorithms, we iteratively refine 2D and 3D models of solenoid and stellarator configurations, and demonstrate a 15% increase in optimization speed using multi-fidelity Bayesian optimization. For fusion technology to progress from experimental stages to commercial viability, precise and efficient design methodologies will be essential. By emphasizing its modularity and transferability, our approach lays the foundation for streamlining optimization processes, facilitating the integration of fusion power into a sustainable energy infrastructure.
</description>
<pubDate>Fri, 14 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163761</guid>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Pseudo-Anosov representatives of stable Hamiltonian structures</title>
<link>https://hdl.handle.net/1721.1/163760</link>
<description>Pseudo-Anosov representatives of stable Hamiltonian structures
Zung, Jonathan
A pseudo-Anosov homeomorphism of a surface is a canonical representative of its mapping class. Conditional on the foundations of symplectic field theory, we explain that a transitive pseudo-Anosov flow is similarly a canonical representative of its stable Hamiltonian class. It follows that there are finitely many pseudo-Anosov flows admitting positive Birkhoff sections on any given rational homology 3-sphere. This result has a purely topological consequence: any 3-manifold can be obtained in at most finitely many ways as p/q surgery on a fibered hyperbolic knot in S³ for a slope p/q satisfying q ≥ 6, p ≠ 0, ±1, ±2 mod q. The proof of the main theorem generalizes an argument of Barthelmé–Bowden–Mann.
</description>
<pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163760</guid>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Hindered segmental dynamics in associative protein hydrogels studied by neutron spin-echo spectroscopy</title>
<link>https://hdl.handle.net/1721.1/163759</link>
<description>Hindered segmental dynamics in associative protein hydrogels studied by neutron spin-echo spectroscopy
Rao, Ameya; Carrick, Brian R; Yao, Helen; Olsen, Bradley D
Transient binding between associating macromolecules can cause qualitative changes to chain dynamics, including modes of conformational relaxation and diffusion, through tethering effects imparted by long-range connectivity. In this work, the role of binding on short-time segmental dynamics in associative polymer gels is investigated by neutron spin-echo (NSE) measurements on a class of model artificial coiled-coil proteins with a systematically varied architecture, probing timescales of 0.1–130 ns, and length scales close to the molecular radius of gyration. The results illustrate effects of transient cross-linking on chain dynamics on different timescales, manifested in changes in segmental relaxation behavior with variations in strand length, chain concentration, and sticker distribution (endblock- vs midblock-functionalized). In all gels, a short-time cooperative diffusion mode is seen over all wave vectors, analogous to a semidilute solution, with no transitions seen at any known structural length scale. However, the diffusion coefficients are found to decrease with increasing junction density across all gels, with the strand length and number of stickers per chain in each gel appearing to play a relatively minor role. The slowing of cooperative diffusion with junction density contrasts with classical predictions of a greater restoring force for fluctuation dissipation due to the increased elasticity, suggesting additional effects of the coiled-coil junctions such as an enhancement in local viscosity that slows dynamics. Notably, the relaxation rates for all gels can be rescaled by the interjunction spacing inferred from small-angle neutron scattering, where they collapse onto a master curve suggestive of self-similar dynamics even in networks with different strand lengths and chain architectures. 
On long timescales (but shorter than the junction exchange time), a slowing of network relaxation is observed, resulting in a nondecaying plateau in the spin-echo amplitude attributed to a freezing of chain dynamics due to tethering. A characteristic length scale corresponding to the extent of dynamic fluctuations is estimated for each gel, which appears to be smaller than the interjunction spacing but similar to the correlation blob size of the overlapping strands. The results indicate an important role of transient binding on molecular-scale dynamics in associative polymer gels, even on timescales shorter than the junction exchange time, in addition to its effects on long-range self-diffusion previously observed.
</description>
<pubDate>Wed, 26 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163759</guid>
<dc:date>2023-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-economic assessment of co-production of edible bioplastic and food supplements from Spirulina</title>
<link>https://hdl.handle.net/1721.1/163758</link>
<description>Techno-economic assessment of co-production of edible bioplastic and food supplements from Spirulina
Chalermthai, Bushra; Charoensuppanimit, Pongtorn; Nootong, Kasidit; Olsen, Bradley D; Assabumrungrat, Suttichai
Large amounts of plastic waste harming the environment have raised concerns worldwide about finding alternatives to non-biodegradable plastics. Microalgae have been identified as a potential source for bioplastic production, beyond their more common applications in the pharmaceutical and nutraceutical industries. In this study, the objective was to techno-economically evaluate the large-scale co-production of Spirulina powder as a food supplement and edible bioplastic for food packaging. The scale of production was large enough to satisfy 1% of local (Thailand) plastic demand (i.e., approx. 1200 MT y⁻¹) and 1% of the global Spirulina demand (approx. 1000 MT y⁻¹) as food supplements. Results showed that the co-production of Spirulina powder and bioplastic is an attractive venture, with a payback time (PBT) as low as 2.6 y and an ROI as high as 38.5%. This was because the revenues generated were as high as US$ 55.6 million y⁻¹, despite high capital (US$ 55.7 million) and operating (US$ 34.9 million y⁻¹) costs. Sensitivity analysis showed differences in profitability based on variations of the major parameters in the study; the split ratio of biomass used for food supplement versus bioplastic production and the bioplastic's selling price were found to be the most sensitive.
</description>
<pubDate>Thu, 22 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163758</guid>
<dc:date>2023-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Counterfactual Worlds</title>
<link>https://hdl.handle.net/1721.1/163757</link>
<description>Counterfactual Worlds
Brast-McKie, Benjamin
This paper extends Kit Fine’s (2012a, 2012b, 2017a, 2017b, 2017c) truthmaker framework to provide a novel task semantics for tensed counterfactual conditionals. Instead of taking possible worlds to be primitive elements in a model, possible worlds will be defined in terms of states, parthood, tasks, and times where the task relation encodes the possible transitions between states. Rather than invoking primitive relations for similarity or imposition, possible worlds will be compared at a time independent of that time’s past and future where the comparison will be carried out in modal and mereological terms. After reviewing motivations for this approach, I will provide the hyperintensional semantics for counterfactuals that is implemented in the model-checker software along with a unified logic for counterfactual, modal, and tense operators. I will then extend the language to include further tense operators in order to analyze forwards, backwards, and backtracking counterfactuals.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163757</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Tight mixed-integer optimization formulations for prescriptive trees</title>
<link>https://hdl.handle.net/1721.1/163756</link>
<description>Tight mixed-integer optimization formulations for prescriptive trees
Biggs, Max; Perakis, Georgia
We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This can be used in many practical applications where a decision tree or a tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose novel tight mixed-integer optimization formulations for this problem. Existing formulations can be shown to have linear relaxations with fractional extreme points, even for the simple case of modeling a single decision tree, or to require a very large number of constraints, which leads to slow solve times in practice. A formulation we propose, based on a projected union of polyhedra approach, is ideal (i.e., the extreme points of the linear relaxation are integer when required) for a single decision tree. Although the formulation is generally not ideal for tree ensembles, it generally has fewer extreme points, leading to faster solve times. We also study formulations with a binary representation of the feature vector and present multiple approaches to tighten existing formulations. We show that fractional extreme points are removed when multiple splits are on the same feature. At an extreme, we prove that this results in an ideal formulation for a tree ensemble modeling a one-dimensional feature vector. Building on this result, we also show that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional.
</description>
<pubDate>Thu, 29 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163756</guid>
<dc:date>2025-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive optimization for prediction with missing data</title>
<link>https://hdl.handle.net/1721.1/163755</link>
<description>Adaptive optimization for prediction with missing data
Bertsimas, Dimitris; Delarue, Arthur; Pauphilet, Jean
When training predictive models on data with missing entries, the most widely used and versatile approach is a pipeline technique where we first impute missing entries and then compute predictions. In this paper, we view prediction with missing data as a two-stage adaptive optimization problem and propose a new class of models, adaptive linear regression models, where the regression coefficients adapt to the set of observed features. We show that some adaptive linear regression models are equivalent to learning an imputation rule and a downstream linear regression model simultaneously instead of sequentially. We leverage this joint-impute-then-regress interpretation to generalize our framework to non-linear models. In settings where data is strongly not missing at random, our methods achieve a 2–10% improvement in out-of-sample accuracy.
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163755</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Biodiversity Systems via Digital Kinships: Insights from Community Data Processes and Creative Practice</title>
<link>https://hdl.handle.net/1721.1/163754</link>
<description>Designing Biodiversity Systems via Digital Kinships: Insights from Community Data Processes and Creative Practice
Westerlaken, Michelle
This study details how digital biodiversity data is used and gains meaning in local restoration projects, how these experiences contrast with large-scale innovation patterns, and what new design recommendations emerge from these insights. Digital innovations in biodiversity technologies are increasingly complex, fast-paced, and driven by technological capacities where data generation rather than biodiversity restoration risks becoming the primary goal. Focusing on a biodiversity restoration project with a living lab community in the Netherlands, this participatory research critically examines how plans for emerging technologies, such as biodiversity simulations and digital twins, contrast with local user relations to biodiversity data. Building on qualitative insights from six months of fieldwork, a digital and physical data portal was designed to simulate ongoing technoscientific innovation and make its complex effects experientially available to users. Findings are brought directly into conversation with emerging technical features through four distinct themes, with the aim of sharing user insights and producing design recommendations for: environmental storytelling, prediction and future making, interactive dynamics, and simulation aesthetics. These themes articulate the community's preferences for digital environments that support their nuanced, complex relationships with local biodiversity, suggesting a shift from top-down technocentric approaches to more community-driven and restoration-focused models. Based on this study, design recommendations are articulated for each of these four themes, contributing detailed empirical and practice-oriented insights that propose how new biodiversity technologies can resonate more effectively with local biodiversity restoration efforts.
</description>
<pubDate>Mon, 16 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163754</guid>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry</title>
<link>https://hdl.handle.net/1721.1/163753</link>
<description>Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry
Scheffler, Robin W.
In 1975, a meeting on the potential hazards of recently invented recombinant DNA techniques was held at the Asilomar Conference Center in California. This meeting gave rise to a global debate over the safety and regulation of recombinant DNA (rDNA). In this paper, I use the historical development of recombinant DNA regulation in the Greater Boston Area—now home to the densest cluster of the biotechnology industry in the world—to provide a different interpretation of the legacies of Asilomar. While most accounts of Asilomar have considered its brief and dramatic impact on molecular biology on a national scale, an equally meaningful and overlooked impact is to be found in the development of regulations around recombinant DNA at the local level. Rather than hindering research, these events enabled the operations of the modern commercial biotechnology industry, which was founded on the promise of recombinant DNA. This approach highlights a different legacy of Asilomar, one which did not end with expert consensus that recombinant DNA was safe. Instead, attending to the material, infrastructural aspects of working with recombinant DNA in commercial settings reveals a wide range of communities involved in determining the social impacts of Asilomar—communities asking a broader set of questions about recombinant DNA than those originally posed in 1975.
</description>
<pubDate>Fri, 07 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163753</guid>
<dc:date>2025-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>A unified semantics for distributive and non-distributive universal quantifiers across languages</title>
<link>https://hdl.handle.net/1721.1/163752</link>
<description>A unified semantics for distributive and non-distributive universal quantifiers across languages
Haslinger, Nina; Hien, Alain N.; Rosina, Emil E.; Schmitt, Viola; Wurm, Valerie
Universal quantifiers differ in whether they are restricted to distributive interpretations, like English every, or permit non-distributive interpretations, like English all. This interpretational difference is traditionally captured by positing two unrelated lexical entries for distributive and non-distributive quantification. But this lexical approach does not explain why distributivity correlates with number: cross-linguistically, distributive universal quantifiers typically take singular complements, while non-distributive quantifiers consistently take plural complements. We derive this correlation by proposing a single lexical meaning for the universal quantifier, which derives a non-distributive interpretation if the restrictor predicate is closed under sum, but a distributive interpretation if it is quantized. Support comes from languages in which the same lexical item expresses distributive or non-distributive quantification depending on the number of the complement. For languages like English that have different expressions for non-distributive and distributive quantification, we propose that the distributive forms contain an additional morphosyntactic element that is semantically restricted to combine with a predicate of atomic individuals. This is motivated by the fact that in several languages, the distributive form is structurally more complex than the non-distributive form and sometimes even contains it transparently. We further show that in such languages, there are empirical advantages to taking the choice between distributive and non-distributive quantifier forms to be driven by semantic properties of the restrictor predicate, rather than morphosyntactic number.
</description>
<pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163752</guid>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</item>
<item>
<title>Passivization and composite A/Ā-movement in the Mandarin BEI-construction</title>
<link>https://hdl.handle.net/1721.1/163751</link>
<description>Passivization and composite A/Ā-movement in the Mandarin BEI-construction
Chen, Fulang
The bei-construction in Mandarin is a well-studied construction known for exhibiting both passive-like properties and tough-movement-like properties (see e.g., Feng 1995, 2012; Ting 1995a, 1998; Huang 1999; Tang 2001; Huang et al. 2009; Bruening and Tran 2015; a.o.). In this paper, I argue for a novel analysis of the bei-construction in Mandarin as a passive construction where the passive head/bei hosts a composite probe [ϕ+Ā], which triggers composite A/Ā-movement, in the sense of Van Urk (2015). The subject in the bei-construction is derived via (successive-cyclic) composite A/Ā-movement, followed by a terminating step of A-movement, similar to Longenbaugh’s (2017) analysis of English tough-movement. Under the proposed analysis, the mixed A/Ā-properties associated with the bei-construction are direct consequences of composite A/Ā-movement (following Van Urk 2015; Longenbaugh 2017). The proposed analysis of the bei-construction accounts for two restrictions on long-distance dependencies in the bei-construction – a requirement that no overt, case-less NPs should intervene between the subject of bei and the gap in agent-less bei-constructions, and a subject/object contrast with respect to the possibility of crossing a finite clause boundary to become the subject of bei.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163751</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>From concept to manufacturing: evaluating vision-language models for engineering design</title>
<link>https://hdl.handle.net/1721.1/163750</link>
<description>From concept to manufacturing: evaluating vision-language models for engineering design
Picard, Cyril; Edwards, Kristen M.; Doris, Anna C.; Man, Brandon; Giannone, Giorgio; Alam, Md F.; Ahmed, Faez
Engineering design is undergoing a transformative shift with the advent of AI, marking a new era in how we approach product, system, and service planning. Large language models have demonstrated impressive capabilities in enabling this shift. Yet, with text as their only input modality, they cannot leverage the large body of visual artifacts that engineers have used for centuries and are accustomed to. This gap is addressed with the release of multimodal vision-language models (VLMs), such as GPT-4V, enabling AI to impact many more types of tasks. Our work presents a comprehensive evaluation of VLMs across a spectrum of engineering design tasks, categorized into four main areas: Conceptual Design, System-Level and Detailed Design, Manufacturing and Inspection, and Engineering Education Tasks. Specifically in this paper, we assess the capabilities of two VLMs, GPT-4V and LLaVA 1.6 34B, in design tasks such as sketch similarity analysis, CAD generation, topology optimization, manufacturability assessment, and engineering textbook problems. Through this structured evaluation, we not only explore VLMs’ proficiency in handling complex design challenges but also identify their limitations in complex engineering design applications. Our research establishes a foundation for future assessments of vision language models. It also contributes a set of benchmark testing datasets, with more than 1000 queries, for ongoing advancements and applications in this field.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163750</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of AI-assisted design of low-carbon cost-effective concrete toward carbon neutrality</title>
<link>https://hdl.handle.net/1721.1/163749</link>
<description>Review of AI-assisted design of low-carbon cost-effective concrete toward carbon neutrality
Mahjoubi, Soroush; Barhemat, Rojyar; Meng, Weina; Bao, Yi
Decarbonizing concrete production is a critical step toward achieving carbon neutrality by 2050. This paper highlights the advancements in artificial intelligence-assisted design of low-carbon cost-effective concrete, focusing on integrating machine learning-based property prediction with multi-objective optimization. Data collection and processing techniques, such as automatic data extraction, artificial data generation, and anomaly detection, are first discussed to address the importance of dataset quality. Strategies that capture physicochemical information of ingredients, including by-product supplementary cementitious materials and recycled aggregates, are then examined to enhance model generalizability. Various machine learning models—from individual regression approaches to heterogeneous ensemble methods—are compared for their predictive accuracy and robustness. Methods for hyperparameter tuning, model evaluation, and interpretation to ensure reliable and interpretable predictions are reviewed. Design optimization approaches are then highlighted, ranging from grid/random searches to more sophisticated gradient-based and metaheuristic algorithms, aimed at minimizing carbon footprint, embodied energy, and cost. Future research avenues encompass (1) application-specific design frameworks that integrate critical objectives—mechanical performance, durability, fresh-state behavior, and time-dependent properties such as creep and shrinkage—tailored to specific structural and environmental requirements; (2) holistic design optimization that simultaneously refines mixture design and structural parameters; and (3) probabilistic approaches to systematically manage uncertainties in materials, structural performance, and loading conditions.
</description>
<pubDate>Sat, 03 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163749</guid>
<dc:date>2025-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>Making the eyes of the state: algorithmic alienation and mundane creativity in Peruvian street-level bureaucrats</title>
<link>https://hdl.handle.net/1721.1/163748</link>
<description>Making the eyes of the state: algorithmic alienation and mundane creativity in Peruvian street-level bureaucrats
Cerna-Aragon, Diego; García, Luis
The production of state legibility has been a prolific subject of study. However, most works have not paid much attention to the quotidian labor of the street-level bureaucrats that implement legibility projects at a local level. The aim of this article is to explore the implementation of a social registry system at a local level to understand how frontline workers make the population legible. Instead of taking legibility as an object of evaluation or critique, we pay close attention to the inner workings of bureaucracies at the instances in which the sociomaterial conditions of the population are translated into data. Drawing from qualitative research in Peruvian municipalities, we describe the operations of an algorithmic system that classifies the population for the distribution of welfare. We observed how under-resourced bureaucrats were constrained by regulations and technologies of the system. Paradoxically, to make the system work for their local realities, the bureaucrats had to bend the rules and find workarounds. From this perspective, the making of legibility looks less like a top-down exercise of bureaucratic compliance or a story of domination over the population. Instead, we find actors attempting to maintain a delicate balance between inadequate legal rules, scarce resources, and sociopolitical demands.
</description>
<pubDate>Sat, 15 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163748</guid>
<dc:date>2025-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>Waveform modelling for the Laser Interferometer Space Antenna</title>
<link>https://hdl.handle.net/1721.1/163747</link>
<description>Waveform modelling for the Laser Interferometer Space Antenna
Afshordi, Niayesh; Akçay, Sarp; Seoane, Pau A.; Antonelli, Andrea; Aurrekoetxea, Josu C.; Barack, Leor; Barausse, Enrico; Benkel, Robert; Bernard, Laura; Bernuzzi, Sebastiano; Berti, Emanuele; Bonetti, Matteo; Bonga, Béatrice
LISA, the Laser Interferometer Space Antenna, will usher in a new era in gravitational-wave astronomy. As the first anticipated space-based gravitational-wave detector, it will expand our view to the millihertz gravitational-wave sky, where a spectacular variety of interesting new sources abound: from millions of ultra-compact binaries in our Galaxy, to mergers of massive black holes at cosmological distances; from the early inspirals of stellar-mass black holes that will ultimately venture into the ground-based detectors’ view to the death spiral of compact objects into massive black holes, and many sources in between. Central to realising LISA’s discovery potential are waveform models, the theoretical and phenomenological predictions of the pattern of gravitational waves that these sources emit. This White Paper is presented on behalf of the Waveform Working Group for the LISA Consortium. It provides a review of the current state of waveform models for LISA sources, and describes the significant challenges that must yet be overcome.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163747</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic discovery of subcellular RNA patterns in the gut epithelium</title>
<link>https://hdl.handle.net/1721.1/163746</link>
<description>Systematic discovery of subcellular RNA patterns in the gut epithelium
Lee, Minkyoung; Acar, Ilhan E.; Eletto, Davide; Adivarahan, Srivathsan; Mhamedi, Farah; Handler, Kristina; Lee, Jihyun; Vinzoni, Elena G.; Aguilar, Gustavo
Background: Subcellular RNA localization is crucial for the spatio-temporal control of protein synthesis and underlies key processes during development, homeostasis, and disease. In epithelial cells, RNA can localize asymmetrically along the apico-basal axis. Yet, the localization of most transcripts as well as the diversity of patterns that they adopt remains unexplored. Results: Here, we use APEX-seq for proximity labeling and MERFISH for spatial transcriptomics to map subcellular transcript localization in intestinal organoids and tissue from adult mice. Many transcripts present localization bias, often localizing in granular structures. We uncover intrinsic and environmental factors that influence the formation of these patterns. Additionally, we identify translation-dependent and -independent localization patterns and pinpoint the role of 3′ untranslated regions and RNA-binding proteins. Conclusions: This subcellular RNA atlas presents a detailed resource for understanding intestinal physiology.
</description>
<pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163746</guid>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying delayed human response to external risks: an econometric analysis of mobility change during a pandemic</title>
<link>https://hdl.handle.net/1721.1/163745</link>
<description>Identifying delayed human response to external risks: an econometric analysis of mobility change during a pandemic
Zhang, Gaofei; Osi, Ann; Ghaffarzadegan, Navid; Rahmandad, Hazhir; Xu, Ran
Background: Human behavioral responses to changes in risks are often delayed. Methods for estimating these delayed responses either rely on rigid assumptions about the delay distribution (e.g., Erlang distribution), producing a poor fit, or yield period-specific estimates (e.g., estimates from the Autoregressive Distributed Lag (ARDL) model) that are difficult to integrate into simulation models. We propose a hybrid ARDL–Erlang approach that yields an interpretable summary of behavioral responses suitable for incorporation into simulation models. Methods: We apply the ARDL–Erlang approach to estimate the effect of COVID-19 deaths on mobility across US counties from October 2020 to July 2021. A standard panel autoregressive distributed lag (ARDL) model first estimates the effect of past deaths and past mobility on current mobility. The ARDL model is then transformed into an Infinite Distributed Lag (IDL) model consisting of only past deaths. The coefficients of the past deaths are aggregated into an overall effect and fit to an Erlang distribution, summarized by average delay length and shape parameter. Results: Our results show that on the national level, a one-standard-deviation permanent increase in weekly deaths per 100,000 population (log-transformed) is associated with a 0.46-standard-deviation decrease in human mobility in the long run, where the delay distribution follows a first-order Erlang distribution, and the average delay length is about 3.2 weeks. However, there is much heterogeneity across states, with first- to third-order Erlang delays and 2 to 18 weeks of average delay providing a theoretically cogent summary of how mobility followed changes in deaths during the first year and a half of the pandemic. Conclusion: This study provides a novel approach to estimating delayed human responses to health risks using a hybrid ARDL–Erlang model.
Our findings highlight significant variability in the impact and timing of responses across states, underscoring the need for tailored public health policies. This study can also serve as a guideline and example for identifying delayed human behavior in other settings.
</description>
<pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163745</guid>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating single-cell RNA-seq datasets with substantial batch effects</title>
<link>https://hdl.handle.net/1721.1/163744</link>
<description>Integrating single-cell RNA-seq datasets with substantial batch effects
Hrovatin, Karin; Moinfar, Amir Ali; Zappia, Luke; Parikh, Shrey; Lapuerta, Alejandro T.; Lengerich, Ben; Kellis, Manolis; Theis, Fabian J.
Integration of single-cell RNA-sequencing (scRNA-seq) datasets is standard in scRNA-seq analysis. Nevertheless, current computational methods struggle to harmonize datasets across systems such as species, organoids and primary tissue, or different scRNA-seq protocols, including single-cell and single-nuclei. Conditional variational autoencoders (cVAE) are a popular integration method, however, existing strategies for stronger batch correction have limitations. Increasing the Kullback–Leibler divergence regularization does not improve integration and adversarial learning removes biological signals. Here, we propose sysVI, a cVAE-based method employing VampPrior and cycle-consistency constraints. We show that sysVI integrates across systems and improves biological signals for downstream interpretation of cell states and conditions.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163744</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>High-throughput experimentation for discovery of biodegradable polyesters</title>
<link>https://hdl.handle.net/1721.1/163743</link>
<description>High-throughput experimentation for discovery of biodegradable polyesters
Fransen, Katharina A; Av-Ron, Sarah HM; Buchanan, Tess R; Walsh, Dylan J; Rota, Dechen T; Van Note, Lana; Olsen, Bradley D
The consistent rise of plastic pollution has stimulated interest in the development of biodegradable plastics. However, the study of polymer biodegradation has historically been limited to a small number of polymers due to costly and slow standard methods for measuring degradation, slowing new material innovation. High-throughput polymer synthesis and a high-throughput polymer biodegradation method are developed and applied to generate a biodegradation dataset for 642 chemically distinct polyesters and polycarbonates. The biodegradation assay was based on the clear-zone technique, using automation to optically observe the degradation of suspended polymer particles under the action of a single Pseudomonas lemoignei bacterial colony. Biodegradability was found to depend strongly on aliphatic repeat unit length, with chains of fewer than 15 carbons and short side chains improving biodegradability. Aromatic backbone groups were generally detrimental to biodegradability; however, ortho- and para-substituted benzene rings in the backbone were more likely to be degradable than meta-substituted rings. Additionally, backbone ether groups improved biodegradability. While other heteroatoms did not show a clear improvement in biodegradability, they did demonstrate increases in biodegradation rates. Machine learning (ML) models were leveraged to predict biodegradability on this large dataset with accuracies over 82% using only chemical structure descriptors.
</description>
<pubDate>Tue, 30 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163743</guid>
<dc:date>2023-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Antigen-adjuvant interactions, stability, and immunogenicity profiles of a SARS-CoV-2 receptor-binding domain (RBD) antigen formulated with aluminum salt and CpG adjuvants</title>
<link>https://hdl.handle.net/1721.1/163742</link>
<description>Antigen-adjuvant interactions, stability, and immunogenicity profiles of a SARS-CoV-2 receptor-binding domain (RBD) antigen formulated with aluminum salt and CpG adjuvants
Bajoria, Sakshi; Kaur, Kawaljit; Kumru, Ozan S; Van Slyke, Greta; Doering, Jennifer; Novak, Hayley; Rodriguez Aponte, Sergio A; Dalvie, Neil C; Naranjo, Christopher A; Johnston, Ryan S; Silverman, Judith Maxwell; Kleanthous, Harry; Love, J Christopher; Mantis, Nicholas J; Joshi, Sangeeta B; Volkin, David B
Low-cost, refrigerator-stable COVID-19 vaccines will facilitate global access and improve vaccine coverage in low- and middle-income countries. To this end, subunit-based approaches targeting the receptor-binding domain (RBD) of SARS-CoV-2 Spike protein remain attractive. Antibodies against RBD neutralize SARS-CoV-2 by blocking viral attachment to the host cell receptor, ACE2. Here, a yeast-produced recombinant RBD antigen (RBD-L452K-F490W or RBD-J) was formulated with various combinations of aluminum-salt (Alhydrogel®, AH; AdjuPhos®, AP) and CpG 1018 adjuvants. We assessed the effect of antigen-adjuvant interactions on the stability and mouse immunogenicity of various RBD-J preparations. While RBD-J was 50% adsorbed to AH and &lt;15% to AP, addition of CpG resulted in complete AH binding, yet no improvement in AP adsorption. ACE2 competition ELISA analyses of formulated RBD-J stored at varying temperatures (4, 25, 37°C) revealed that RBD-J was destabilized by AH, an effect exacerbated by CpG. DSC studies demonstrated that aluminum-salt and CpG adjuvants decrease the conformational stability of RBD-J and suggest a direct CpG-RBD-J interaction. Although AH+CpG-adjuvanted RBD-J was the least stable in vitro, the formulation was most potent at eliciting SARS-CoV-2 pseudovirus neutralizing antibodies in mice. In contrast, RBD-J formulated with AP+CpG showed minimal antigen-adjuvant interactions, a better stability profile, but suboptimal immune responses. Interestingly, the loss of in vivo potency associated with heat-stressed RBD-J formulated with AH+CpG after one dose was abrogated by a booster. Our findings highlight the importance of elucidating the key interrelationships between antigen-adjuvant interactions, storage stability, and in vivo performance to enable successful formulation development of stable and efficacious subunit vaccines.
</description>
<pubDate>Mon, 06 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163742</guid>
<dc:date>2022-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic Collagen Hydrogels through Symmetric Self‐Assembly of Small Peptides</title>
<link>https://hdl.handle.net/1721.1/163741</link>
<description>Synthetic Collagen Hydrogels through Symmetric Self‐Assembly of Small Peptides
Tanrikulu, I Caglar; Dang, Lianna; Nelavelli, Lekha; Ellison, Aubrey J; Olsen, Bradley D; Jin, Song; Raines, Ronald T
Animal‐sourced hydrogels, such as collagen, are widely used as extracellular‐matrix (ECM) mimics in tissue engineering but are plagued with problems of reproducibility, immunogenicity, and contamination. Synthetic, chemically defined hydrogels can avoid such issues. Despite the abundance of collagen in the ECM, synthetic collagen hydrogels are extremely rare due to design challenges brought on by the triple‐helical structure of collagen. Sticky‐ended symmetric self‐assembly (SESSA) overcomes these challenges by maximizing interactions between the strands of the triple helix, allowing the assembly of collagen‐mimetic peptides (CMPs) into robust synthetic collagen nanofibers. This optimization, however, also minimizes interfiber contacts. In this work, symmetric association states for the SESSA of short CMPs to probe their increased propensity for interfiber association are modelled. It is found that 33‐residue CMPs not only self‐assemble through sticky ends, but also form hydrogels. These self‐assemblies behave with remarkable consistency across multiple scales and present a clear link between their triple‐helical architecture and the properties of their hydrogels. The results show that SESSA is an effective and robust design methodology that enables the rational design of synthetic collagen hydrogels.
</description>
<pubDate>Thu, 23 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163741</guid>
<dc:date>2023-11-23T00:00:00Z</dc:date>
</item>
<item>
<title>The BostonWalks study: a longitudinal travel survey using smartphone tracking</title>
<link>https://hdl.handle.net/1721.1/163740</link>
<description>The BostonWalks study: a longitudinal travel survey using smartphone tracking
Meister, Adrian; Bashan, Nail F.; Basu, Rounaq; Shen, Xianglu; Wang, Ryan Q.; Sevtsuk, Andres
This paper introduces the BostonWalks (BWS) study, detailing its methodology, the resulting dataset, and an initial analysis. The BWS study is a smartphone-based GNSS-tracking study in the Boston metropolitan area, designed to generate an up-to-date dataset on travel behavior, with a particular focus on non-auto travel behavior and its representativeness across all population segments. The dataset encompasses approximately 155,000 trips from 990 participants, making it one of the most extensive datasets of its kind in North America. It includes both raw trajectory data and comprehensive socio-demographic information about participants. The paper outlines the survey methodology, including the technical infrastructure, recruitment strategy, and data processing techniques. A comparison of the socio-demographic and travel behavior characteristics of BWS participants with those from the National Household Travel Survey is provided. Lastly, the paper highlights the richness of the data through correlation and cluster analysis.
</description>
<pubDate>Sat, 28 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163740</guid>
<dc:date>2025-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Diverging global incidence trends of early-onset cancers: comparisons with incidence trends of later-onset cancers and mortality trends of early-onset cancers</title>
<link>https://hdl.handle.net/1721.1/163739</link>
<description>Diverging global incidence trends of early-onset cancers: comparisons with incidence trends of later-onset cancers and mortality trends of early-onset cancers
Terashima, Miyu; Nakayama, Kota; Shirai, Sora; Ugai, Satoko; Lee, Hwa-Young; Matsui, Haruna; Mizuno, Hiroki; Tanaka, Shiori; Song, Minkyo; Sasamoto, Naoko; Kawachi, Ichiro; Giovannucci, Edward L.; Ugai, Tomotaka
Background The global increase in the incidence of early-onset cancers (defined as cancers diagnosed at 20–49 years old) is a serious public health problem. We investigated 1) whether the incidence trend of early-onset cancers differs from that of later-onset cancers and 2) whether both the incidence and mortality of early-onset cancers have increased concurrently. Methods We utilized age-standardized incidence and mortality rates for early-onset and later-onset cancers diagnosed between 2000 and 2017 from the Cancer Incidence in Five Continents and World Health Organization (WHO) mortality databases. The national obesity prevalence among adults aged 20–49 years was obtained from the National Clinical Database. Using joinpoint regression models, we calculated average annual percentage changes (AAPCs) for cancer incidence and mortality by cancer types and countries. We additionally conducted human development index (HDI)-stratified analyses and assessed the correlation between the obesity prevalence in younger populations and early-onset cancer incidence by country. To investigate the more recent trend of early-onset cancer mortality, we extended our mortality analysis after 2017 for cancer types and countries with statistically significant positive AAPCs in both incidence and mortality of early-onset cancers between 2000 and 2017. Results Our analysis showed that 10 early-onset cancer types (thyroid cancer, breast cancer, melanoma, uterine cancer, colorectal cancer, kidney cancer, cervical cancer, pancreatic cancer, multiple myeloma, Hodgkin lymphoma) in females and 7 early-onset cancer types (thyroid cancer, kidney cancer, testis cancer, prostate cancer, colorectal cancer, melanoma, leukemia) in males had statistically significant positive AAPCs in at least 10 countries. 
Among these, the following early-onset cancer types had significantly higher AAPCs than later-onset cancer types in females: colorectal cancer (6 countries; AAPC range: 1.8–3.8%), cervical cancer (6 countries; AAPC range: 1.2–3.3%), pancreatic cancer (5 countries; AAPC range: 2.3–13.0%), and multiple myeloma (5 countries; AAPC range: 3.1–9.8%); in males: prostate cancer (12 countries; AAPC range: 3.9–18.4%), colorectal cancer (8 countries; AAPC range: 1.8–3.2%), and kidney cancer (6 countries; AAPC range: 2.0–6.0%). We observed statistically significant positive AAPCs in both the incidence and mortality of the following early-onset cancer types: uterine cancer (5 countries) and colorectal cancer (3 countries in females and 5 countries in males). The steeper increases in early-onset cancers compared with later-onset cancers were mainly observed in the very high-HDI country group; for example, early-onset colorectal cancer (AAPC = 2.4%, 95% CI 2.1–2.6 in females; AAPC = 2.0%, 95% CI 1.7–2.4 in males) versus later-onset colorectal cancer (AAPC = −0.1%, 95% CI −0.2 to 0 in females; AAPC = −0.2%, 95% CI −0.3 to 0 in males). We observed strong positive correlations between the increasing obesity prevalence and the rising incidence of early-onset obesity-related cancers in several countries, including Australia (7 cancer types), United Kingdom (7 cancer types), Canada (7 cancer types), Republic of Korea (7 cancer types), and USA (6 cancer types) in females and United Kingdom (7 cancer types), Canada (6 cancer types), Australia (5 cancer types), Sweden (5 cancer types), and Republic of Korea (4 cancer types) in males. Although we did not observe an apparent spike after 2017 in many countries, we observed continued increases in the mortality of certain cancer types, such as uterine cancer (Japan, Republic of Korea, United Kingdom, USA, and Ecuador) in females and colorectal cancer (Argentina, Canada, United Kingdom, and USA) in males.
Conclusions The increase in many early-onset cancer types was significantly higher than that of later-onset cancers, and the incidence and mortality of certain early-onset cancer types (such as colorectal cancer) increased simultaneously. Our study highlights global differences in cancer incidence and mortality trends of early-onset and later-onset cancers.
</description>
<pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163739</guid>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable MRI normative modelling to detect age-inappropriate neurodegeneration</title>
<link>https://hdl.handle.net/1721.1/163738</link>
<description>Generalizable MRI normative modelling to detect age-inappropriate neurodegeneration
Parker, Thomas D.; Bethlehem, Richard A. I.; Seidlitz, Jakob; White, Simon R.; David, Michael C. B.; Kolanko, Magdalena A.; Bernstock, Joshua D.; Dorfschmidt, Lena; Bourke, Niall; Gailly de Taurines, Anastasia; Hain, Jessica A.; Del Giovane, Martina; Graham, Neil S. N.
Background Determining whether MRI brain scans demonstrate atrophy that is beyond “normal for age” is challenging. Automated measurements of structural metrics in individual brain regions have shown promise as biomarkers of neurodegeneration, yet widely available reference standards that aid interpretation at the individual level are lacking. Normative modelling, enabling standardized “brain charts”, represents a significant step in addressing this challenge by generating individualized age- and sex-adjusted centile scores derived from large, aggregated datasets for MRI-derived quantitative metrics. Methods Using normative data from 56,173 participants across the life course, we have developed regional cortical thickness and amygdala/hippocampal volume brain charts (adjusted for total intracranial volume) that can be applied at the individual level. At the group level, we investigate whether regional centile scores relate to cognitive performance (mini-mental state examination) and discriminate individuals with neuropathological evidence of Alzheimer’s disease (n = 351) from propensity-matched controls from the National Alzheimer's Coordinating Center (NACC) dataset. In addition, we explored the relationships between disease stage, cognition, regional tau deposition and regional centile scores in amyloid-β-PET-positive individuals with Alzheimer’s disease dementia (n = 39) and mild cognitive impairment (n = 71) from the Alzheimer’s Disease Neuroimaging Initiative-3 (ADNI-3). We then extended this approach to phenotypes of frontotemporal lobar degeneration using the Neuroimaging in Frontotemporal Dementia dataset (n = 113). Results We demonstrate BrainChart’s application to illustrative individual cases.
At the group level, we show that in Alzheimer’s disease, regional centile scores from brain charting predicted cognitive performance, temporal lobe tau PET tracer uptake and discriminated disease groups from propensity matched cognitively normal controls in independent cohorts. Distinct patterns of age-inappropriate cortical atrophy were also evident in different clinical phenotypes of frontotemporal lobar degeneration from the Neuroimaging in Frontotemporal Dementia dataset. Conclusions Regional centile scores derived from an extensive normative dataset represent a generalizable method for objectively identifying atrophy in neurodegenerative diseases and can be applied to determine neurodegenerative atrophy at the individual level.
</description>
<pubDate>Wed, 12 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163738</guid>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Inclusive B-meson flavour-tagging algorithm at LHCb</title>
<link>https://hdl.handle.net/1721.1/163737</link>
<description>Inclusive B-meson flavour-tagging algorithm at LHCb
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A new algorithm is developed to identify the flavour of neutral B mesons at production in pp collisions by utilising all tracks from the hadronisation process. The algorithm is calibrated separately for B0 and Bs0 mesons using B0 → J/ψK+π− and Bs0 → Ds−π+ decays from pp collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV. This new algorithm improves the tagging power by 35% for B0 mesons and 20% for Bs0 mesons when compared to the combined performance of the existing LHCb flavour-tagging algorithms.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163737</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Analytical benchmark problems and methodological framework for the assessment and comparison of multifidelity optimization methods</title>
<link>https://hdl.handle.net/1721.1/163736</link>
<description>Analytical benchmark problems and methodological framework for the assessment and comparison of multifidelity optimization methods
Mainini, Laura; Serani, Andrea; Pehlivan-Solak, Hayriye; Di Fiore, Francesco; Rumpfkeil, Markus P.; Minisci, Edmondo; Quagliarella, Domenico; Yildiz, Sihmehmet; Ficini, Simone; Pellegrini, Riccardo; Thelen, Andrew; Bryson, Dean; Nikbay, Melike; Diez, Matteo; Beran, Philip S.
As engineering systems increase in complexity and performance demands intensify, Multidisciplinary Design Optimization (MDO) methodologies are becoming essential for integrating models from multiple disciplines to optimize complex multi-physics systems. Within this context, major challenges remain in selecting appropriate disciplinary fidelity levels, and how to couple them effectively. Multifidelity methods offer a promising path forward by strategically combining information sources of varying fidelity - whether computational or experimental - to enable efficient and scalable design exploration and optimization. Despite the development of numerous multifidelity methods, their comparative performance remains difficult to assess due to the absence of standardized benchmark frameworks capable of evaluating performance across diverse optimization tasks. To address this gap, this paper introduces a comprehensive benchmarking framework that includes: (i) a suite of analytical benchmark optimization problems designed to stress-test and validate multifidelity methods; (ii) a set of assessment metrics for quantifying and comparing performance over measurable objectives; and (iii) the classification, evaluation, and comparison of several families of multifidelity optimization methods and frameworks using the proposed benchmarks to identify their respective strengths and weaknesses in real-world scenarios. The proposed benchmark problems are analytically defined functions carefully selected to capture mathematical challenges commonly encountered in real-world applications, including high dimensionality, multimodality, discontinuities, and noise. Their closed-form nature ensures computational efficiency, high reproducibility, and a clear separation of algorithmic behavior from numerical artifacts. The accompanying performance metrics support the systematic evaluation of multifidelity methods, measuring both optimization effectiveness and global approximation accuracy. 
By providing a rigorous, reproducible, and accessible benchmarking framework, this work aims to enable the broader community to understand, compare, and advance multifidelity optimization methods for complex problems in science and engineering.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163736</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Embodiment, Relationships, and Sexuality: An Ethical Analysis of Extended Reality Technologies</title>
<link>https://hdl.handle.net/1721.1/163735</link>
<description>Embodiment, Relationships, and Sexuality: An Ethical Analysis of Extended Reality Technologies
Ramirez, Erick J.; Clark, Laura; Campbell, Sydney; Dreiman, Julian; Clay, Dorian; Gupta, Raghav; Jennett, Shelby
Communication technologies change the way we relate to each other and ourselves. In this essay we analyze the effects that extended reality (XR) technologies are likely to have on conceptions of the self, romantic relationships, and other associated concepts like sexual orientation. While these technologies are in their infancy, key psychological and philosophical concepts are already being explored. We begin by defining extended reality and the family of technologies that make it possible. We pay special attention to the way these immersive technologies ground the experiences of presence, which can become virtually real. These experiences provide a useful framework for understanding the phenomena of XR embodiment. XR embodiment, the experience of one’s self as embodied in XR, opens up the possibility of blended physical and digital narrative selves which form the basis of new forms of relationships. In a future where XR is incorporated into the basic social and political structures of society, XR embodiment and virtually real experiences challenge normative concepts like sex and sexual orientation. Contemporary conceptions of the self, sex, consent, and love emerged in purely physical contexts to help us navigate the limitations of physical embodiment. XR embodiment requires a new ethical framework to make room for these possibilities. We end the paper by assessing ethical risks XR embodiment can introduce for XR developers and researchers.
</description>
<pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163735</guid>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>IsoDAR@Yemilab: Preliminary design report—volume II (beam transport, neutrino source, and shielding)</title>
<link>https://hdl.handle.net/1721.1/163734</link>
<description>IsoDAR@Yemilab: Preliminary design report—volume II (beam transport, neutrino source, and shielding)
Spitz, Joshua; Alonso, Jose R.; Ameel, Jon; Barlow, Roger; Bartoszek, Larry; Bungau, Adriana; Shaevitz, Michael H.; Voirin, Erik A.; Winklehner, Daniel; Conrad, Janet M.; Engebretson, Samuel J.; Moon, Jarrett; Winkler, Eleanor; Adelmann, Andreas; Axani, Spencer N.; Barletta, William A.; Calabretta, Luciano; Calvo, Pedro; Chan, Andrew; Karagiorgi, Georgia
This Preliminary Design Report (PDR) describes the IsoDAR electron-antineutrino source in two volumes which are mostly site-independent and describe the cyclotron driver providing a 60 MeV, 10 mA proton beam (Volume I); and the Medium Energy Beam Transport (MEBT) line and target (this Volume). The IsoDAR driver and target will produce about 1.15 × 10²³ electron-antineutrinos over 5 calendar years. Paired with a kton-scale liquid scintillator detector, this will enable a broad particle physics program including searches for new symmetries, new interactions and new particles. Here in Volume II, we describe the Medium Energy Beam Transport line, the antineutrino source beam-target and surrounding sleeve, shielding, and plans for monitoring and installation.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163734</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>On Nontrivial Winning and Losing Parameters of Schmidt Games</title>
<link>https://hdl.handle.net/1721.1/163733</link>
<description>On Nontrivial Winning and Losing Parameters of Schmidt Games
Neckrasov, Vasiliy; Zhan, Eric
In this paper we study the classical Schmidt game on two families of sets: one related to frequencies of digits in base-2 expansions, and one connected to the set of badly approximable numbers. Namely, we describe some nontrivial winning and losing parameters (α, β) for these sets.
</description>
<pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163733</guid>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>International bureaucrats under transparency: The case of the WTO TRIPS Council</title>
<link>https://hdl.handle.net/1721.1/163732</link>
<description>International bureaucrats under transparency: The case of the WTO TRIPS Council
Park, Sojun; Kim, Minju
How does transparency affect the behavior of international bureaucrats tasked with facilitating negotiations? Existing theories offer opposing expectations—greater transparency might induce international bureaucrats to engage more with contentious issues that matter to the public or lead them to avoid those issues whenever possible. We assess these competing perspectives by analyzing the World Trade Organization (WTO)’s 2002 document de-restriction reform that enhanced transparency to the public. Specifically, we examine how prompt public disclosure of documents shapes the way the WTO Secretariat writes reports about the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). Using network statistics to estimate the state preference distributions on key topics, we find that, after the reform, the WTO Secretariat is more likely to issue reports on polarized topics in negotiations, using accountability-enhancing words. Our analysis at the country-year level shows that the reform led to greater national newspaper coverage of the WTO TRIPS, which in turn raised public awareness. The results suggest that transparency could empower international bureaucrats to tackle divisive issues in times of member-state gridlock.
</description>
<pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163732</guid>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of Large Events in Directed Sandpiles</title>
<link>https://hdl.handle.net/1721.1/163674</link>
<description>Prediction of Large Events in Directed Sandpiles
Shah, Dhruv
The degree of predictability of large avalanche events in the directed sandpile model is studied. This degree is defined in terms of how successfully a strategy can predict such events, as compared to a random guess. A waiting time based prediction strategy which exploits the local anticorrelation of large events is discussed. With this strategy we show analytically and numerically that large events are predictable, and that this predictability persists in the thermodynamic limit. We introduce another strategy which predicts large avalanches in the future based on the present excess density in the sandpile. We obtain the exact conditional probabilities for large events given an excess density, and use this to determine the exact form of the ROC predictability curves. We show that for this strategy, the model is predictable only for finite lattice sizes, and unpredictable in the thermodynamic limit. This behaviour is to be contrasted with previously established numerical studies carried out for Manna sandpiles.
</description>
<pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163674</guid>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>What are the most informative data points for predicting extreme events?</title>
<link>https://hdl.handle.net/1721.1/163673</link>
<description>What are the most informative data points for predicting extreme events?
Champenois, Bianca; Sapsis, Themistoklis P.
The growing availability of large datasets that describe complex dynamical systems, such as climate models and turbulence simulations, has made machine learning an increasingly popular tool for modeling and analysis, but the inherent low representation of extreme events poses a major challenge for model accuracy in the tails of the distribution. This raises a fundamental question: Given a large dataset, which data points should we use to train machine learning models that effectively learn extremes? To address this question, we study a likelihood-weighted active data selection framework that identifies the most informative data points for model training. The framework improves predictions of extreme values of a target observable, scales to high-dimensional systems, and is model-agnostic. Unlike traditional active learning, which assumes the ability to query new data, our method is designed for problems where the dataset is fixed but vast, focusing on selection rather than acquisition. Points are scored using a likelihood-weighted uncertainty sampling criterion that prioritizes samples expected to reduce model uncertainty and improve predictions in the tails of the distribution for systems with non-Gaussian statistics. When applied to a machine learning climate model with input dimensionality on the order of tens of thousands, we find that the likelihood-weighted active data selection algorithm most accurately captures the statistics of extreme events using only a fraction of the original dataset. We also introduce analysis techniques to further interpret the optimally selected points. Looking ahead, the approach can serve as a compression algorithm that preserves information associated with extreme events in vast datasets.
</description>
<pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163673</guid>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Coclique level structure for stochastic chemical reaction networks</title>
<link>https://hdl.handle.net/1721.1/163672</link>
<description>Coclique level structure for stochastic chemical reaction networks
Bruno, Simone; Fu, Yi; Campos, Felipe A.; Del Vecchio, Domitilla; Williams, Ruth J.
Continuous time Markov chains are commonly used as models for the stochastic behavior of chemical reaction networks. More precisely, these Stochastic Chemical Reaction Networks (SCRNs) are frequently used to gain a mechanistic understanding of how chemical reaction rate parameters impact the stochastic behavior of these systems. One property of interest is mean first passage times (MFPTs) between states. However, deriving explicit formulas for MFPTs can be highly complex. In order to address this problem, we first introduce the concept of coclique level structure and develop theorems to determine whether certain SCRNs have this feature by studying associated graphs. Additionally, we develop an algorithm to identify, under specific assumptions, all possible coclique level structures associated with a given SCRN. Finally, we demonstrate how the presence of such a structure in an SCRN allows us to derive closed form formulas for both upper and lower bounds for the MFPTs. Our methods can be applied to SCRNs taking values in a generic finite state space and can also be applied to models with non-mass-action kinetics. We illustrate our results with examples from the biological areas of epigenetics, neurobiology and ecology.
</description>
<pubDate>Mon, 10 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163672</guid>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Psyche Light Elements Investigation</title>
<link>https://hdl.handle.net/1721.1/163671</link>
<description>The Psyche Light Elements Investigation
Prettyman, Thomas H.; Mittlefehldt, David W.; Asphaug, Erik I.; Binzel, Richard P.; Courville, Samuel W.; Elkins-Tanton, Linda T.; Lawrence, David J.; Marchi, Simone; Merayo, José M. G.; McCoy, Timothy J.; Weiss, Benjamin P.
Light elements, such as C, S, Si, O, and H, are thought to be present in Earth’s liquid-Fe outer core. These elements lower melting temperatures, thereby allowing the core to remain in a liquid state at high pressure and influencing magnetic and geodynamic processes. However, the identity and abundance of the light elements in the cores of terrestrial planets, and how they were delivered to these cores, are not well known. The NASA Psyche mission will travel to and explore (16) Psyche, which may be the metal-rich core of a differentiated planetesimal exposed by collisional stripping. If so, the Psyche mission could provide a direct assessment of the light element content of an asteroidal core, allowing comparisons to the inferred composition of planetary cores and the parent bodies of the magmatic iron group meteorites. In particular, Earth’s high-pressure core formed gradually (over ∼100 Myr), in a multistage process, under increasingly oxidizing conditions, whereas the cores of planetesimals formed quickly (within 10 Myr) at low pressure, likely in chemical equilibrium with their mantles. The trace element systematics and mineral composition of magmatic iron meteorites indicate the presence of C, P, and S in planetesimal cores prior to solidification. Such elements would have played a role in core dynamics, including dynamo generation. Their low solubility combined with the immiscibility of their mineral precipitates would have resulted in their separation from Fe upon crystallization and their eruption onto the surface of a stripped core (via ferrovolcanism). The Psyche spacecraft will detect their elemental, mineral, and magnetic signatures with the payload instruments, which include a Gamma Ray and Neutron Spectrometer, a Multispectral Imager, and a Magnetometer. Additional constraints on interior composition and processes influenced by light elements will be provided by Psyche’s gravity and geomorphology investigations.
We provide a brief introduction to the topic of light elements along with prospects for (16) Psyche. While we emphasize core formation processes, we also consider other possibilities for the origin and evolution of this metal-rich body.
</description>
<pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163671</guid>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>A double exponential chirp waveform for noisy rheology</title>
<link>https://hdl.handle.net/1721.1/163670</link>
<description>A double exponential chirp waveform for noisy rheology
Waeterloos, Jarno L.; McKinley, Gareth H.; Clasen, Christian
In the search for faster rheometrical measurement techniques for fast time-evolving systems, optimally windowed chirps (OWCh) have recently been proposed for the determination of the complex modulus. However, such chirps are prone to artefacts at high frequencies because the input power is distributed over a range of frequencies, leading to reduced signal-to-noise ratios in noisy conditions. The Tukey window, which modulates the amplitude of the excitation disturbance and is required to avoid spectral leakage, directly reduces the signal-to-noise ratio at the edges of the signal, leading to a divergence of the measured moduli at high frequencies. A new double exponential chirp (DEC) signal is proposed to overcome these limitations. Its capabilities are demonstrated with orthogonal superposition rheometry as an example of a demanding high-noise environment. The S-shaped time-frequency history of the new chirp signal redistributes the input power over the frequency spectrum. Numerical simulations using the Maxwell and Giesekus models, along with orthogonal superposition measurements on wormlike micellar fluids, demonstrate the effectiveness of the DEC waveform. Parameter optimization with the Giesekus model identifies the ideal input configurations for achieving a maximum signal-to-noise ratio during rheological measurements.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163670</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Generative BigSMILES: an extension for polymer informatics, computer simulations &amp; ML/AI</title>
<link>https://hdl.handle.net/1721.1/163669</link>
<description>Generative BigSMILES: an extension for polymer informatics, computer simulations &amp; ML/AI
Schneider, Ludwig; Walsh, Dylan; Olsen, Bradley; de Pablo, Juan
The BigSMILES notation, a concise tool for polymer ensemble representation, is augmented here by introducing an enhanced version called generative BigSMILES (G-BigSMILES). G-BigSMILES is designed for generative workflows, and is complemented by tailored software tools for ease of use. This extension integrates additional data, including reactivity ratios (or connection probabilities among repeat units), molecular weight distributions, and ensemble size. An algorithm, interpretable as a generative graph, is devised that utilizes these data, enabling molecule generation from defined polymer ensembles. Consequently, the G-BigSMILES notation allows for efficient specification of complex molecular ensembles via a streamlined line notation, thereby providing a foundational tool for automated polymeric materials design. In addition, the graph interpretation of the G-BigSMILES notation sets the stage for robust machine learning methods capable of encapsulating intricate polymeric ensembles. The combination of G-BigSMILES with advanced machine learning techniques will facilitate straightforward property determination and in silico polymeric material synthesis automation. This integration has the potential to significantly accelerate materials design processes and advance the field of polymer science.
</description>
<pubDate>Fri, 17 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163669</guid>
<dc:date>2023-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Calculating Pairwise Similarity of Polymer Ensembles via Earth Mover’s Distance</title>
<link>https://hdl.handle.net/1721.1/163668</link>
<description>Calculating Pairwise Similarity of Polymer Ensembles via Earth Mover’s Distance
Shi, Jiale; Walsh, Dylan; Zou, Weizhong; Rebello, Nathan J; Deagen, Michael E; Fransen, Katharina A; Gao, Xian; Olsen, Bradley D; Audus, Debra J
Synthetic polymers, in contrast to small molecules and deterministic biomacromolecules, are typically ensembles composed of polymer chains with varying numbers, lengths, sequences, chemistry, and topologies. While numerous approaches exist for measuring pairwise similarity among small molecules and sequence-defined biomacromolecules, accurately determining the pairwise similarity between two polymer ensembles remains challenging. This work proposes the earth mover's distance (EMD) metric to calculate the pairwise similarity score between two polymer ensembles. EMD offers a greater resolution of chemical differences between polymer ensembles than the averaging method and provides a quantitative numeric value representing the pairwise similarity between polymer ensembles in alignment with chemical intuition. The EMD approach for assessing polymer similarity enhances the development of accurate chemical search algorithms within polymer databases and can improve machine learning techniques for polymer design, optimization, and property prediction.
</description>
<pubDate>Wed, 14 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163668</guid>
<dc:date>2024-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered selective biotoxin‐binding hydrogels for toxin sequestration</title>
<link>https://hdl.handle.net/1721.1/163667</link>
<description>Engineered selective biotoxin‐binding hydrogels for toxin sequestration
Morris, Melody A; Yang, Yun Jung; Mai, Danielle J; Olsen, Bradley D
The development of synthetic selective membranes that separate materials of similar sizes, charges, and/or polarities remains a difficult challenge, and looking towards biology provides inspiration for new designs. In this work, a series of cholera toxin binding peptides (CTBPs) are identified, spanning a range of binding inhibitions, and integrated into chemically cross‐linked cholera toxin binding gels (CTBGs) via thiol‐Michael polycondensation reactions. All gels demonstrate rheological profiles consistent with elastic solids. The CTBGs are probed via small‐angle neutron scattering and exhibit a correlation length, ξ, smaller than most proteins (1.3–2.5 nm). Thus, an effective entropic mesh is formed to block non‐targeted proteins. However, the CTBGs have a dynamic mesh size, Ξ, that is larger than cholera toxin (CT) to allow the transport of target proteins. The CTBGs with the highest binding inhibitions both show high selectivity and permeation of CT, rejecting all other tested proteins. In total, two new highly selective CTBGs are synthesized and validated for use in cholera toxin remediation. Together, this platform demonstrates the wide applicability of selectively‐diffusive materials for difficult separations.
</description>
<pubDate>Fri, 22 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163667</guid>
<dc:date>2024-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated small angle neutron scattering algorithms for polymeric materials</title>
<link>https://hdl.handle.net/1721.1/163666</link>
<description>Accelerated small angle neutron scattering algorithms for polymeric materials
Dai, Kexin; Olsen, Bradley D
Small-angle neutron scattering (SANS) is an extremely powerful technique for characterizing a wide variety of soft, biological, magnetic, and quantum materials, but it is often throughput-limited. This work proposes an algorithm to accelerate SANS experiments by estimating the minimum number of counts needed to perform parameter estimation and model differentiation tasks to a specified level of certainty. Three classes of model polymer materials were examined and analyzed, and time slices of SANS data were used to model a reduced number of counts. The scattering data with reduced numbers of counts were fitted to SANS model functions to perform parameter estimation and model differentiation tasks. For parameter estimation, estimators accurate to within 5–10% of the full count estimator can be produced with only 1–50% of the full counts, depending upon the sample and parameter of interest. In order to project parameter uncertainties at lower numbers of counts prior to the completion of experiments, it is crucial to have a robust error quantification method that reflects the true uncertainty associated with each parameter. Uncertainties from Monte Carlo (MC) bootstrapping are shown, in general, to overestimate the error from fitting many experimental replicates. For most parameter estimation techniques, the weighted least squares estimator is unbiased; however, certain models yield biased estimators. To differentiate between models, both the Akaike information criterion (AIC) and Bayesian information criterion (BIC) can be used, and with either criterion, reduced numbers of counts can still identify the best model for our samples from a group of related candidate models for each material.
The proposed algorithm can help SANS users optimize valuable beamtime and accelerate the use of SANS for structural characterization of libraries of materials while obtaining reasonable parameter estimation and model differentiation when scattering models are available.
</description>
<pubDate>Fri, 10 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163666</guid>
<dc:date>2025-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Comparative study of conventional and process intensification by reactive distillation designs for glycerol carbonate production from glycerol and diethyl carbonate</title>
<link>https://hdl.handle.net/1721.1/163665</link>
<description>Comparative study of conventional and process intensification by reactive distillation designs for glycerol carbonate production from glycerol and diethyl carbonate
Chalermthai, Bushra; Sriharuethai, Chayanin; Olsen, Bradley D; Ngaosuwan, Kanokwan; Soottitantawat, Apinan; Assabumrungrat, Suttichai; Charoensuppanimit, Pongtorn
Glycerol carbonate (GC) can be produced from glycerol (GL), a low-value byproduct in the biodiesel industry. In this work, continuous processes of GC production via transesterification from crude GL and diethyl carbonate (DEC) were developed using Aspen Plus. Two cases were considered, and their process performances were compared. In Case I, a conventional design consisted of a continuously stirred tank reactor for the reaction section and a distillation column for the purification section. In Case II, a process intensification design consisted of a reactive distillation column that could accommodate both reaction and purification within a single column. In both cases, the process optimizations were carried out by connecting the process models in Aspen Plus to MATLAB, using the Genetic Algorithm as the optimizer. The results showed that Case II was superior to Case I in terms of energy utilization, CO2 emissions, and economics, with a specific energy consumption of 1.92 kWh/kg of diethyl carbonate, an internal rate of return of 274%, a payback period of 1.44 years, and CO2 emissions of 0.26 kg CO2/kg DEC. Lastly, the proposed process in Case II was compared with GC production using dimethyl carbonate (DMC). It was found that using DEC was superior to DMC due to easier separation and glycidol avoidance.
</description>
<pubDate>Sun, 12 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163665</guid>
<dc:date>2025-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Designing for degradation: the importance of considering biotic and abiotic polymer degradation</title>
<link>https://hdl.handle.net/1721.1/163664</link>
<description>Designing for degradation: the importance of considering biotic and abiotic polymer degradation
Tantawi, Omar; Joo, Wontae; Martin, Elijah E; Av-Ron, Sarah HM; Bannister, K'yal R; Prather, Kristala LJ; Olsen, Bradley D; Plata, Desiree L
Considering the increasing global plastic demand, there is a critical need to gain insight into environmental processes that govern plastic degradation in order to inform novel design of sustainable polymers. Current biological degradation testing standards focus on formation of CO2 (i.e., mineralization) alone as a diagnostic, ultimately limiting identification of structure–degradation relationships in a timely fashion. This work developed a sequential abiotic (i.e., photodegradation and hydrolysis) and biotic degradation test and applied it to a suite of 18 polymers, including ten lab-produced, novel polyhydroxyalkanoate polyesters, and eight commercially available, bio-based (i.e., polylactic acid and poly-3-hydroxybutyrate) and fossil-derived (i.e., polystyrene, polypropylene, low density polyethylene, poly(ethylene terephthalate) and tire rubber) polymers. Biomineralization alone following standard methods (i.e., ASTM 6691-17, ISO 23977-1 2020) underestimated polymer degradation by up to two-fold over 28 days. Simulated sunlight enhanced the overall polymer degradation by mobilizing dissolved organic carbon (DOC). After photoirradiation, up to 100% of released dissolved organic carbon was bioavailable for marine microbes over 14 days. Photodegradation and hydrolysis could be explained by structural drivers in the commodity polymers, and the lab-synthesized polymers illustrated a limit to total degradation beyond which no enhancements in degradation were achieved. Taken together, this workflow allows for relatively fast experimental determination of environmentally relevant stimuli to help support eventual elucidation of structure–property relationships for enhanced a priori design of degradable polymers.
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163664</guid>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Seroprevalence of COVID-19 neutralizing antibodies among multi-ethnic staff of an Asian primary healthcare institution: insights from point-of-care testing and implications for booster vaccination decisions</title>
<link>https://hdl.handle.net/1721.1/163663</link>
<description>Seroprevalence of COVID-19 neutralizing antibodies among multi-ethnic staff of an Asian primary healthcare institution: insights from point-of-care testing and implications for booster vaccination decisions
Oka, Prawira; Jia, Huan; Kongsuphol, Patthara; Ng, Say Y.; Saravanan, Vivekanandan; Ng, Chirk J.; Moosa, Aminath S.; Xiong, Mengfei; Gun, Shih Y.; Tsang, Li P. M.; Lim, Jingyi; Vijaykumar, Kayshini; Ho, Cassandra X. Y.; Chua, Patrina W. L.; Ling, Sharon Y. H.
Background: COVID-19 vaccines have been crucial for establishing immunity; however, emerging data suggest vaccine efficacy is reduced within six months. Healthcare staff face an elevated COVID-19 risk and should make an informed decision to receive timely boosters to maintain their immunity. This study aims to determine the COVID-19 neutralizing antibody (nAb) seroprevalence among primary care staff and the impact of serological testing on their vaccination decision. Methods: This cross-sectional study involved multidisciplinary primary healthcare professionals working in 10 public primary care clinics from December 2022 to July 2023. A questionnaire captured sociodemographic data, COVID-19 related history, and attitudes toward serological testing. Their COVID-19 nAb levels were measured via the point-of-care CoVIm™ Rapid SARS-CoV-2 nAb Test and the laboratory cPass™ SARS-CoV-2 nAb Detection Kit. Results: The study included 474 subjects, mostly female (88.8%), with a mean age of 40.6 years (SD = 12.3). All received at least two COVID-19 vaccinations, and 80.6% reported at least one infection. COVID-19 nAb seroprevalence was high (99.2%). Post-vaccination, 79.7% contracted COVID-19, with the median time to infection being 163 days. Most staff (93.9%) desired to know their COVID-19 immunity status through a finger prick test (77.0%) instead of venepuncture. Over two-thirds (68.1%) indicated the results would influence their booster vaccination decision. Conclusion: The study revealed a high seroprevalence of COVID-19 nAb among the fully vaccinated participating staff. The necessity for timely boosters is underscored by 79.7% contracting COVID-19 post-vaccination. Most subjects were willing to undergo point-of-care testing, with results potentially influencing their decisions for booster vaccination.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163663</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Problem structuring in urban science education: Why, what, and how</title>
<link>https://hdl.handle.net/1721.1/163662</link>
<description>Problem structuring in urban science education: Why, what, and how
Lai, Yuan; Lavi, Rea
Urban science is an emerging and transdisciplinary field that attracts deep interest in planning degree programs from educational institutions worldwide. Urban science education emphasizes the science of cities and urban information technology by integrating design, engineering, system science, spatial science, behavioral and social science, decision science, and other disciplines. The increasing complexity of urban systems creates significant pedagogical challenges for urban science education, particularly in problem structuring, which is the process of structuring, or defining, (a) the scope of the problem, (b) the potential ways for addressing the problem, and (c) suitable criteria for judging solutions to the problem. In this article, we describe the theoretical foundations of problem structuring in relation to urban science education and explain why it is difficult to teach. In response to this pedagogical challenge, we propose DIMES (Describe, Inquire, Model, Extract, and State), a novel domain-agnostic method combining design thinking and systems thinking developed for problem structuring at any level of higher education. We describe how the DIMES method can be integrated into urban science curricula in relation to critical considerations for teaching urban science problem structuring, fast-evolving smart city development, and the disruptive impact of generative artificial intelligence on urban science education. Finally, we provide our thoughts on potential future studies with DIMES in urban science learning settings.
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163662</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>The power of fragmented elites: the role of inadvertent robust action</title>
<link>https://hdl.handle.net/1721.1/163661</link>
<description>The power of fragmented elites: the role of inadvertent robust action
Mizruchi, Mark S.; Chu, Johan S. G.
It is broadly accepted among political scientists, political sociologists, and social movement theorists that a unified group will have a higher probability of success than a group that experiences internal divisions or fragmentation. Similarly, it has been assumed that in a society with a relatively unified elite, the elite will experience disproportionately higher benefits relative to the larger population. We take issue with this claim. In the mid-twentieth century, large American corporations exhibited a relatively high level of unity, but the relative economic benefits accruing to the elite were at historic lows. In more recent years, American big business has become increasingly fragmented, yet the economic benefits that these elites have received have reached historic highs, and the average American’s standard of living has stagnated. Drawing on Padgett and Ansell, we introduce the concept of inadvertent robust action to explain how a relatively fragmented, disorganized elite can reap benefits that exceed those that its more unified counterparts experienced in an earlier era. We conclude with a discussion of the conditions under which our formulation can be expected to hold.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163661</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Basic Elements of Strong Gravitational Lensing</title>
<link>https://hdl.handle.net/1721.1/163660</link>
<description>Basic Elements of Strong Gravitational Lensing
Schechter, Paul L.; Schnittman, Jeremy D.
Even when used to describe the same phenomenon, equations, graphics and words each give different perspectives and lead to complementary insights. The basic elements of strong gravitational lensing are introduced here favoring words and graphics over equations whenever possible. Fermat’s principle is the fundamental driver of strong lensing. Three “D’s” encapsulate the essential effects of lensing: Delay, Deflection and Distortion. Gravity and geometry both contribute to the delay of photons from a lensed source. Their interplay determines how the images of a source are deflected and how they are stretched or compressed. Caustics and critical curves are explained. Images of doubly, triply, quadruply and quintuply lensed sources are displayed. A table of symbols, their definitions and distinctions provides a summary of the basic elements of strong lensing.
</description>
<pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163660</guid>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Search for dark matter production in association with a single top quark in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163659</link>
<description>Search for dark matter production in association with a single top quark in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS collaboration
A search for the production of a single top quark in association with invisible particles is performed using proton-proton collision data collected with the CMS detector at the LHC at $$\sqrt{s}=13$$ TeV, corresponding to an integrated luminosity of 138 fb−1. In this search, a flavor-changing neutral current produces a single top quark or antiquark and an invisible state nonresonantly. The invisible state consists of a hypothetical spin-1 particle acting as a new mediator and decaying to two spin-1/2 dark matter candidates. The analysis searches for events in which the top quark or antiquark decays hadronically. No significant excess of events compatible with that signature is observed. Exclusion limits at 95% confidence level are placed on the masses of the spin-1 mediator and the dark matter candidates, and are compared to constraints from the dark matter relic density measurements. In a vector (axial-vector) coupling scenario, masses of the spin-1 mediator are excluded up to 1.85 (1.85) TeV with an expectation of 2.0 (2.0) TeV, whereas masses of the dark matter candidates are excluded up to 0.75 (0.55) TeV with an expectation of 0.85 (0.65) TeV.
</description>
<pubDate>Wed, 17 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163659</guid>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the Ωc0 and Ξc0 baryon lifetimes using hadronic b-baryon decays</title>
<link>https://hdl.handle.net/1721.1/163658</link>
<description>Measurement of the Ωc0 and Ξc0 baryon lifetimes using hadronic b-baryon decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The lifetimes of the Ωc0 and Ξc0 baryons are measured using a pp collision dataset collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The charm baryons are produced in the fully reconstructed decay chains Ωb− → Ωc0(→ pK−K−π+)π− and Ξb− → Ξc0(→ pK−K−π+)π−. The measurement uses topologically and kinematically similar B− → D0(→ K−K+π−π+)π− decays for normalisation. The measured lifetimes are τ(Ωc0) = 276.3 ± 19.4 (stat) ± 1.8 (syst) ± 0.7 (τD0) fs and τ(Ξc0) = 149.2 ± 2.5 (stat) ± 0.9 (syst) ± 0.4 (τD0) fs, where the first uncertainty is statistical, the second systematic, and the third due to the uncertainty of the D0 lifetime. These results are consistent with previous measurements performed by the LHCb experiment.
</description>
<pubDate>Thu, 18 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163658</guid>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Wormholes, branes and finite matrices in sine dilaton gravity</title>
<link>https://hdl.handle.net/1721.1/163657</link>
<description>Wormholes, branes and finite matrices in sine dilaton gravity
Blommaert, Andreas; Levine, Adam; Mertens, Thomas G.; Papalini, Jacopo; Parmentier, Klaas
We compute the double trumpet in sine dilaton gravity via WdW quantization. The wormhole size is discretized. The wormhole amplitude matches the spectral correlation of a finite-cut matrix integral, where matrices have large but finite dimensions. This strongly suggests an identification of the sine dilaton gravity theory with the q-deformed JT gravity matrix integral. At the very least, it captures all universal content of that matrix model. The disk decomposes into the physical (gauge invariant) solutions of the WdW equation, which are trumpets with discrete sizes. This decomposition modifies the usual no-boundary wavefunction to a normalizable one in sine dilaton gravity. We furthermore present an exact quantization of sine dilaton gravity with open and closed end-of-the-world (EOW) branes. These EOW branes correspond to FZZT branes for the two Liouville theories that make up sine dilaton gravity. The WdW equation implies redundancies in this space of branes, leaving a one parameter family of gauge invariant branes. One gauge choice corresponds to branes discussed by Okuyama in the context of DSSYK. Legendre transforming the EOW brane amplitude reproduces the trumpet. One could read our work as fleshing out the Hilbert space of closed universes in sine dilaton gravity.
</description>
<pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163657</guid>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Precision e+e− hemisphere masses in the dijet region with power corrections</title>
<link>https://hdl.handle.net/1721.1/163656</link>
<description>Precision e+e− hemisphere masses in the dijet region with power corrections
Hoang, André H.; Mateu, Vicent; Schwartz, Matthew D.; Stewart, Iain W.
We derive high-precision results for the e+e− heavy jet mass (HJM) dσ/dρ and dihemisphere mass (DHM) d2σ/(ds1ds2) distributions, for s1 ~ s2, in the dijet region. New results include: i) the N3LL resummation for HJM of large logarithms ln^n(ρ) at small ρ, including the exact two-loop non-global hemisphere soft function, the 4-loop cusp anomalous dimension, and the 3-loop hard and jet functions, ii) N3LL results for DHM with resummation of logarithms ln(s1,2/Q2) when there is no large separation between s1 and s2, iii) profile functions for HJM to give results simultaneously valid in the peak and tail regions, iv) a complete two-dimensional basis of non-perturbative functions which can be used for double differential observables, that are needed for both HJM and DHM in the peak region, and v) an implementation of renormalon subtractions for large-angle soft radiation to O(αs^3) together with a resummation of the additional large ln(Qρ/ΛQCD) logarithms. Here Q is the e+e− center-of-mass energy. Our resummation results are combined with known fixed-order O(αs^3) results, and we discuss the convergence and remaining perturbative uncertainty in the cross section. We also prove that, at order 1/Q, the first moment of the HJM distribution involves an additional non-perturbative parameter compared to the power correction that shifts the tail of the spectrum (where 1 ≫ ρ ≫ ΛQCD/Q). This differs from thrust, where a single non-perturbative parameter at order 1/Q describes both the first moment and the tail, and it disfavors models of power corrections employing a single non-perturbative parameter, such as the low-scale effective coupling model. In this paper we focus only on the dijet region, not the far-tail distribution for ρ ≳ 0.2 beyond which the trijet factorization and resummation become important.
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163656</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Modular chaos, operator algebras, and the Berry phase</title>
<link>https://hdl.handle.net/1721.1/163655</link>
<description>Modular chaos, operator algebras, and the Berry phase
de Boer, Jan; Najian, Bahman; van der Heijden, Jeremy; Zukowski, Claire
Modular Berry transport associates a geometric phase to a zero mode ambiguity in a family of modular operators. In holographic settings, this phase was shown to encode nontrivial information about the emergent spacetime geometry. We reformulate modular Berry transport for arbitrary von Neumann algebras, including giving a precise definition of the zero mode projection in terms of a conditional expectation. For a certain class of state perturbations, we demonstrate that the modular Berry phase gives rise to an emergent symplectic form in the large N limit, extending related results in the context of subregion/subalgebra duality. We also show that the vanishing of the Berry curvature for modular scrambling modes signals the emergence of a local Poincaré algebra, which plays a key role in the quantum ergodic hierarchy. These results provide an intriguing relation between geometric phases, modular chaos and the local structure of spacetime.
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163655</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Builtsphere: A Broken Geological Paradigm</title>
<link>https://hdl.handle.net/1721.1/163654</link>
<description>The Builtsphere: A Broken Geological Paradigm
Parreño Alonso, Cristina
This essay discusses the role that architecture plays as a new geological paradigm. Similar to the way geologist Peter K. Haff conceived the technosphere as “the proliferation of technology across the globe,” this essay defines the builtsphere as the proliferation of everything built across the planet and proposes both—the technosphere and the builtsphere—as subsystems of the anthroposphere. This essay illustrates this way of thinking architecture with a pedagogical experiment developed as a design studio that takes issue with the various ways in which the builtsphere has caused the breakdown of the Earth cycles.
</description>
<pubDate>Fri, 07 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163654</guid>
<dc:date>2022-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>The Brinkmanship Game: Bargaining Under the Mutual Risk of Escalation*</title>
<link>https://hdl.handle.net/1721.1/163653</link>
<description>The Brinkmanship Game: Bargaining Under the Mutual Risk of Escalation*
Haun, Phil; O’Hara, Michael
This article describes a simple two-player game which illustrates basic concepts of brinkmanship, to include calculations of probability and expected outcomes, and risk-taking profiles. The game befits a single 50-minute class period with introduction, gameplay, and discussion. The game can supplement the study of conflict from classic Cold War case studies of crisis bargaining, to arms control, or negotiating international protocols for global climate change such as the Paris Agreement. The Brinkmanship Game was developed for the seventh week of a 10-week graduate course called Game Theory and Decisionmaking: Exploring Strategic Situations. The course features a flipped classroom with class time devoted to experimentation, gameplay, and discussion of readings and games; lectures are online. The Brinkmanship Game would be appropriate for students in any advanced undergraduate or graduate level course in international relations, security studies, negotiation, or game theory. The Brinkmanship Game provides an active learning opportunity that can be valuable for encouraging students to come to their own understanding of concepts of mutual risk-taking. The authors have found the game to be effective in the classroom and hope it may prove valuable to those searching for ways to motivate students and to help them learn.
</description>
<pubDate>Mon, 14 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163653</guid>
<dc:date>2022-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>From the square to the shopping mall: new social media, state surveillance, and the evolving geographies of urban protest</title>
<link>https://hdl.handle.net/1721.1/163652</link>
<description>From the square to the shopping mall: new social media, state surveillance, and the evolving geographies of urban protest
Stokols, Andrew
Despite the rise of social media as a major factor in protests since the early 2010s, scholars have documented the continued importance of urban space and “place-based networks” for social movements. However, the 2019–2020 Hong Kong Anti-ELAB protests saw a shift from occupying symbolic public space to a more variegated use of urban spaces in the city. Combining network analysis of Telegram channels and georeferencing of protest events, this study shows how new digital media platforms such as Telegram enabled a diverse array of protest activities, as well as a shift from formal centrally located civic spaces to a wider range of everyday spaces including malls, offices, and industrial buildings. This study also asks why this occurred, situating the shifting geography of protests as a response to several factors: new social media technologies, strengthening of state surveillance of physical and digital space, and collective learning from the perceived failures of past movements. The implications of these shifts for the future of urban social movements and the “public sphere” are discussed.
</description>
<pubDate>Wed, 22 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163652</guid>
<dc:date>2022-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Interdependence of driver and pedestrian behavior in naturalistic roadway negotiations</title>
<link>https://hdl.handle.net/1721.1/163651</link>
<description>Interdependence of driver and pedestrian behavior in naturalistic roadway negotiations
Noonan, T Zach; Gershon, Pnina; Domeyer, Josh; Mehler, Bruce; Reimer, Bryan
OBJECTIVE: This paper characterizes the actions of pedestrian-driver dyads by examining their interdependence across intersection types (e.g., zebra crossings, stop signs). Additionally, the analysis of interdependence captures other external factors, such as other vehicles or pedestrians, that may influence the interaction.&#13;
METHODS: A 228 epoch vehicle-pedestrian interaction dataset was extracted from a large naturalistic driving data collection effort, which included vehicle, pedestrian, and contextual information (e.g., intersection type, jaywalking, vehicle maneuver, and lead vehicle presence). An expanded Actor-Partner Interdependence Model (APIM) was used to analyze driver-pedestrian dyads using driver and pedestrian standard deviations of velocity as the independent variables and wait times as dependent variables. APIM structural equation models were augmented to include driver effects (i.e., lead vehicle and maneuver type) and pedestrian effects (i.e., lead pedestrian, crossing group size, crossing direction).&#13;
RESULTS: The level of protection afforded by an intersection had an effect on the extent of driver-pedestrian dyadic behavior. Interactions in undesignated crossings (i.e., jaywalking) were associated with interdependent behavior whereas interactions in designated crossings (i.e., crosswalks and parking lots) showed a partner effect on the driver's wait time but no significant corresponding partner effect on the pedestrian. Finally, protected intersection interactions (i.e., traffic lights and stop signs) demonstrated no significant partner effects.&#13;
CONCLUSIONS: The difference in behavior patterns associated with the intersection type and level of protection shows that context can mediate the level of negotiation required between drivers and pedestrians. These findings inform how context and driver-pedestrian interactions should be incorporated in future modeling efforts which may, ultimately, support design of automated systems that are able to interact more safely, efficiently, and socially.
</description>
<pubDate>Fri, 26 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163651</guid>
<dc:date>2022-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>When do systematic strategies decay?</title>
<link>https://hdl.handle.net/1721.1/163650</link>
<description>When do systematic strategies decay?
Falck, Antoine; Rej, Adam; Thesmar, David
Published anomalies evaluated outside the data sample deliver about 50% of in-sample performance.
</description>
<pubDate>Mon, 08 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163650</guid>
<dc:date>2022-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare</title>
<link>https://hdl.handle.net/1721.1/163649</link>
<description>When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare
Marotta, Angelica
Healthcare professionals can leverage artificial intelligence (AI) to provide better care for their patients. However, it is also necessary to consider that AI algorithms operate according to historical diagnostic data, which often include evidence gathered from men. The biases of prior practices and the perpetuation of exclusionary processes toward women can lead to inaccurate medical decisions. The ramifications of such errors show that the incorrect use of AI raises several critical questions regarding who should be responsible for potential incidents. This study aims to provide an analysis of the role of AI in affecting women’s healthcare and an overview of the liability implications caused by AI mistakes. Finally, this work presents a framework for algorithmic auditing to ensure that AI data are collected and stored according to secure, legal, and fair practices.
</description>
<pubDate>Mon, 20 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163649</guid>
<dc:date>2022-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Toughening and Imparting Deconstructability to 3D‐Printed Glassy Thermosets with “Transferinker” Additives</title>
<link>https://hdl.handle.net/1721.1/163646</link>
<description>Toughening and Imparting Deconstructability to 3D‐Printed Glassy Thermosets with “Transferinker” Additives
Qin, K Peter; Herzog‐Arbeitman, Abraham; Zou, Weizhong; Chakraborty, Saswata; Kristufek, Samantha L; Husted, Keith EL; Joly, Guy D; Craig, Stephen L; Olsen, Bradley D; Johnson, Jeremiah A
Thermoset toughness and deconstructability are often opposing features; simultaneously improving both without sacrificing other mechanical properties (e.g., stiffness and tensile strength) is difficult, but, if achieved, could enhance the usage lifetime and end‐of‐life options for these materials. Here, a strategy that addresses this challenge in the context of photopolymer resins commonly used for 3D printing of glassy, acrylic thermosets is introduced. It is shown that incorporating bis‐acrylate “transferinkers,” which are cross‐linkers capable of undergoing degenerative chain transfer and new strand growth, as additives (5–25 mol%) into homemade or commercially available photopolymer resins leads to photopolymer thermosets with substantially improved tensile toughness and triggered chemical deconstructability with minimal impacts on Young's moduli, tensile strengths, and glass transition temperatures. These properties result from a transferinker‐driven topological transition in network structure from the densely cross‐linked long, heterogeneous primary strands of traditional photopolymer networks to more uniform, star‐like networks with few dangling ends; the latter structure more effectively bears stress yet is also more easily depercolated via solvolysis. Thus, transferinkers represent a simple and effective strategy for improving the mechanical properties of photopolymer thermosets and providing a mechanism for their triggered deconstructability.
</description>
<pubDate>Wed, 11 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163646</guid>
<dc:date>2024-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>DustNet++: Deep Learning-Based Visual Regression for Dust Density Estimation</title>
<link>https://hdl.handle.net/1721.1/163644</link>
<description>DustNet++: Deep Learning-Based Visual Regression for Dust Density Estimation
Michel, Andreas; Weinmann, Martin; Kuester, Jannick; AlNasser, Faisal; Gomez, Tomas; Falvey, Mark; Schmitz, Rainer; Middelmann, Wolfgang; Hinz, Stefan
Detecting airborne dust in standard RGB images presents significant challenges. Nevertheless, the monitoring of airborne dust holds substantial potential benefits for climate protection, environmentally sustainable construction, scientific research, and various other fields. To develop an efficient and robust algorithm for airborne dust monitoring, several hurdles have to be addressed. Airborne dust can be opaque or translucent, exhibit considerable variation in density, and possess indistinct boundaries. Moreover, distinguishing dust from other atmospheric phenomena, such as fog or clouds, can be particularly challenging. To meet the demand for a high-performing and reliable method for monitoring airborne dust, we introduce DustNet++, a neural network designed for dust density estimation. DustNet++ leverages feature maps from multiple resolution scales and semantic levels through window and grid attention mechanisms to maintain a sparse, globally effective receptive field with linear complexity. To validate our approach, we benchmark the performance of DustNet++ against existing methods from the domains of crowd counting and monocular depth estimation using the Meteodata airborne dust dataset and the URDE binary dust segmentation dataset. Our findings demonstrate that DustNet++ surpasses comparative methodologies in terms of regression and localization capabilities.
</description>
<pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163644</guid>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>The first half-century of empirical capital markets research in accounting in pictures</title>
<link>https://hdl.handle.net/1721.1/163643</link>
<description>The first half-century of empirical capital markets research in accounting in pictures
Kothari, S. P.; Schonberger, Bryce; Wasley, Charles; Xiao, Jason J.
Seminal papers by Ball and Brown (1968) and Beaver (1968) spawned a vast literature on the role of accounting numbers in capital markets. This literature, often referred to as capital markets research in accounting (CMRA), is now more than a half-century old. In light of numerous changes to the economic and financial reporting environments over this time, we estimate CMRA’s major relations using a comprehensive sample period. We illustrate each relation using plots, allowing us to efficiently present CMRA’s first half-century consistent with the adage “a picture is worth a thousand words.” The aims of our study are to document the extent of time-series variation in CMRA’s major relations and to provide evidence on market-level determinants of that variation. In doing so, our study provides a natural starting point for future research designed to develop and test additional causal explanations for time-series variation in the properties of CMRA’s major relations.
</description>
<pubDate>Sat, 07 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163643</guid>
<dc:date>2025-06-07T00:00:00Z</dc:date>
</item>
<item>
<title>SeCOM-B: an integrated model for understanding human behaviour change in wicked socio-ecological problems</title>
<link>https://hdl.handle.net/1721.1/163642</link>
<description>SeCOM-B: an integrated model for understanding human behaviour change in wicked socio-ecological problems
Nguyen-Trung, Kien; Saeri, Alexander K.; Zhao, Kun; Boulet, Mark; Kaufman, Stefan
The COM-B model, widely adopted in behaviour change research, systematically explores and categorises the behavioural barriers and facilitators to inform intervention design. The model highlights that where the right mix of barriers and facilitators, in the broad categories of capability, motivation, and opportunity exist, a given behaviour is more likely to be enacted. However, for wicked problems, applying the COM-B model becomes challenging due to complexity, uncertainty, manageability challenges, and the interpretative opacity of the systems that influence behaviour. This paper introduces a combined framework (SeCOM-B) that integrates the Socio-ecological model (SEM) and the COM-B model, highlighting its potential application in co-designing behaviour change interventions to address wicked problems, which often involve non-scientific stakeholders and interdisciplinary team members. Drawing on three case studies of practical behaviour change projects taking place in Australia (2) and Vietnam (1) between March 2022 and July 2023, the paper further illustrates the application of the SeCOM-B model in analysing the drivers and barriers of behaviours, and exploring the implications for intervention design.
</description>
<pubDate>Fri, 19 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163642</guid>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>Horizontal transfer of matrix metalloproteinase genes links early animal and microbial evolution</title>
<link>https://hdl.handle.net/1721.1/163641</link>
<description>Horizontal transfer of matrix metalloproteinase genes links early animal and microbial evolution
Parsons, Chris; Fournier, Gregory P.
Background The early evolution of animals is characterized by the emergence of complex tissues, organs, and integument, made possible in part by the diversification of groups of structural proteins. The abundance of this new kind of organic material in the environment would have provided novel nutrient opportunities for microbes, as part of the beginnings of animal-microbial coevolution. Indeed, a diverse ensemble of extant microbial groups appear to possess the enzymatic ability to cleave collagen, the most abundant animal-specific protein, through the use of matrix metalloproteinases (MMPs). In animals, MMPs serve to reshape the extracellular matrix in the course of development, but their prevalence in the microbial world has been largely overlooked. Results MMPs have extensive diversity in Bacteria, Eumetazoa, and Streptophyta. We show that in marine metagenomes, MMP abundance is highly correlated with chitinase abundance, implying that even microbial MMPs are associated with animal-derived substrates. Reconstructing the phylogeny of MMP proteins reveals a history of rapid diversification, as well as multiple interkingdom and interdomain horizontal gene transfers. Included among these is a transfer to the ancestral lineage of the archaeal family Methanosarcinaceae, constraining this group to postdate the evolution of collagen, and therefore animal diversification. Conclusions MMPs have an unusual genetic history, marked by multiple instances of gene transfer between bacteria and multicellular eukaryotes, a smoking gun for some of the earliest coevolution between prokaryotes and metazoans. By calculating an end-Permian divergence of Methanosarcina, we demonstrate that the phylogenies of substrate-specific enzymes can provide valuable older-bound age calibrations for improving molecular clock age estimates across the Tree of Life.
</description>
<pubDate>Wed, 05 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163641</guid>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>New physics versus quenching factors in Coherent Neutrino Scattering</title>
<link>https://hdl.handle.net/1721.1/163640</link>
<description>New physics versus quenching factors in Coherent Neutrino Scattering
Li, Yulun; Herrera, Gonzalo; Huber, Patrick
Recent results on the Coherent Elastic Neutrino-Nucleus Scattering (CEνNS) on germanium present significant discrepancies among experiments. We perform a combined analysis of the Dresden-II, CONUS+ and COHERENT data, quantifying the impact of quenching factor uncertainties on their CEνNS cross section measurement. No choice of quenching factor can bring these three data sets into mutual agreement, whereas the combination of COHERENT with either Dresden-II or CONUS+ agrees well albeit for very different quenching factors. We further study the quenching factor dependence on the sensitivity of these experiments to a large neutrino magnetic moment, finding that the constraints can vary by up to an order of magnitude. Our work highlights the importance of reducing this uncertainty on quenching factors in order to probe new physics from neutrinos at the low-energy frontier.
</description>
<pubDate>Wed, 05 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163640</guid>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Skydiving to bootstrap islands</title>
<link>https://hdl.handle.net/1721.1/163639</link>
<description>Skydiving to bootstrap islands
Liu, Aike; Simmons-Duffin, David; Su, Ning; van Rees, Balt C.
We study families of semidefinite programs (SDPs) that depend nonlinearly on a small number of “external” parameters. Such families appear universally in numerical bootstrap computations. The traditional method for finding an optimal point in parameter space works by first solving an SDP with fixed external parameters, then moving to a new point in parameter space and repeating the process. Instead, we unify solving the SDP and moving in parameter space in a single algorithm that we call “skydiving”. We test skydiving on some representative problems in the conformal bootstrap, finding significant speedups compared to traditional methods.
</description>
<pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163639</guid>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Search for top squarks in final states with many&#13;
light-flavor jets and 0, 1, or 2 charged leptons in&#13;
proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163638</link>
<description>Search for top squarks in final states with many&#13;
light-flavor jets and 0, 1, or 2 charged leptons in&#13;
proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; The CMS collaboration
Several new physics models including versions of supersymmetry (SUSY) characterized by R-parity violation (RPV) or with additional hidden sectors predict the production of events with top quarks, low missing transverse momentum, and many additional quarks or gluons. The results of a search for top squarks decaying to two top quarks and six additional light-flavor quarks or gluons are reported. The search employs a novel machine learning method for background estimation from control samples in data using decorrelated discriminators. The search is performed using events with 0, 1, or 2 electrons or muons in conjunction with at least six jets. No requirement is placed on the magnitude of the missing transverse momentum. The result is based on a sample of proton-proton collisions at $$\sqrt{s}=13$$ TeV corresponding to 138 fb−1 of integrated luminosity collected with the CMS detector at the LHC in 2016–2018. With no statistically significant excess of events observed beyond the expected contributions from the standard model, the data are used to determine upper limits on the top squark pair production cross section in the frameworks of RPV and stealth SUSY. Models with top squark masses less than 700 (930) GeV are excluded at 95% confidence level for RPV (stealth) SUSY scenarios.
</description>
<pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163638</guid>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Study of same-sign W boson scattering and anomalous couplings in events with one tau lepton from pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163637</link>
<description>Study of same-sign W boson scattering and anomalous couplings in events with one tau lepton from pp collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
A first study is presented of the cross section for the scattering of same-sign W boson pairs via the detection of a τ lepton. The data from proton-proton collisions at the center-of-mass energy of 13 TeV were collected by the CMS detector at the LHC, and correspond to an integrated luminosity of 138 fb−1. Events were selected that contain two jets with large pseudorapidity and large invariant mass, one τ lepton, one light lepton (e or μ), and significant missing transverse momentum. The measured cross section for electroweak same-sign WW scattering is $${1.44}_{-0.56}^{+0.63}$$ times the standard model prediction. In addition, a search is presented for the indirect effects of processes beyond the standard model via the effective field theory framework, in terms of dimension-6 and dimension-8 operators.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163637</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Search for dark matter produced in association with a Higgs boson decaying to a τ lepton pair in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163636</link>
<description>Search for dark matter produced in association with a Higgs boson decaying to a τ lepton pair in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS Collaboration
A search for dark matter particles produced in association with a Higgs boson decaying into a pair of τ leptons is performed using data collected in proton-proton collisions at a center-of-mass energy of 13 TeV with the CMS detector. The analysis is based on a data set corresponding to an integrated luminosity of 101 fb−1 collected in 2017–2018. No significant excess over the expected standard model background is observed. This result is interpreted within the frameworks of the 2HDM+a and baryonic Z′ benchmark simplified models. The 2HDM+a model is a type-II two-Higgs-doublet model featuring a heavy pseudoscalar with an additional light pseudoscalar. Upper limits at 95% confidence level are set on the product of the production cross section and the branching fraction for each of these two simplified models. Heavy pseudoscalar boson masses between 400 and 700 GeV are excluded for a light pseudoscalar mass of 100 GeV. For the baryonic Z′ model, a statistical combination is made with an earlier search based on a data set of 36 fb−1 collected in 2016. In this model, Z′ boson masses up to 1050 GeV are excluded for a dark matter particle mass of 1 GeV.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163636</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Massively parallel enrichment of low-frequency alleles enables duplex sequencing at low depth</title>
<link>https://hdl.handle.net/1721.1/163635</link>
<description>Massively parallel enrichment of low-frequency alleles enables duplex sequencing at low depth
Gydush, Gregory; Nguyen, Erica; Bae, Jin H; Blewett, Timothy; Rhoades, Justin; Reed, Sarah C; Shea, Douglas; Xiong, Kan; Liu, Ruolin; Yu, Fangyan; Leong, Ka Wai; Choudhury, Atish D; Stover, Daniel G; Tolaney, Sara M; Krop, Ian E; Christopher Love, J; Parsons, Heather A; Mike Makrigiorgos, G; Golub, Todd R; Adalsteinsson, Viktor A
Assaying for large numbers of low-frequency mutations requires sequencing at extremely high depth and accuracy. Increasing sequencing depth aids the detection of low-frequency mutations yet limits the number of loci that can be simultaneously probed. Here we report a method for the accurate tracking of thousands of distinct mutations that requires substantially fewer reads per locus than conventional hybrid-capture duplex sequencing. The method, which we named MAESTRO (for minor-allele-enriched sequencing through recognition oligonucleotides), combines massively parallel mutation enrichment with duplex sequencing to track up to 10,000 low-frequency mutations, with up to 100-fold fewer reads per locus. We show that MAESTRO can be used to test for chimaerism by tracking donor-exclusive single-nucleotide polymorphisms in sheared genomic DNA from human cell lines, to validate whole-exome sequencing and whole-genome sequencing for the detection of mutations in breast-tumour samples from 16 patients, and to monitor the patients for minimal residual disease via the analysis of cell-free DNA from liquid biopsies. MAESTRO improves the breadth, depth, accuracy and efficiency of mutation testing by sequencing.
</description>
<pubDate>Thu, 17 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163635</guid>
<dc:date>2022-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Peanut oral immunotherapy differentially suppresses clonally distinct subsets of T helper cells</title>
<link>https://hdl.handle.net/1721.1/163634</link>
<description>Peanut oral immunotherapy differentially suppresses clonally distinct subsets of T helper cells
Monian, Brinda; Tu, Ang A; Ruiter, Bert; Morgan, Duncan M; Petrossian, Patrick M; Smith, Neal P; Gierahn, Todd M; Ginder, Julia H; Shreffler, Wayne G; Love, J Christopher
Food allergy affects an estimated 8% of children in the United States. Oral immunotherapy (OIT) is a recently approved treatment, with outcomes ranging from sustained tolerance to food allergens to no apparent benefit. The immunological underpinnings that influence clinical outcomes of OIT remain largely unresolved. Using single-cell RNA-Seq and paired T cell receptor α/β (TCRα/β) sequencing, we assessed the transcriptomes of CD154+ and CD137+ peanut-reactive T helper (Th) cells from 12 patients with peanut allergy longitudinally throughout OIT. We observed expanded populations of cells expressing Th1, Th2, and Th17 signatures that further separated into 6 clonally distinct subsets. Four of these subsets demonstrated a convergence of TCR sequences, suggesting antigen-driven T cell fates. Over the course of OIT, we observed suppression of Th2 and Th1 gene signatures in effector clonotypes but not T follicular helper-like (Tfh-like) clonotypes. Positive outcomes were associated with stronger suppression of Th2 signatures in Th2A-like cells, while treatment failure was associated with the expression of baseline inflammatory gene signatures that were present in Th1 and Th17 cell populations and unmodulated by OIT. These results demonstrate that differential clinical responses to OIT are associated with both preexisting characteristics of peanut-reactive CD4+ T cells and suppression of a subset of Th2 cells.
</description>
<pubDate>Tue, 23 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163634</guid>
<dc:date>2021-11-23T00:00:00Z</dc:date>
</item>
<item>
<title>Mitochondrial variant enrichment from high-throughput single-cell RNA sequencing resolves clonal populations</title>
<link>https://hdl.handle.net/1721.1/163633</link>
<description>Mitochondrial variant enrichment from high-throughput single-cell RNA sequencing resolves clonal populations
Miller, Tyler E; Lareau, Caleb A; Verga, Julia A; DePasquale, Erica AK; Liu, Vincent; Ssozi, Daniel; Sandor, Katalin; Yin, Yajie; Ludwig, Leif S; El Farran, Chadi A; Morgan, Duncan M; Satpathy, Ansuman T; Griffin, Gabriel K; Lane, Andrew A; Love, J Christopher; Bernstein, Bradley E; Sankaran, Vijay G; van Galen, Peter
The combination of single-cell transcriptomics with mitochondrial DNA variant detection can be used to establish lineage relationships in primary human cells, but current methods are not scalable to interrogate complex tissues. Here, we combine common 3′ single-cell RNA-sequencing protocols with mitochondrial transcriptome enrichment to increase coverage by more than 50-fold, enabling high-confidence mutation detection. The method successfully identifies skewed immune-cell expansions in primary human clonal hematopoiesis.
</description>
<pubDate>Thu, 24 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163633</guid>
<dc:date>2022-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>SARS-CoV-2 receptor binding domain displayed on HBsAg virus–like particles elicits protective immunity in macaques</title>
<link>https://hdl.handle.net/1721.1/163632</link>
<description>SARS-CoV-2 receptor binding domain displayed on HBsAg virus–like particles elicits protective immunity in macaques
Authorized vaccines against SARS-CoV-2 remain less available in low- and middle-income countries due to insufficient supply, high costs, and storage requirements. Global immunity could still benefit from new vaccines using widely available, safe adjuvants, such as alum and protein subunits, suited to low-cost production in existing manufacturing facilities. Here, a clinical-stage vaccine candidate comprising a SARS-CoV-2 receptor binding domain–hepatitis B surface antigen virus–like particle elicited protective immunity in cynomolgus macaques. Titers of neutralizing antibodies (&gt;10⁴) induced by this candidate were above the range of protection for other licensed vaccines in nonhuman primates. Including CpG 1018 did not significantly improve the immunological responses. Vaccinated animals challenged with SARS-CoV-2 showed reduced median viral loads in bronchoalveolar lavage (~3.4 log10) and nasal mucosa (~2.9 log10) versus sham controls. These data support the potential benefit of this design for a low-cost modular vaccine platform for SARS-CoV-2 and other variants of concern or betacoronaviruses.
</description>
<pubDate>Wed, 16 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163632</guid>
<dc:date>2022-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>Searching for exotic scalars at fusion reactors</title>
<link>https://hdl.handle.net/1721.1/163631</link>
<description>Searching for exotic scalars at fusion reactors
Baruch, Chaja; Fitzpatrick, Patrick J.; Menzo, Tony; Soreq, Yotam; Trifinopoulos, Sokratis; Zupan, Jure
Part of the energy created in deuterium-tritium fusion reactors is carried away from the plasma by a high-intensity neutron flux, which is then absorbed by the reactor’s inner walls. The neutron flux can be used to sustain the reaction by the following mechanism: the walls are coated with lithium-rich breeding blankets, in which a fraction of neutrons interacts with lithium, creating tritium, which can, in turn, be used as fuel for the main reaction. The interactions of neutrons with the materials within the breeding blanket can also result in the production of dark sector particles, feebly interacting light scalars or pseudoscalars, via nuclear transitions. We estimate the potential size of such a dark sector flux outside the reactor and consider possible detection methods at current and future thermonuclear fusion reactors. In our analysis, we take into account all other current bounds, also recasting the SNO axion bound for a CP-even scalar. We find that year-long searches at current and future reactors can set leading constraints on dark scalar- and dark pseudoscalar-nucleon couplings.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163631</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of charmed meson and antimeson production asymmetries at √s = 13.6 TeV</title>
<link>https://hdl.handle.net/1721.1/163630</link>
<description>Measurements of charmed meson and antimeson production asymmetries at √s = 13.6 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
This article presents doubly differential measurements of the asymmetries in production rates between mesons containing a charm quark and those containing an anti-charm quark in proton-proton collisions at a centre-of-mass energy of √s = 13.6 TeV, using data recorded by the LHCb experiment. The asymmetries of D0, D+ and Ds+ mesons are measured for two-dimensional intervals in transverse momentum and pseudorapidity, within the range 2.5 &lt; pT &lt; 25.0 GeV/c and 2.0 &lt; η &lt; 4.5. No significant production asymmetries are observed. Comparisons to the Pythia 8 and Herwig 7 event generators are also presented, and their agreement with the data is evaluated. These are the first measurements of production asymmetries at this centre-of-mass energy, and the first performed with the LHCb Run 3 detector.
</description>
<pubDate>Tue, 07 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163630</guid>
<dc:date>2025-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Urban Planning for Health Equity Must Employ an Intersectionality Framework</title>
<link>https://hdl.handle.net/1721.1/163629</link>
<description>Urban Planning for Health Equity Must Employ an Intersectionality Framework
Williams, Patrice C; Binet, Andrew; Alhasan, Dana M; Riley, Nyree M; Jackson, Chandra L
Urban planning for health equity should be guided by an intersectional approach. Intersectionality is an essential framework for understanding the multiple overlapping factors, such as social and economic inequalities, that produce health disparities. We offer four strategies that planning researchers and practitioners can use to develop and integrate an intersectional approach into planning for health equity: challenging implicit and explicit assumptions, building cross-sectoral coalitions united by a shared vision for social and environmental justice, applying transdisciplinary and co-designing approaches throughout the planning process, and using existing tools to evaluate the impact of programs and policies on advancing health equity.
</description>
<pubDate>Tue, 12 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163629</guid>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing driver speeding behavior when using partial-automation in real-world driving</title>
<link>https://hdl.handle.net/1721.1/163628</link>
<description>Characterizing driver speeding behavior when using partial-automation in real-world driving
Haus, Samantha H; Gershon, Pnina; Mehler, Bruce; Reimer, Bryan
Objective: Speeding is a prevalent and complex risky behavior that can be affected by many factors. Understanding how drivers speed is important for developing countermeasures, especially as new automation features emerge. The current study seeks to identify and describe types of real-world speeding behaviors with and without the use of partial-automation.&#13;
Methods: This study used a combination of supervised and unsupervised data analysis techniques to assess relevant factors in real-world speeding epochs, extracted from the MIT Advanced Vehicle Technology Naturalistic Driving Study, and classified them into distinct speeding behaviors. Speeding epochs were defined as traveling at least 5 mph over the speed limit for a minimum duration of 3 s. Vehicle speed-exceedance profiles were characterized over time using Dynamic Time Warping and included in multivariate models that evaluated the associations between different features of the speeding epochs, such as speeding duration and magnitude. Finally, the identified features were used to cluster speeding behaviors using the Gower dissimilarity measure.&#13;
Results: The analysis yielded four types of behaviors in both partially-automated and manual driving: (i) Incidental speeding (low duration, low magnitude), (ii) Moderate speeding (low duration, moderate magnitude), (iii) Elevated speeding (moderate duration, high magnitude), and (iv) Extended speeding (long duration, high magnitude). When comparing the behaviors with and without partial-automation use, both Incidental and Moderate speeding were found to have significantly longer durations with partial-automation than manual driving. Elevated speeding was found to be more prevalent and associated with higher magnitudes during manual than with partially-automated driving. Finally, although Extended speeding was more prevalent during automation use, it was associated with a lower mean and maximum speed magnitude compared to Extended speeding during manual driving.&#13;
Conclusions: This work highlights the variability in speeding behavior between and within partially-automated and manual driving. The design of systems that mitigate risky speeding behaviors should consider targeting divergent behaviors observed between manual and automated driving as a mechanism to mitigate the prevalence of the different behaviors associated with each state.
</description>
<pubDate>Tue, 12 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163628</guid>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Validation and Uncertainty Quantification of Transient Reflood Models Using COBRA-TF and Machine Learning Techniques Based on the NRC/PSU RBHT Benchmark</title>
<link>https://hdl.handle.net/1721.1/163627</link>
<description>Validation and Uncertainty Quantification of Transient Reflood Models Using COBRA-TF and Machine Learning Techniques Based on the NRC/PSU RBHT Benchmark
Jin, Yue; Bajorek, Stephen M; Cheung, Fan-Bill
The accurate prediction of the fluid flow mass and the heat transfer process as well as the system response during reflood transients has long been a critical and challenging issue for reactor system safety analyses. Accurate characterization of the flow and energy transport can also significantly facilitate the various system/component design and optimization tasks. In the current study, based on the U.S. Nuclear Regulatory Commission/Pennsylvania State University Rod Bundle Heat Transfer (RBHT) reflood experimental data, a comprehensive uncertainty analysis framework is developed using DAKOTA. The developed framework is used to perform an in-depth reflood model validation and verification for the subchannel analysis code COBRA-TF. In addition, an artificial intelligence (AI)–based machine learning (ML) model for rod cladding temperature prediction during reflood is developed and evaluated using the current framework. Key input parametric effects for reflood thermal-hydraulic prediction include the system pressure, inlet liquid temperature/enthalpy, inlet mass flow rate, and average bundle power input. The figure of merit under consideration is the peak cladding temperature variation. It is found in the current study that, while further model improvement is needed, COBRA-TF can predict the correct parametric trends when compared with the RBHT data. On the other hand, it is challenging for the pure AI-based ML models to correctly reflect the parametric trends. Suggestions for future ML model development are provided at the end.
</description>
<pubDate>Thu, 28 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163627</guid>
<dc:date>2022-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Remote language revitalisation efforts during COVID-19</title>
<link>https://hdl.handle.net/1721.1/163626</link>
<description>Remote language revitalisation efforts during COVID-19
Wiley-Camacho, Grahm; Hillaire, Garron; Buttimer, Christopher J; Colwell, Richard
As schools shift to online instruction during the COVID-19 pandemic, it is important to support disenfranchised populations and keep issues of equity at the centre of our response. In this study, the authors focus on supporting one of the few urban-based Indigenous language schools in the United States because language revitalisation is critical for Native American communities. The authors explore the extent to which video conferencing and flipped classrooms support the development of a community of speakers. The study focuses on a single classroom of 16 students in first through third grade. The authors use a digital decolonisation framework focused on empowering local communities in conjunction with design-based research methodology to explore contextualised remote instruction solutions. They report on benefits for the development of a community of speakers from remote instruction that come with costs in reduced efficacy of language learning. Finally, they distil those results into preliminary design principles.
</description>
<pubDate>Mon, 13 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163626</guid>
<dc:date>2022-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Cohomogeneity Two Ricci Solitons with Sub-Euclidean Volume</title>
<link>https://hdl.handle.net/1721.1/163625</link>
<description>Cohomogeneity Two Ricci Solitons with Sub-Euclidean Volume
Firester, Benjy; Tsiamis, Raphael
We introduce new families of four-dimensional Ricci solitons of cohomogeneity two with volume collapsing ends. In a local presentation of the metric conformal to a product, we reduce the soliton equation to a degenerate Monge-Ampère equation for the conformal factor coupled with ODEs. We obtain explicit complete expanding solitons as well as abstract existence results for shrinking and steady solitons with boundary. These families of Ricci solitons specialize to classical examples of Einstein and soliton metrics. We also classify local solutions of this Monge-Ampère equation to prove rigidity for these solitons.
</description>
<pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163625</guid>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>How Many Sexes? How Many Genders?</title>
<link>https://hdl.handle.net/1721.1/163624</link>
<description>How Many Sexes? How Many Genders?
Byrne, Alex
The British philosopher and public intellectual C. E. M. Joad was a regular panelist on the BBC radio show The Brains Trust during and after the Second World War. He often began an answer to listeners’ questions with his catchphrase “It all depends what you mean by…,” which caught on throughout the country (Ayto &amp; Crofton, 2011). If any question deserves Joad’s catchphrase, it is “How many genders?”
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163624</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Is There Super-Normal Profit in Real Estate Development?</title>
<link>https://hdl.handle.net/1721.1/163623</link>
<description>Is There Super-Normal Profit in Real Estate Development?
Geltner, David; Kumar, Anil; Van de Minne, Alex M
This paper explores the question of whether real estate development (RED) projects systematically present positive net present value (NPV) and therefore provide super-normal profit. Such projects are the products of a business operation that governs the exercise of the real call option on development that is represented by developable land. We present a framework for considering super-normal profit in the RED industry, and then in light of that framework we examine RED projects produced by publicly-traded equity real estate investment trusts (REITs). We find strong evidence of positive correlation between REITs’ Tobin’s Q ratios, indicative of positive NPV, and the ratio of development assets to total assets in the firm, controlling for other factors. The nature of the firm’s Tobin’s Q metric is such that the implied added firm value is net of land cost and net of overhead and search costs associated with the RED business operation. While our findings do not prove a direction of causality between REITs’ RED activity and positive NPV, the robust positive correlation controlling for other factors raises interesting implications which are discussed in the paper.
</description>
<pubDate>Mon, 11 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163623</guid>
<dc:date>2022-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Countervailing Effects of Extreme Maximum and Minimum Temperature Days on Conflict in Mainland Southeast Asia</title>
<link>https://hdl.handle.net/1721.1/163622</link>
<description>Countervailing Effects of Extreme Maximum and Minimum Temperature Days on Conflict in Mainland Southeast Asia
Gasser, André Tashi; Lanz, Bruno
We exploit 0.5° × 0.5° raster data to document how exceedances of the local 90th percentile thresholds for daily maximum and minimum temperatures affect conflict in mainland Southeast Asia. We show that conflict incidence increases with extreme high maximum temperature days and decreases with extreme high minimum temperature days. This implies that failing to control for extreme minimums understates the effects of extreme maximums. Moreover, as the frequency of extreme maximums and minimums is expected to increase together with average temperatures, the countervailing effects at both tails of the temperature distribution offset one another in mean-temperature regressions, helping to explain earlier inconclusive findings for the region. We also show that the effects of extreme maximums and minimums differ by conflict type, actors involved and affected populations. Thus, even in the absence of an aggregate mean-temperature effect, a rising frequency of extreme temperature days may generate complex distributional conflict incidence.
</description>
<pubDate>Mon, 03 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163622</guid>
<dc:date>2025-11-03T00:00:00Z</dc:date>
</item>
<item>
<title>An Optimization-Based Construction Procedure for Function Space-Based Summation-by-Parts Operators on Arbitrary Grids</title>
<link>https://hdl.handle.net/1721.1/163621</link>
<description>An Optimization-Based Construction Procedure for Function Space-Based Summation-by-Parts Operators on Arbitrary Grids
Glaubitz, Jan; Nordström, Jan; Öffner, Philipp
We introduce a novel construction procedure for one-dimensional function space summation-by-parts (FSBP) operators. Existing construction procedures for FSBP operators of the form D = P⁻¹Q proceed as follows: Given a boundary operator B, the norm matrix P is first determined and then in a second step the complementary matrix Q is calculated to finally get the FSBP operator D. In contrast, the approach proposed here determines the norm and complementary matrices, P and Q, simultaneously by solving an optimization problem. The proposed construction procedure applies to classical summation-by-parts (SBP) operators based on polynomial approximation and the broader class of FSBP operators. According to our experiments, the presented approach yields a numerically stable construction procedure and FSBP operators with higher accuracy for diagonal norm difference operators at the boundaries than the traditional approach. Through numerical simulations, we highlight the advantages of our proposed technique.
</description>
<pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163621</guid>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations</title>
<link>https://hdl.handle.net/1721.1/163620</link>
<description>Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations
Lowe, Matthew X; Mohsenzadeh, Yalda; Lahner, Benjamin; Charest, Ian; Oliva, Aude; Teng, Santani
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
</description>
<pubDate>Tue, 21 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163620</guid>
<dc:date>2022-06-21T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid and automated alloy design with graph neural network-powered large language model-driven multi-agent AI</title>
<link>https://hdl.handle.net/1721.1/163619</link>
<description>Rapid and automated alloy design with graph neural network-powered large language model-driven multi-agent AI
Ghafarollahi, Alireza; Buehler, Markus J.
A multi-agent artificial intelligence (AI) model is developed to automate the discovery of new metallic alloys, integrating multimodal data and external knowledge, including insights from physics via atomistic simulations. The system consists of (a) large language models (LLMs) for tasks such as reasoning and planning, (b) AI agents with distinct roles collaborating dynamically, and (c) a newly developed graph neural network (GNN) model for rapid retrieval of physical properties. We chose the ternary NbMoTa body-centered-cubic alloy as our model system and developed the GNN to predict two fundamental materials properties: the Peierls barrier and the solute/screw dislocation interaction energy. Our GNN model efficiently predicts these properties, reducing reliance on costly brute-force calculations and alleviating the computational demands on the multi-agent system. By combining the predictive capabilities of GNNs with the collaborative intelligence of LLM-driven reasoning agents, the system autonomously explores vast alloy design spaces, identifies trends in atomic-scale properties, and predicts macroscale mechanical strength, as demonstrated by several computational experiments. This synergistic approach accelerates the discovery of advanced alloys and holds promise for broader applications in other complex systems, marking a step forward in automated materials discovery and design.&#13;
Impact statement: Traditional deep learning models, such as graph neural networks and convolutional neural networks, operate within the confines of their training data sets, making single-step inferences for regression or classification. Our work introduces a multi-agent strategy that transcends these limitations by integrating deep learning with reasoning and decision-making capabilities. This intelligent system actively interprets results, determines subsequent actions, and iteratively refines predictions, accelerating the materials design process. We demonstrate its effectiveness in exploring the vast compositional space of a ternary alloy, where the model dynamically solicits data, analyzes trends, generates visualizations, and derives insights into materials behavior. By enabling accurate predictions of key alloy characteristics, our approach advances the discovery of novel metallic systems and underscores the critical role of solid-solution alloying. More broadly, it represents a major step toward integrating artificial intelligence with scientific reasoning, moving closer to artificial general intelligence in engineering. This paradigm shift has profound implications for materials science, enabling more efficient, autonomous, and intelligent exploration of complex materials spaces.
</description>
<pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163619</guid>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Latent Space Alignment Using Adversarially Guided Self-Play</title>
<link>https://hdl.handle.net/1721.1/163618</link>
<description>Latent Space Alignment Using Adversarially Guided Self-Play
Tucker, Mycal; Zhou, Yilun; Shah, Julie A
We envision a world in which robots serve as capable partners in heterogeneous teams composed of other robots or humans. A crucial step towards such a world is enabling robots to learn to use the same representations as their partners; with a shared representation scheme, information may be passed among teammates. We define the problem of learning a fixed partner’s representation scheme as that of latent space alignment and propose metrics for evaluating the quality of alignment. While techniques from prior art in other fields may be applied to the latent space alignment problem, they often require interaction with partners during training time or large amounts of training data. We developed a technique, Adversarially Guided Self-Play (ASP), that trains agents to solve the latent space alignment problem with little training data and no access to their pre-trained partners. Simulation results confirmed that, despite using less training data, agents trained by ASP aligned better with other agents than agents trained by other techniques. Subsequent human-participant studies involving hundreds of Amazon Mechanical Turk workers showed how laypeople understood our machines enough to perform well on team tasks and anticipate their machine partner’s successes or failures.
</description>
<pubDate>Fri, 26 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163618</guid>
<dc:date>2022-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Sensing Lights: The Challenges of Transforming Street Lights into an Urban Intelligence Platform</title>
<link>https://hdl.handle.net/1721.1/163617</link>
<description>Sensing Lights: The Challenges of Transforming Street Lights into an Urban Intelligence Platform
Alvarez, Ricardo; Duarte, Fabio; Frenchman, Dennis; Ratti, Carlo
The technological transformation behind intelligent infrastructure systems requires institutional and stakeholder realignment in their development. In this article, we evaluate the challenges for the production of smart infrastructure through an in-depth analysis of the development of smart street lighting strategies. We conduct surveys and semi-structured interviews with key stakeholders and industry leaders in public illumination, as well as with public officials from cities on three continents, to understand the challenges they face and the strategies being developed to meet those challenges, and to reflect on the lessons provided for the design, creation, and operation of public smart infrastructure systems. We find three key barriers: first, differences in vision that reflect a lack of fit between operators of the current infrastructure and the new possibilities afforded by digital technologies; second, a lack of policies that would help facilitate the adoption of these new technologies, particularly with regard to privacy and data operationalization; and third, difficulties in public engagement. These barriers to innovation hinder the capacity of cities to maximize the possibilities as well as the social value of intelligent street lights as a future-proof platform for urban knowledge and urban applications.
</description>
<pubDate>Mon, 22 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163617</guid>
<dc:date>2022-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Intratumorally anchored cytokine therapy</title>
<link>https://hdl.handle.net/1721.1/163616</link>
<description>Intratumorally anchored cytokine therapy
Wittrup, K Dane; Kaufman, Howard L; Schmidt, Michael M; Irvine, Darrell J
INTRODUCTION: On-target, off-tumor toxicity severely limits systemic dosing of cytokines and agonist antibodies for cancer. Intratumoral administration is increasingly being explored to mitigate this problem. Full exploitation of this mode of administration must include a mechanism for sustained retention of the drug; otherwise, rapid diffusion out of the tumor eliminates any advantage.&#13;
&#13;
AREAS COVERED: We focus here on strategies for anchoring immune agonists in accessible formats. Such anchoring may utilize extracellular matrix components, cell surface receptor targets, or exogenously administered particulate materials. Promising alternative strategies not reviewed here include slow release from the interior of a material depot, expression following local transfection, and conditional proteolytic activation of masked molecules.&#13;
&#13;
EXPERT OPINION: An effective mechanism for tissue retention is a critical component of intratumorally anchored cytokine therapy, as leakage leads to decreased tumor drug exposure and increased systemic toxicity. Matching variable drug release kinetics with receptor-mediated cellular uptake is an intrinsic requirement for the alternative strategies mentioned above. Bioavailability of an anchored form of the administered drug is key to obviating this balancing act.
</description>
<pubDate>Thu, 02 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163616</guid>
<dc:date>2022-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>The Generative Dialogue Framework and the Pursuit of Better Listening by Journalists: A Design-Centered Approach for More Constructive Conversations with Audiences</title>
<link>https://hdl.handle.net/1721.1/163615</link>
<description>The Generative Dialogue Framework and the Pursuit of Better Listening by Journalists: A Design-Centered Approach for More Constructive Conversations with Audiences
Dimitrakopoulou, Dimitra; Lewis, Seth C
This article introduces the Generative Dialogue Framework (GDF) and explores its potential as a pedagogical intervention, one that could help reimagine the future of engaged journalism by bringing design-thinking practices, creativity, and deep-listening modalities into play. The framework is developed through design thinking and builds around principles from the field of design. It uses virtual meeting technologies to organize small-group conversations, allows for creative and playful activities to help people share stories and feelings, and aims to create an ambient atmosphere of mutual understanding and co-creative problem-solving. With this article, we aspire to initiate a conversation around the value of “pollinating” journalism studies with concepts and principles from design thinking and facilitation so that journalists could become empowered to connect with their audiences with greater empathy and compassion and thereby surface diverse and rich lived experiences using more active and reflective listening skills. To test the framework’s potential for enhancing engaged journalism curricula, we collaborated with 17 journalism students at a U.S. university in a series of activities, from initial training on the platform to hosting a conversation using the GDF to ultimately producing a news story based on the insights acquired through this design-centered approach.
</description>
<pubDate>Wed, 18 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163615</guid>
<dc:date>2022-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Programmable Continuous Electrowetting of Liquid Metal for Reconfigurable Electronics</title>
<link>https://hdl.handle.net/1721.1/163614</link>
<description>Programmable Continuous Electrowetting of Liquid Metal for Reconfigurable Electronics
Babatain, Wedyan; Park, Christine; Harraz, Deiaa M; Kilic Afsar, Ozgun; Honnet, Cedric; Lov, Sarah; Labrune, Jean‐Baptiste; Dickey, Michael D; Ishii, Hiroshi
Dynamic manipulation of the shape and position of liquid metal (LM), a deformable electrical conductor, presents new opportunities for reconfigurable electronics, fluidic logic, and soft-actuation systems. This study combines continuous electrowetting (CEW) with electrochemical modulation of the interface of LM in electrolyte to achieve tunable and directional LM manipulation in 2D spaces. A key finding is that under a fixed external electric field, the LM moves in a direction that depends on its electrochemical potential. The LM potential is controlled using a substrate featuring patterns of laser-induced graphene (LIG), since LIG is non-wetting to LM and electrically conductive. This strategy enables a range of functionalities, including “valves” for on-demand LM control, LM droplet sorting, feedback sensing, and fluidic logic gates. The strategy can also control the motion of LM droplets across 2D spaces. Finally, it is utilized within a reconfigurable circuit platform where the LM functions as a dynamic interconnect for sequential activation, parallel switching, and self-healing circuits. By coupling the electrically driven motion of LM with the versatility of LIG patterning, this work establishes a general framework for reconfigurable electronics, programmable fluidic systems, and adaptive systems.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163614</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of CEST MRI Reporter Protein Design Using Cation‐Pi Networks</title>
<link>https://hdl.handle.net/1721.1/163613</link>
<description>Optimization of CEST MRI Reporter Protein Design Using Cation‐Pi Networks
Korenchan, David E.; French, Ethan J.; Runco, Emerenziana; Dhakan, Chetan B.; Yan, Jinwu; Nakashima, Hiroshi; McMahon, Michael T.; Gilad, Assaf A.; Farrar, Christian T.
Nucleic acid-based therapeutics, such as oncolytic virotherapy or gene therapy, would benefit greatly from a reporter gene that induces endogenous production of a protein biomarker to noninvasively track the delivery, persistence, and spread with imaging. Several chemical exchange saturation transfer (CEST) reporter proteins detectable by magnetic resonance imaging (MRI) have been demonstrated to have high sensitivity. However, to date none can provide strong CEST contrast at a distinct resonance from that of endogenous proteins, limiting their specificity. We investigated proteins and peptides containing tyrosine (Tyr), tryptophan (Trp), and lysine (Lys) residues that demonstrate CEST contrast shifted far downfield (4–10 ppm) from water. Although Tyr, Trp, and Lys exchangeable protons are typically not detectable under physiological conditions, those in our tested molecules are, having exchange rates of 400–2500 s−1. The large chemical shift dispersion and rapid exchange rates are attributed to unique hydrogen bonding and cation-π network interactions. These discoveries set the stage for designing a stable reporter protein with high detection specificity and sensitivity that can facilitate the in vivo monitoring of viral and gene therapies using MRI.
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163613</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>3D‐Printed Mixed Ionic‐Electronic Conductive Polymer Composites for Long‐Term Bioelectronic Sensing</title>
<link>https://hdl.handle.net/1721.1/163612</link>
<description>3D‐Printed Mixed Ionic‐Electronic Conductive Polymer Composites for Long‐Term Bioelectronic Sensing
Bagatella, Simone; Roh, Heejung; Cavallaro, Marco; Suriano, Raffaella; Levi, Marinella; Gumyusenge, Aristide
Reliable, long-term monitoring of health data is becoming increasingly essential in modern healthcare. While computational and machine learning capabilities continue to advance, the lack of lightweight, conformable, and customizable hardware remains a key limitation. In the context of heart health, traditional electrocardiogram (ECG) electrodes are rigid and often uncomfortable for continuous wear. Existing soft electrodes tend to be either cost-prohibitive or unreliable over extended use. In this work, all-polymer, 3D-printed, highly stable, and conformable ECG patches are developed for long-term signal acquisition. Through material optimization, composite materials with electrical conductivity up to 1.7 S cm−1 are developed, maintaining over 85% of their conductivity after 60 days of exposure to open air. These materials also exhibit remarkable stretchability (strain at break up to 253%) and high mechanical strength (tensile strength of 25 MPa). The formulated inks are fully compatible with 3D material extrusion techniques, significantly reducing manufacturing costs. The printed electrodes are flexible, stretchable, and capable of recording high-quality ECG signals, performing comparably to state-of-the-art metal electrodes, even after more than a month of use and storage in open air.
</description>
<pubDate>Sun, 07 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163612</guid>
<dc:date>2025-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>A causal inference framework to compare the effectiveness of life-sustaining ICU therapies—using the example of cancer patients with sepsis</title>
<link>https://hdl.handle.net/1721.1/163611</link>
<description>A causal inference framework to compare the effectiveness of life-sustaining ICU therapies—using the example of cancer patients with sepsis
Matos, João; Struja, Tristan; Woite, Naira Link; Restrepo, David; Waschka, Andre Kurepa; Celi, Leo A; Sauer, Christopher M
The rise in cancer patients could lead to an increase in intensive care unit (ICU) admissions. We explored differences in treatment practices and outcomes of invasive therapies between patients with sepsis with and without cancer. Adults from 2008 to 2019 admitted to the ICU for sepsis were extracted from the MIMIC-IV and eICU-CRD databases. Using Extreme Gradient Boosting, we estimated the odds for invasive mechanical ventilation (IMV) or vasopressors. Targeted maximum likelihood estimation (TMLE) models estimated treatment effects of IMV and vasopressors on in-hospital mortality and 28 hospital-free days. 58,988 adult septic patients were included, of whom 6145 had cancer. In-hospital mortality was higher for cancer patients (30.3% vs. 16.1%). Patients with cancer had lower odds of receiving IMV (aOR [95%CI], 0.94 [0.90–0.97]), an effect most pronounced for hematologic patients (aOR 0.89 [0.84–0.93]). Odds for vasopressors were also lower for hematologic patients (aOR 0.89 [0.84–0.94]). TMLE models found IMV to be overall associated with higher in-hospital mortality for solid and hematological patients (ATE 3% [1%–5%], 6% [3%–9%], respectively), while vasopressors were associated with higher in-hospital mortality for patients with solid and metastatic cancer (ATE 6% [4%–8%], 3% [1%–6%], respectively). We utilized US-wide ICU data to estimate a relationship between mortality and the use of common therapies. With the exception of hematologic patients being less likely to receive IMV, we did not find differential treatment patterns. We did not demonstrate an average survival benefit for therapies, underscoring the need for a more granular analysis to identify subgroups who benefit from these interventions.
</description>
<pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163611</guid>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerated Navigator for Rapid ∆B0 Field Mapping for Real-Time Shimming and Motion Correction of Human Brain MRI</title>
<link>https://hdl.handle.net/1721.1/163610</link>
<description>Accelerated Navigator for Rapid ∆B0 Field Mapping for Real-Time Shimming and Motion Correction of Human Brain MRI
Jayadev, Nutandev Bikkamane; Stockmann, Jason; Frost, Robert; Arango, Nicolas; Chang, Yulin; van der Kouwe, André; Andronesi, Ovidiu C.
∆B0 shim optimization performed at the beginning of an MR scan is unable to correct for ∆B0 field inhomogeneities caused by patient motion or hardware instability during scans. Navigator-based methods have been demonstrated previously to be effective for motion and shim correction. The purpose of this work was to accelerate volumetric navigators to allow fast acquisition of the parent navigated sequence with short real-time feedback time and high spatial resolution of the ∆B0 field mapping. A GRAPPA-accelerated 3D dual-echo EPI vNav was implemented on a 3 T Prisma MRI scanner. Testing was performed on an anthropomorphic head phantom and 11 human participants. vNav-derived ∆B0 field maps with various spatial resolutions were compared to Cartesian-encoded gold-standard 3D gradient-echo ∆B0 field mapping. ∆B0 shimming was evaluated for the scanner's spherical harmonics shims and a custom-made AC/DC RF-receive/∆B0-shim array. The performance of dual-echo and single-echo accelerated navigators was compared for tracking and updating ∆B0 field maps during motion. Real-time motion and shim corrections for 2D MRI and 3D MRSI sequences were assessed in vivo with controlled head movement. Up to 8-fold acceleration of volumetric navigators (vNavs) significantly reduced geometric distortions and signal dropouts near air-tissue interfaces and metal implants. Acceleration allowed a flexible tradeoff between spatial resolution (2.5–7.5 mm) and acquisition time (242–1302 ms). Notably, accelerated high-resolution (5 mm) vNav was faster (378 ms) than unaccelerated low-resolution (7.5 mm) vNav (700 ms) and showed better agreement with 3D-GRE ∆B0 field mapping with 5.5 Hz RMSE, 1 Hz bias, and [−10%, +10%] confidence interval. Accelerated vNavs improved 3D MRSI and 2D MRI in real-time motion and shim correction applications. Advanced shimming with spherical harmonic and shim array showed superior ΔB0 correction, especially with joint shim optimization. 
GRAPPA-accelerated vNavs provide fast, robust, and high-quality ∆B0 field mapping and shimming over the whole brain. The accelerated vNavs enable rapid correction of ∆B0 field inhomogeneities and faster acquisition of the navigated parent sequence. This methodology can be used for real-time motion and shim correction to enhance data quality in various MRI applications.
</description>
<pubDate>Thu, 04 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163610</guid>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Wirelessly Powered Ingestible Capsule for Optical Stimulation of the Gastrointestinal Tract in Rodents</title>
<link>https://hdl.handle.net/1721.1/163609</link>
<description>Wirelessly Powered Ingestible Capsule for Optical Stimulation of the Gastrointestinal Tract in Rodents
Elsherif, Mohamed; El‐Din, Rawan Badr; Makhambetova, Zhansaya; Naser, Heba; Boitet, Maylis; Singh, Rahul; Oh, Keonghwan; Sukesan, Revathi; Ha, Sohmyung; Ramadi, Khalil B.
Optogenetics enables cell-specific activation and inhibition of neurons. The gut contains intricate networks of enteric and central neurons, but in vivo investigation is difficult due to its motile and harsh environment. This work reports an ingestible electronic capsule for non-invasive optical gut stimulation (ICOPS) in rodents. ICOPS is wirelessly powered via a transmitter coil, delivered by oral gavage, and excreted safely without obstruction within 20 h. The device integrates a micro-light-emitting diode (µLED) operating at 470 nm—a standard wavelength for channelrhodopsin-2 activation—together with a 460-turn ferrite-core coil and a shunt capacitor. Optimized circuits enable efficient power transfer at low frequencies (45–140 kHz), addressing weak coupling and misalignment. ICOPS operates effectively up to 14 cm longitudinally, 9 cm laterally, and at 75° rotation relative to the magnetic field. Specific absorption rate (SAR) analysis confirms exposure within safe occupational limits at 6 A and 45/63 kHz. In vivo validation using an in vivo imaging system (IVIS) and micro-computed tomography (µCT) confirms functionality and safety. ICOPS is the first rodent-scale ingestible capsule fabricated entirely in-house using 3D printing, without the need for cleanroom facilities, providing a compact, scalable platform for non-invasive optogenetic modulation of enteric circuits.
</description>
<pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163609</guid>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>RBD-VLP Vaccines Adjuvanted with Alum or SWE Protect K18-hACE2 Mice against SARS-CoV-2 VOC Challenge</title>
<link>https://hdl.handle.net/1721.1/163608</link>
<description>RBD-VLP Vaccines Adjuvanted with Alum or SWE Protect K18-hACE2 Mice against SARS-CoV-2 VOC Challenge
Wong, Ting Y; Russ, Brynnan P; Lee, Katherine S; Miller, Olivia A; Kang, Jason; Cooper, Melissa; Winters, Michael T; Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Johnston, Ryan S; Rader, Nathaniel A; Wong, Zeriel Y; Cyphert, Holly A; Martinez, Ivan; Shaligram, Umesh; Batwal, Saurabh; Lothe, Rakesh; Chandrasekaran, Rahul; Nagar, Gaurav; Rajurkar, Meghraj; Rao, Harish; Bevere, Justin R; Barbier, Mariette; Love, J Christopher; Damron, F Heath
The ongoing COVID-19 pandemic has contributed largely to the global vaccine disparity. Development of protein subunit vaccines can help alleviate shortages of COVID-19 vaccines delivered to low-income countries. Here, we evaluated the efficacy of a three-dose virus-like particle (VLP) vaccine composed of hepatitis B surface antigen (HBsAg) decorated with the receptor binding domain (RBD) from the Wuhan or Beta SARS-CoV-2 strain adjuvanted with either aluminum hydroxide (alum) or squalene in water emulsion (SWE). RBD HBsAg vaccines were compared to the standard two doses of Pfizer mRNA vaccine. Alum-adjuvanted vaccines were composed of either HBsAg conjugated with Beta RBD alone (b RBD HBsAg1Al) or a combination of both Beta RBD HBsAg and Wuhan RBD HBsAg (b/Wu RBD HBsAg1Al). RBD vaccines adjuvanted with SWE were formulated with Beta RBD HBsAg (b RBD HBsAg1SWE) or without HBsAg (b RBD1SWE). Both alum-adjuvanted RBD HBsAg vaccines generated functional RBD IgG against multiple SARS-CoV-2 variants of concern (VOC), decreased viral RNA burden, and lowered inflammation in the lung against Alpha or Beta challenge in K18-hACE2 mice. However, only b/Wu RBD HBsAg1Al was able to afford 100% survival to mice challenged with Alpha or Beta VOC. Furthermore, mice immunized with b RBD HBsAg1SWE induced cross-reactive neutralizing antibodies against major VOC of SARS-CoV-2, lowered viral RNA burden in the lung and brain, and protected mice from Alpha or Beta challenge similarly to mice immunized with Pfizer mRNA. However, RBD1SWE immunization failed to protect mice from VOC challenge. Our findings demonstrate that RBD HBsAg VLP vaccines provided similar protection profiles to the approved Pfizer mRNA vaccines used worldwide and may offer protection against SARS-CoV-2 VOC.
</description>
<pubDate>Mon, 15 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163608</guid>
<dc:date>2022-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>On Monoid Algebras Having Every Nonempty Subset of N≥2 as a Length Set</title>
<link>https://hdl.handle.net/1721.1/163607</link>
<description>On Monoid Algebras Having Every Nonempty Subset of N≥2 as a Length Set
Geroldinger, Alfred; Gotti, Felix
We construct monoid algebras that satisfy the ascending chain condition on principal ideals and have the property that every nonempty subset of N≥2 occurs as a length set.
</description>
<pubDate>Sat, 12 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163607</guid>
<dc:date>2025-04-12T00:00:00Z</dc:date>
</item>
<item>
<title>The Psyche Multispectral Imager Investigation: Characterizing the Geology, Topography, and Multispectral Properties of a Metal-Rich World</title>
<link>https://hdl.handle.net/1721.1/163606</link>
<description>The Psyche Multispectral Imager Investigation: Characterizing the Geology, Topography, and Multispectral Properties of a Metal-Rich World
Bell, J. F.; Ravine, M. A.; Caplinger, M. A.; Schaffner, J. A.; Brylow, S. M.; Clark, M. J.; Peckham, D. A.; Otjens, P. T.; Price, G. J.; Rowell, T.; Ravine, J. W.; Laramee, J. D.; Juergens, R. C.; Morgan, W.; Parker, A. G.
The Psyche Multispectral Imager (“the Imager”) is a payload system designed to directly achieve or to indirectly enable the key scientific goals and optical navigation requirements of NASA’s Psyche mission, which will conduct the first up-close orbital investigation of the metal-rich Main Belt asteroid (16) Psyche. The Imager consists of a pair of block redundant cameras and electronics that are mounted inside the thermally controlled spacecraft body, with a view out the spacecraft −X panel that will be nadir-pointed during nominal asteroid orbital mapping operations. The two identical Camera Heads are connected to a separate Digital Electronics Assembly (DEA) box that interfaces to the spacecraft avionics and that provides power, commanding, data processing, and onboard image storage. The Imager system shares significant heritage with imaging instruments flown on the Mars Climate Orbiter, the Mars Science Laboratory and Mars 2020 rovers, and Juno. Each camera consists of a 1600 × 1200 photosensitive pixel charge-coupled device (CCD) detector and its associated electronics, a 9-position filter wheel assembly, a compact catadioptric f/2.9 telescope with a fixed focal length of 148 mm, and a sunshade to minimize stray and scattered light. The Imager CCD, filters, and optics enable broadband polychromatic (∼540 ± 250 nm) imaging plus narrowband imaging in 7 colors centered from 439 to 1015 nm. An additional neutral density filter enables protection of the CCD from direct solar illumination. Each camera has a field of view of 4.6° × 3.4° and an instantaneous field of view of 50 μrad/pixel that enables imaging of the asteroid at scales ranging from ∼35 m/pix from 700 km altitude to ∼4 m/pix at 75 km altitude. 
The primary camera (“Imager A”) is pointed along the spacecraft −X axis, and the backup camera (“Imager B”) is toed-out by 3.7° to potentially enable greater surface area coverage per unit time if both Imagers are operated simultaneously during some mission phases. Stereoscopic mapping is performed by observing the same surface regions with either camera over a range of off-nadir pointing angles.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163606</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanobiochemical finite element model to analyze impact-loading-induced cell damage, subsequent proteoglycan loss, and anti-oxidative treatment effects in articular cartilage</title>
<link>https://hdl.handle.net/1721.1/163605</link>
<description>Mechanobiochemical finite element model to analyze impact-loading-induced cell damage, subsequent proteoglycan loss, and anti-oxidative treatment effects in articular cartilage
Kosonen, Joonas P.; Eskelinen, Atte S. A.; Orozco, Gustavo A.; Coleman, Mitchell C.; Goetz, Jessica E.; Anderson, Donald D.; Grodzinsky, Alan J.; Tanska, Petri; Korhonen, Rami K.
Joint trauma often leads to articular cartilage degeneration and post-traumatic osteoarthritis (PTOA). Pivotal determinants include trauma-induced excessive tissue strains that damage cartilage cells. As a downstream effect, these damaged cells can trigger cartilage degeneration via oxidative stress, cell death, and proteolytic tissue degeneration. N-acetylcysteine (NAC) has emerged as an antioxidant capable of inhibiting oxidative stress, cell death, and cartilage degeneration post-impact. However, the temporal effects of NAC are not fully understood and remain difficult to assess solely by physical experiments. Thus, we developed a computational finite element analysis framework to simulate a drop-tower impact of cartilage in Abaqus, and subsequent oxidative stress-related cell damage, and NAC treatment upon cartilage proteoglycan content in Comsol Multiphysics, based on prior ex vivo experiments. Model results provide evidence that immediate NAC treatment can reduce proteoglycan loss by mitigating oxidative stress, cell death (improved proteoglycan biosynthesis), and enzymatic proteoglycan depletion. Our simulations also indicate that delayed NAC treatment may not inhibit cartilage proteoglycan loss despite reduced cell death after impact. These results enhance understanding of the temporal effects of impact-related cell damage and treatment that are critical for the development of effective treatments for PTOA. In the future, our modeling framework could increase understanding of time-dependent mechanisms of oxidative stress and downstream effects in injured cartilage and aid in developing better treatments to mitigate PTOA progression.
</description>
<pubDate>Sat, 10 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163605</guid>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>The three-point energy correlator in the coplanar limit</title>
<link>https://hdl.handle.net/1721.1/163604</link>
<description>The three-point energy correlator in the coplanar limit
Gao, Anjie; Yang, Tong-Zhi; Zhang, Xiaoyuan
Energy correlators are a class of observables that measure how energy is distributed across multiple detectors as a function of the angles between pairs of detectors. In this paper, we study the three-point energy correlator (EEEC) at lepton colliders in the three-particle near-to-plane (coplanar) limit. The leading-power contribution in this limit is governed by the three-jet (trijet) configuration. We introduce a new approach by projecting the EEEC onto the volume of the parallelepiped formed by the unit vectors aligned with three detected final-state particles. Analogous to the back-to-back limit of the two-point energy correlator probing the dijet configuration, the small-volume limit of the EEEC probes the trijet configuration. We derive a transverse momentum dependent (TMD) based factorization theorem that captures the soft and collinear logarithms in the coplanar limit, which enables us to achieve the next-to-next-to-next-to-leading logarithm (N3LL) resummation. To our knowledge, this is the first N3LL result for a trijet event shape. Additionally, we demonstrate that a similar factorization theorem can be applied to the fully differential EEEC in the three-particle coplanar limit, which provides a clean environment for studying different coplanar trijet shapes.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163604</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Reduction of Plane Quartics and Cayley Octads</title>
<link>https://hdl.handle.net/1721.1/163603</link>
<description>Reduction of Plane Quartics and Cayley Octads
van Bommel, Raymond; Docking, Jordan; Dokchitser, Vladimir; Lercier, Reynald; Lorenzo García, Elisa
We give a conjectural characterisation of the stable reduction of plane quartics over local fields in terms of their Cayley octads. This results in p-adic criteria that efficiently give the stable reduction type amongst the 42 possible types, and whether the reduction is hyperelliptic or not. These criteria are in the vein of the machinery of “cluster pictures” for hyperelliptic curves. We also construct explicit families of quartic curves that realise all possible stable types, against which we test these criteria. We give numerical examples that illustrate how to use these criteria in practice.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163603</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>How Exceptional Is the Ear?</title>
<link>https://hdl.handle.net/1721.1/163602</link>
<description>How Exceptional Is the Ear?
Bergevin, Christopher; Freeman, Dennis M.; Coffin, Allison
Studies of hearing often conclude that the ear is “remarkable” or that its performance is “exceptional.” Some common examples include the following: ▹  the ears of mammals are encased in the hardest bone in the body; ▹  the ear contains the most vascularized tissue in the body; ▹  the ear has the highest resting potential in the body; ▹  ears have a unique “fingerprint”; ▹  the ear can detect signals below the thermal noise floor; and ▹  the ear is highly nonlinear (or highly linear, depending upon whom you ask). Some claims hold up to further scrutiny, while others do not. Additionally, several claims hold for animals in one taxon, while others are shared across taxa. Most frequently, our sense of wonder results from the differences between ears as products of natural selection (over eons) and artificial systems as products of engineering design. Our goal in analyzing claims of remarkable or exceptional performance is to deepen our appreciation of these differences.
</description>
<pubDate>Mon, 12 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163602</guid>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Somato‐Cognitive Action Network in Focal Dystonia</title>
<link>https://hdl.handle.net/1721.1/163601</link>
<description>Somato‐Cognitive Action Network in Focal Dystonia
Wang, Yuchao; Huynh, Baothy; Ren, Jianxun; Chen, Mo; Zhang, Wei; Hu, Dan; Li, Shasha; Liu, Hesheng; Kimberley, Teresa J.
Background: The central pathology causing idiopathic focal dystonia remains unclear. The recently identified somato-cognitive action network (SCAN) has been implicated.
Objective: We tested whether the effector-agnostic SCAN may constitute a central pathology shared across dystonia subtypes, whereas the effector-specific regions in the primary sensorimotor cortex may show distinct functional changes specific to the dystonic body part.
Methods: We collected functional magnetic resonance imaging (MRI) from patients with focal dystonia (laryngeal dystonia [LD], N = 24; focal hand dystonia [FHD], N = 18) and healthy control participants (N = 21). Regions of interest were selected a priori within the basal ganglia-thalamo-cortical and cerebello-thalamo-cortical sensorimotor pathways. We investigated dystonia-dependent resting-state connectivity changes: between SCAN and related cortical regions, between cortical and noncortical regions, and among noncortical regions. Cortical network boundaries were individualized based on resting-state data. Separately, individualized hand and mouth/larynx regions were also generated from task-based MRI (finger-tapping and phonation, respectively) for comparison.
Results: Both focal dystonia subtypes showed significant functional changes (P = 0.048 for LD, P = 0.017 for FHD) compared to controls, driven by SCAN's higher functional connectivity to the task-based mouth/larynx region and concomitantly lower connectivity to the cingulo-opercular network. No significant subcortical or cerebellar changes were observed when LD and FHD were modeled as independent groups. However, exploratory analysis combining LD and FHD suggested a dystonia-dependent asynchronization between SCAN and the sensorimotor cerebellum (P = 0.010) that may indicate a pathological rather than compensatory process.
Conclusions: We demonstrate that SCAN is uniquely associated with focal dystonia dysfunction beyond the dystonic effector regions, offering insights into pathophysiology and treatments. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163601</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming</title>
<link>https://hdl.handle.net/1721.1/163600</link>
<description>A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming
Lu, Haihao; Yang, Jinwen
Convex quadratic programming (QP) is an important class of optimization problems with wide applications in practice. Classic QP solvers are based on either the simplex or the barrier method, both of which suffer from scalability issues because their computational bottleneck is solving linear equations. In this paper, we design and analyze a first-order method for QP, called restarted accelerated primal-dual hybrid gradient (rAPDHG), whose computational bottleneck is matrix-vector multiplication. We show that rAPDHG has a linear convergence rate to an optimal solution when solving QP, and the obtained linear rate is optimal among a wide class of primal-dual methods. Furthermore, we connect the linear rate with a sharpness constant of the KKT system of QP, which is a standard quantity to measure the hardness of a continuous optimization problem. Numerical experiments demonstrate that both restarts and acceleration can significantly improve the performance of the algorithm. Lastly, we present PDQP.jl, an open-source solver based on rAPDHG that can be run on both GPU and CPU. With a numerical comparison with SCS and OSQP on standard QP benchmark sets and large-scale synthetic QP instances, we demonstrate the effectiveness of rAPDHG for solving QP.
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163600</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of Iron Oxidation State on Solvent Extraction Scandium Extraction Process from Bauxite Residue and Life Cycle Assessment</title>
<link>https://hdl.handle.net/1721.1/163599</link>
<description>Effect of Iron Oxidation State on Solvent Extraction Scandium Extraction Process from Bauxite Residue and Life Cycle Assessment
Braz, Vitor M. P.; Vaccari, Mentore; Espinosa, Denise C. R.; Tenório, Jorge A. S.; Botelho Junior, Amilton B.
Bauxite residue (also known as red mud) is a promising but challenging secondary source of Sc due to its high Fe content, which reduces extraction efficiency. This study investigated the impact of Fe on Sc recovery by solvent extraction and evaluated the environmental impact of the process. A hydrometallurgical route was chosen for Sc extraction involving leaching with H2SO4 followed by solvent extraction with Cyanex 923 and Alamine 336. Synergistic combination of these extractants was tested to increase selectivity. Results showed that Cyanex 923 extracted nearly 100% of the Sc, but the co-extraction of Fe (25–80%) remained a significant challenge. A combination of Cyanex 923 and Alamine 336 improved Sc selectivity by minimizing Fe extraction at pH 0.5–1.0 (&lt; 20%). Life cycle assessment (LCA) indicated that leaching had the greatest environmental impact due to high energy consumption, while solvent extraction also contributed considerably because of kerosene use for dilution. The highest environmental impact is on ozone depletion in all steps of the process (leaching and solvent extraction). Synergistic use of Cyanex 923 and Alamine 336 is an efficient strategy for Sc extraction with low Fe co-extraction. Further optimizations are needed for the industrial scale, particularly concerning environmental impacts.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163599</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Membrane Application in Hydrometallurgical Processing</title>
<link>https://hdl.handle.net/1721.1/163598</link>
<description>Membrane Application in Hydrometallurgical Processing
Botelho Junior, Amilton B.; Peng, Hong; Kim, Jihye
Critical minerals are crucial for the energy transition and for the successful commercialization of hydropower, wind turbines, and photovoltaic panels. The increasing demand puts pressure on the search for new sources, including new mining sites, tailings, and urban solid wastes. Membrane-based separation is well established for water desalination and wastewater treatment. Recently, the search for new processes to recover critical minerals in aqueous processing has shed light on its potential application. Electrodialysis has proven to be a mature electrochemical separation technique, while supported liquid membranes hold great potential for future developments. Membrane cost represents the main drawback, and for this reason new materials are under development, including syntheses tailored to specific critical minerals such as Li and rare earth elements.
</description>
<pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163598</guid>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Review of The Rhetoricity of Philosophy: Audience in Perelman and Ricoeur After the Badiou-Cassin Debate</title>
<link>https://hdl.handle.net/1721.1/163597</link>
<description>Review of The Rhetoricity of Philosophy: Audience in Perelman and Ricoeur After the Badiou-Cassin Debate
Schiappa, Edward
In this well-written and superbly researched book, Blake D. Scott uses the “debate” between Alain Badiou and Barbara Cassin as a point of departure to revisit the longstanding tension between philosophy and rhetoric. Through substantial exegeses of the work of Chaïm Perelman and Lucie Olbrechts‑Tyteca, as well as selected writings by Paul Ricœur, Scott rejects the conventional view that philosophy and rhetoric are separate disciplines. He argues instead for their asymmetrical interdependence: rhetoric is constitutive of philosophical practice. Central to his thesis is the concept of rhetoricity—the rhetorical dimension inherent in all discourse by virtue of the human “rhetorical capacity,” our ability to reflect on audiences and the potential for persuasion.
</description>
<pubDate>Sat, 06 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163597</guid>
<dc:date>2025-09-06T00:00:00Z</dc:date>
</item>
<item>
<title>Household Portfolios and Retirement Saving over the Life Cycle</title>
<link>https://hdl.handle.net/1721.1/163596</link>
<description>Household Portfolios and Retirement Saving over the Life Cycle
PARKER, JONATHAN A; SCHOAR, ANTOINETTE; COLE, ALLISON; SIMESTER, DUNCAN
Using account-level data on millions of U.S. middle-class investors over 2006 to 2018, we characterize the share of investable wealth that they hold in the stock market over their working lives. Relative to the 1990s, this share has both risen by 10% and become age-dependent. The Pension Protection Act (PPA)—which allowed target date funds (TDFs) to be default options in retirement plans—played an important role: younger (older) workers starting at a firm after TDFs became the default option post-PPA invested more (less) in stocks, in line with the TDF glidepath. In contrast, contribution rates changed little following the PPA.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163596</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues</title>
<link>https://hdl.handle.net/1721.1/163595</link>
<description>How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues
Brynjolfsson, Erik; Horton, John; Makridis, Christos; Mas, Alex; Ozimek, Adam; Rock, Daniel; TuYe, Hong‐Yi
Remote work surged during the COVID-19 pandemic, but estimates vary widely. To address this, we field the Remote Life Survey (RLS), a nationally representative survey. In October 2020, we find that 31.6% of continuously employed workers always worked from home (WFH), and 21.9% did so sometimes or rarely, totaling 53.5%. We compare our results with government surveys and assess four factors contributing to measurement differences: (a) web versus mail-based respondents, (b) inclusion of self-employed workers, (c) occupation mix, and (d) exclusion of pre-pandemic remote workers. We find that (d) explains most of the discrepancy between the Current Population Survey (CPS) and other measures. Policymakers and researchers relying on CPS data should note that it may underestimate remote work prevalence by up to 25 percentage points. Our preferred estimates suggest that about half of the U.S. workforce worked remotely at least one day per week as of December 2020.
</description>
<pubDate>Tue, 28 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163595</guid>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of Internal Variability on Benchmarking Deep Learning Climate Emulators</title>
<link>https://hdl.handle.net/1721.1/163594</link>
<description>The Impact of Internal Variability on Benchmarking Deep Learning Climate Emulators
Lütjens, Björn; Ferrari, Raffaele; Watson‐Parris, Duncan; Selin, Noelle E
Full-complexity Earth system models (ESMs) are computationally very expensive, limiting their use in exploring the climate outcomes of multiple emission pathways. More efficient emulators that approximate ESMs can directly map emissions onto climate outcomes, and benchmarks are being used to evaluate their accuracy on standardized tasks and data sets. We investigate a popular benchmark in data-driven climate emulation, ClimateBench, on which deep learning-based emulators are currently achieving the best performance. We compare these deep learning emulators with a linear regression-based emulator, akin to pattern scaling, and show that it outperforms the incumbent 100M-parameter deep learning foundation model, ClimaX, on 3 out of 4 regionally resolved climate variables, notably surface temperature and precipitation. While emulating surface temperature is expected to be predominantly linear, this result is surprising for emulating precipitation. Precipitation is a much noisier variable, and we show that deep learning emulators can overfit to internal variability noise at low frequencies, degrading their performance in comparison to a linear emulator. We address the issue of overfitting by increasing the number of climate simulations per emission pathway (from 3 to 50) and updating the benchmark targets with the respective ensemble averages from the MPI-ESM1.2-LR model. Using the new targets, we show that linear pattern scaling continues to be more accurate on temperature, but can be outperformed by a deep learning-based technique for emulating precipitation. We publish our code and data at https://github.com/blutjens/climate-emulator.
</description>
<pubDate>Tue, 26 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163594</guid>
<dc:date>2025-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Don’t Just Send in the Chiefs</title>
<link>https://hdl.handle.net/1721.1/163593</link>
<description>Don’t Just Send in the Chiefs
Wright, Randall S.
A few years ago, my wife and I visited the aircraft carrier Midway on a vacation to San Diego. The Midway is one of the largest aircraft carriers ever built. It was to be deployed in World War II, but the war ended before the Midway could be commissioned. It was the largest ship in the US Navy until 1955 and the first aircraft carrier too big to pass through the Panama Canal. The ship was in service for 47 years, including the Vietnam War and Operation Desert Storm. It’s now a floating museum (Wikimedia Foundation 2022).
</description>
<pubDate>Tue, 19 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163593</guid>
<dc:date>2022-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>Updated measurement of CP violation and polarisation in B_s^0 → J/ψ K̄*(892)^0 decays</title>
<link>https://hdl.handle.net/1721.1/163590</link>
<description>Updated measurement of CP violation and polarisation in B_s^0 → J/ψ K̄*(892)^0 decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A time-integrated angular analysis of the decay B_s^0 → J/ψ K̄*(892)^0, with J/ψ → μ+μ− and K̄*(892)^0 → K−π+, is presented. The analysis employs a sample of proton-proton collision data collected by the LHCb experiment during 2015–2018 at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 6 fb−1. A simultaneous maximum-likelihood fit is performed to the angular distributions in bins of the K−π+ mass. This fit yields measurements of the CP-averaged polarisation fractions and CP asymmetries for the P-wave component of the K−π+ system. The longitudinal and parallel polarisation fractions are determined to be f0 = 0.534 ± 0.012 ± 0.009 and f|| = 0.211 ± 0.014 ± 0.005, respectively, where the first uncertainty is statistical and the second is systematic. The CP asymmetries are measured with 3–7% precision and are found to be consistent with zero. These measurements, along with an updated determination of the branching fraction relative to the B0 → J/ψK*0 decay, are combined with previous LHCb results, providing the most precise values for these observables to date.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163590</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of inclusive and differential Higgs boson production cross sections at √s = 13.6 TeV in the H → γγ decay channel</title>
<link>https://hdl.handle.net/1721.1/163589</link>
<description>Measurements of inclusive and differential Higgs boson production cross sections at √s = 13.6 TeV in the H → γγ decay channel
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Damanakis, K.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; The CMS Collaboration
Inclusive and differential cross sections for Higgs boson production in proton-proton collisions at a centre-of-mass energy of 13.6 TeV are measured using data collected with the CMS detector at the LHC in 2022, corresponding to an integrated luminosity of 34.7 fb−1. Events with the diphoton final state are selected, and the measured inclusive fiducial cross section is σfid = 74 ± 11 (stat) +5/−4 (syst) fb, in agreement with the standard model prediction of 67.8 ± 3.8 fb. Differential cross sections are measured as functions of several observables: the Higgs boson transverse momentum and rapidity, the number of associated jets, and the transverse momentum of the leading jet in the event. Within the uncertainties, the differential cross sections agree with the standard model predictions.
</description>
<pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163589</guid>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>On the structure of multiple stable equilibria in competitive ecological systems</title>
<link>https://hdl.handle.net/1721.1/163588</link>
<description>On the structure of multiple stable equilibria in competitive ecological systems
Taylor, Washington; O’Dwyer, James
For some ecological systems with a large pool of possible species, there can be multiple stable equilibria with different species composition. Natural or anthropogenic disruption can induce a shift between different such equilibria. While some work has been done on ecological systems with multiple equilibria, there is no general theory governing the distribution of equilibria or characterizing the basins of attraction of different equilibria. This article addresses these questions in a simple class of Lotka-Volterra models. We focus on competitive systems of species on a niche axis with multiple equilibria. We find that basins of attraction are generally larger for equilibria with greater biomass; in many cases, the basin of attraction size scales roughly exponentially with the net biomass of equilibria. This is illustrated in two ecologically relevant limits. In a continuous limit with species spaced arbitrarily closely on the niche axis, equilibria with different numbers of species provide a new perspective on the notion of limiting similarity. In another limit, akin to a statistical mechanical model, the niche axis becomes infinite while the range of interactions remains fixed; in this limit, we prove the exponential relation between basin size and biomass using the Markov chain central limit theorem.
</description>
<pubDate>Mon, 06 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163588</guid>
<dc:date>2025-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of branching fractions and CP asymmetries in Λ_b^0 (Ξ_b^0) → p K_S^0 h^− decays</title>
<link>https://hdl.handle.net/1721.1/163587</link>
<description>Measurement of branching fractions and CP asymmetries in Λ_b^0 (Ξ_b^0) → p K_S^0 h^− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A study of Λ_b^0 and Ξ_b^0 baryon decays to the final states p K_S^0 π^− and p K_S^0 K^− is performed using pp collision data collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The decays Λ_b^0 → p K_S^0 K^− and Ξ_b^0 → p K_S^0 K^− are observed for the first time, with significances reaching eight standard deviations. The branching fractions and integrated CP asymmetries are measured for the Λ_b^0 → p K_S^0 π^−, Λ_b^0 → p K_S^0 K^−, and Ξ_b^0 → p K_S^0 K^− decays. For the decay Λ_b^0 → p K_S^0 π^−, the CP asymmetries are measured in different regions of the Dalitz plot. No evidence of CP violation is observed.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163587</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>(Not so) universal literacy screening: a survey of educators reveals variability in implementation</title>
<link>https://hdl.handle.net/1721.1/163586</link>
<description>(Not so) universal literacy screening: a survey of educators reveals variability in implementation
Ozernov-Palchik, Ola; Elizee, Zoe; Catania, Fabio; Hacikamiloglu, Meral; Shattuck-Hufnagel, Stefanie; Petscher, Yaacov; Ghosh, Satrajit; Gabrieli, John D. E.
Currently, most states in the United States have enacted legislation mandating universal screening for literacy risk in kindergarten through 3rd grade. However, the degree to which these policies translate into consistent, high-quality screening practices remains unclear. In this survey study, we collected responses from a diverse sample of K–3 educators (N = 251) across 39 states, representing varied school types, professional roles, and experience levels, to examine the real-world implementation of universal screening. Guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework, we analyzed quantitative and qualitative data to identify real-world factors that could impede the fidelity and effectiveness of screening implementation. We found substantial variability across multiple dimensions of literacy screening implementation. Educators described considerable variation in screener selection, administration practices, testing environments, training quality, scoring accuracy, and the use of results to guide intervention. Notably, many indicated insufficient training and professional development, expressing uncertainty about administering and interpreting screeners, particularly for English language learners. Nearly half also reported the absence of systematic procedures for developing intervention plans, suggesting that many students identified as at risk do not receive appropriate follow-up support. These implementation challenges occurred despite widespread recognition among educators of screening’s importance for early literacy intervention. Educators from lower-socioeconomic status schools reported significantly greater time burdens in conducting screenings and more technology-related challenges compared to their higher-SES counterparts. 
Without systematic improvements to implementation support and training, current screening initiatives may fail to achieve their intended goal of early identification and intervention for struggling readers.
</description>
<pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163586</guid>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Compton imager setup</title>
<link>https://hdl.handle.net/1721.1/163582</link>
<description>Development of a Compton imager setup
Arya, Anuraag; Bilkhu, Harmanjeet S.; Vishwakarma, Sandeep; Belatikar, Hrishikesh; Bhalerao, Varun; Ghodgaonkar, Abhijeet; Koyande, Jayprakash G.; Marathe, Aditi; Mithun, N. P. S.; Narang, Sanjoli; Nimbalkar, Sudhanshu; Page, Pranav; Palit, Sourav; Patel, Arpit; Shetye, Amit; Tallur, Siddharth
Hard X-ray photons with energies in the range of hundreds of keV typically undergo Compton scattering when they are incident on a detector. In this process, an incident photon deposits a fraction of its energy at the point of incidence and continues onwards with a change in direction that depends on the amount of energy deposited. By using a pair of detectors to detect the point of incidence and the direction of the scattered photon, we can calculate the scattering direction and angle. The position of a source in the sky can be reconstructed using many Compton photon pairs from a source. We demonstrate this principle in the laboratory by using a pair of Cadmium Zinc Telluride (CZT) detectors sensitive in the energy range of 20–200 keV, similar to those used in AstroSat/CZT Imager (CZTI). The laboratory setup consists of two detectors placed perpendicular to each other in a lead-lined box. The detectors are read out by a custom-programmed Xilinx PYNQ-Z2 FPGA board, and data are then transferred to a personal computer (PC). There are two key updates from CZTI: the detectors are read concurrently rather than serially, and the time resolution has been improved from 20 to 7.5 μs. We irradiated the detectors with a collimated 133Ba source and identified Compton scattering events for the 356 keV line. We ran a Compton reconstruction algorithm to correctly infer the location of the source in the detector frame, with a location-dependent angular response measure of 16°–30°. This comprises a successful technology demonstration for a Compton imaging camera in the hard X-ray regime. We present the details of our setup, the data acquisition process, and software algorithms, and showcase our results. We also quantify the limitations of this setup and discuss ways of improving the performance in future experiments.
</description>
<pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163582</guid>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems</title>
<link>https://hdl.handle.net/1721.1/163525</link>
<description>The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems
Sanneman, Lindsay; Shah, Julie A
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs, human workload, and human trust in autonomous systems. Drawing from the human factors literature, we propose the Situation Awareness Framework for Explainable AI (SAFE-AI), a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also suggest a method for assessing the effectiveness of XAI systems. We further detail human workload considerations for determining the content and frequency of explanations, as well as metrics that can be used to assess human workload. Finally, we discuss the importance of appropriately calibrating user trust in AI systems through explanations, along with other trust-related considerations for XAI, and we detail metrics that can be used to evaluate user trust in these systems.
</description>
<pubDate>Wed, 22 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163525</guid>
<dc:date>2022-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Remove hydrogen and store it too: an acid-in-clay based electro-chemical solution</title>
<link>https://hdl.handle.net/1721.1/163524</link>
<description>Remove hydrogen and store it too: an acid-in-clay based electro-chemical solution
Kim, Kyung-Shik; Park, Jin-Sung; Yoon, Young-Chul; Kim, Jinwoo; Li, Ju; Yildiz, Bilge; Tasan, Cemal Cem
Extracting hydrogen from metallic components can open up a new pathway for preventing hydrogen embrittlement. To this end, we propose an electrochemically driven, all-solid method for hydrogen control, capable of both extracting and storing hydrogen simultaneously. In this approach, we employ acid-in-clay as a proton conducting electrolyte at room temperature. Through this electrochemical treatment, hydrogen is efficiently extracted from pre-charged steels, thereby restoring their tensile properties and preventing embrittlement. Moreover, it has been confirmed that the extracted hydrogen can be efficiently collected at the counter electrode, demonstrating the significant advantages of the process.
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163524</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Contrasting interchain order and mixed ionic–electronic conduction in conjugated polymers: an isoindigo case study</title>
<link>https://hdl.handle.net/1721.1/163523</link>
<description>Contrasting interchain order and mixed ionic–electronic conduction in conjugated polymers: an isoindigo case study
Meacham, Rebecca F; Roh, Heejung; Cunin, Camille E; Lee, Eric R; Li, Wenhao; Zhao, Yan; Samal, Sanket; Gumyusenge, Aristide
In mixed ionic–electronic conductive polymers, electronic conduction is optimal in tightly packed flat chains, while ionic conduction benefits from free volume accommodating large ions. To this end, polymers with high crystallinity are often excluded from structure–property studies of high-performing mixed conductors due to their unbalanced transport, which favors electronic charges over ionic ones. Herein, we investigated how mixed conduction can be achieved in ordered conjugated polymers by systematically combining interchain order with side chain engineering. We synthesized a series of isoindigo (IID)-based copolymers with varying amounts of aliphatic and hydrophilic side chains and examined the impact of interchain order on mixed conduction. Through crystallographic, spectro-electrochemical, and molecular dynamics studies, we demonstrated that systematically introducing hydrophilic side chains reduces the bulk order and long-range aggregation by increasing chain flexibility while preserving the interchain stacking distances within crystalline domains. Testing these IID polymers in transistor devices revealed that ion insertion and device transconductance strongly depend on the amount of hydrophilic side chains. We demonstrated that glycol side chains can enhance mixed conduction while maintaining interchain order. Our findings suggest that the IID system is promising for designing polymers that can accommodate ionic species without compromising the chain ordering required for electronic conduction.
</description>
<pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163523</guid>
<dc:date>2024-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular engineering of a cryptic epitope in Spike RBD improves manufacturability and neutralizing breadth against SARS-CoV-2 variants</title>
<link>https://hdl.handle.net/1721.1/163514</link>
<description>Molecular engineering of a cryptic epitope in Spike RBD improves manufacturability and neutralizing breadth against SARS-CoV-2 variants
Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Wong, Ting Y; Johnston, Ryan S; Naranjo, Christopher A; Bajoria, Sakshi; Kumru, Ozan S; Kaur, Kawaljit; Russ, Brynnan P; Lee, Katherine S; Cyphert, Holly A; Barbier, Mariette; Rao, Harish D; Rajurkar, Meghraj P; Lothe, Rakesh R; Shaligram, Umesh S; Batwal, Saurabh; Chandrasekaran, Rahul; Nagar, Gaurav; Kleanthous, Harry; Biswas, Sumi; Bevere, Justin R; Joshi, Sangeeta B; Volkin, David B; Damron, F Heath; Love, J Christopher
There is a continued need for sarbecovirus vaccines that can be manufactured and distributed in low- and middle-income countries (LMICs). Subunit protein vaccines are manufactured at large scales at low costs, have less stringent temperature requirements for distribution in LMICs, and several candidates have shown protection against SARS-CoV-2. We previously reported an engineered variant of the SARS-CoV-2 Spike protein receptor binding domain antigen (RBD-L452K-F490W; RBD-J) with enhanced manufacturability and immunogenicity compared to the ancestral RBD. Here, we report a second-generation engineered RBD antigen (RBD-J6) with two additional mutations to a hydrophobic cryptic epitope in the RBD core, S383D and L518D, that further improved expression titers and biophysical stability. RBD-J6 retained binding affinity to human convalescent sera and to all tested neutralizing antibodies except antibodies that target the class IV epitope on the RBD core. K18-hACE2 transgenic mice immunized with three doses of a Beta variant of RBD-J6 displayed on a virus-like particle (VLP) generated neutralizing antibodies (nAb) to nine SARS-CoV-2 variants of concern at similar levels as two doses of Comirnaty. The vaccinated mice were also protected from challenge with Alpha or Beta SARS-CoV-2. This engineered antigen could be useful for modular RBD-based subunit vaccines to enhance manufacturability and global access, or for further development of variant-specific or broadly acting booster vaccines.
</description>
<pubDate>Fri, 27 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163514</guid>
<dc:date>2023-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Immunotherapy-induced neutralizing antibodies disrupt allergen binding and sustain allergen tolerance in peanut allergy</title>
<link>https://hdl.handle.net/1721.1/163513</link>
<description>Immunotherapy-induced neutralizing antibodies disrupt allergen binding and sustain allergen tolerance in peanut allergy
LaHood, Nicole A; Min, Jungki; Keswani, Tarun; Richardson, Crystal M; Amoako, Kwasi; Zhou, Jingjia; Marini-Rapoport, Orlee; Bernard, Hervé; Hazebrouck, Stéphane; Shreffler, Wayne G; Love, J Christopher; Pomes, Anna; Pedersen, Lars C; Mueller, Geoffrey A; Patil, Sarita U
In IgE-mediated food allergies, exposure to the allergen activates systemic allergic responses. Oral immunotherapy (OIT) treats food allergies through incremental increases in oral allergen exposure. However, OIT only induces sustained clinical tolerance and decreased basophil sensitivity in a subset of individuals despite increases in circulating allergen-specific IgG in all treated individuals. Therefore, we examined the allergen-specific antibodies from 2 OIT cohorts of patients with sustained and transient responses. Here, we compared antibodies from individuals with sustained or transient responses and discovered specific tolerance-associated conformational epitopes of the immunodominant allergen Ara h 2 recognized by neutralizing antibodies. First, we identified what we believe to be previously unknown conformational, intrahelical epitopes using x-ray crystallography with recombinant antibodies. We then identified epitopes only recognized in sustained tolerance. Finally, antibodies recognizing tolerance-associated epitopes effectively neutralized allergen to suppress IgE-mediated effector cell activation. Our results demonstrate the molecular basis of antibody-mediated protection in IgE-mediated food allergy, by defining how these antibodies disrupt IgE-allergen interactions to prevent allergic reactions. Our approach to studying the structural and functional basis for neutralizing antibodies demonstrates the clinical relevance of specific antibody clones in antibody-mediated tolerance. We anticipate that our findings will form the foundation for treatments of peanut allergy using neutralizing antibodies and hypoallergens.
</description>
<pubDate>Tue, 17 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163513</guid>
<dc:date>2023-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>Tissue-specific abundance of interferon-gamma drives regulatory T cells to restrain DC1-mediated priming of cytotoxic T cells against lung cancer</title>
<link>https://hdl.handle.net/1721.1/163512</link>
<description>Tissue-specific abundance of interferon-gamma drives regulatory T cells to restrain DC1-mediated priming of cytotoxic T cells against lung cancer
Zagorulya, Maria; Yim, Leon; Morgan, Duncan M; Edwards, Austin; Torres-Mejia, Elen; Momin, Noor; McCreery, Chloe V; Zamora, Izabella L; Horton, Brendan L; Fox, James G; Wittrup, K Dane; Love, J Christopher; Spranger, Stefani
Local environmental factors influence CD8+ T cell priming in lymph nodes (LNs). Here, we sought to understand how factors unique to the tumor-draining mediastinal LN (mLN) impact CD8+ T cell responses toward lung cancer. Type 1 conventional dendritic cells (DC1s) showed a mLN-specific failure to induce robust cytotoxic T cell responses. Using regulatory T (Treg) cell depletion strategies, we found that Treg cells suppressed DC1s in a spatially coordinated manner within tissue-specific microniches in the mLN. Treg cell suppression required MHC II-dependent contact between DC1s and Treg cells. Elevated levels of IFN-γ drove differentiation of Treg cells into Th1-like effector Treg cells in the mLN. In patients with cancer, Treg cell Th1 polarization, but not CD8+/Treg cell ratios, correlated with poor responses to checkpoint blockade immunotherapy. Thus, IFN-γ in the mLN skews Treg cells to be Th1-like effector Treg cells, driving their close interaction with DC1s and subsequent suppression of cytotoxic T cell responses.
</description>
<pubDate>Tue, 14 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163512</guid>
<dc:date>2023-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Minimal purification method enables developability assessment of recombinant proteins</title>
<link>https://hdl.handle.net/1721.1/163511</link>
<description>Minimal purification method enables developability assessment of recombinant proteins
Rodriguez‐Aponte, Sergio A; Naranjo, Christopher A; Johnston, Ryan S; Dalvie, Neil C; Crowell, Laura E; Bajoria, Sakshi; Kumru, Ozan S; Joshi, Sangeeta B; Volkin, David B; Love, J Christopher
Analytical characterization of proteins is a critical task for developing therapeutics and subunit vaccine candidates. Assessing candidates with a battery of biophysical assays can inform the selection of one that exhibits properties consistent with a given target product profile (TPP). Such assessments, however, require several milligrams of purified protein, and ideal assessments of the physicochemical attributes of the proteins should not include unnatural modifications like peptide tags for purification. Here, we describe a fast two‐stage minimal purification process for recombinant proteins secreted by the yeast host &lt;jats:italic&gt;Komagataella phaffii&lt;/jats:italic&gt; from a 20 mL culture supernatant. This method comprises a buffer exchange and filtration with a Q‐membrane filter, and we demonstrate sufficient removal of key supernatant impurities including host‐cell proteins (HCPs) and DNA with yields of 1–2 mg and &amp;gt;60% purity. This degree of purity enables characterizing the resulting proteins using affinity binding, mass spectrometry, and differential scanning calorimetry. We first evaluated this method to purify an engineered SARS‐CoV‐2 subunit protein antigen and compared the purified protein to a conventional two‐step chromatographic process. We then applied this method to compare several SARS‐CoV‐2 RBD sequences. Finally, we show this simple process can be applied to a range of other proteins, including a single‐domain antibody, a rotavirus protein subunit, and a human growth hormone. This simple and fast developability methodology obviates the need for genetic tagging or full chromatographic development when assessing and comparing early‐stage protein therapeutics and vaccine candidates produced in &lt;jats:italic&gt;K. phaffii&lt;/jats:italic&gt;.
</description>
<pubDate>Fri, 17 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163511</guid>
<dc:date>2023-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Interstellar Mapping And Acceleration Probe: The NASA IMAP Mission</title>
<link>https://hdl.handle.net/1721.1/163510</link>
<description>Interstellar Mapping And Acceleration Probe: The NASA IMAP Mission
McComas, D. J.; Christian, E. R.; Schwadron, N. A.; Gkioulidou, M.; Allegrini, F.; Baker, D. N.; Bzowski, M.; Clark, G.; Cohen, C. M. S.; Cohen, I.
NASA’s Interstellar Mapping and Acceleration Probe (IMAP) mission provides extensive and well-coordinated new observations of the inner and outer heliosphere and scientific closure on two of the most important topics in Heliophysics: 1) the acceleration of charged particles and 2) the interaction of the solar wind with the local interstellar medium. These topics are intimately coupled because particles accelerated in the inner heliosphere propagate outward through the solar wind and mediate its interaction with the very local interstellar medium (VLISM). The IMAP mission is designed to address these topics, provide extensive new real-time measurements critical to Space Weather observations and predictions, and much more. IMAP’s ten instruments are mounted on a simple, spinning spacecraft that orbits about the first Sun-Earth Lagrange point, L1, and repoints its Sun-facing solar arrays and spin axis toward the Sun each day. The instruments provide complete and synergistic observations that examine particle energization processes at 1 au while simultaneously probing the global heliospheric interaction with the VLISM. The 1 au in-situ observations include solar wind electrons and ions from solar wind through suprathermal energies, pickup and energetic ions, as well as the interplanetary magnetic field. IMAP provides Energetic Neutral Atom (ENA) global imaging of the outer heliosphere via ENAs from tens of eV up through hundreds of keV, as well as observations of interstellar neutral atoms traversing the heliosphere. IMAP also directly measures interstellar dust that enters the heliosphere and the solar-wind-modulated ultraviolet glow. This paper provides the mission overview for the full IMAP mission, acts as a roadmap to the other papers in this IMAP collection and provides the citable reference for the overall IMAP mission going forward.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163510</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>The Passive Regolith Sampler: From Concept to Delivery to the Lunar Surface</title>
<link>https://hdl.handle.net/1721.1/163509</link>
<description>The Passive Regolith Sampler: From Concept to Delivery to the Lunar Surface
Stober, Keith J.; Dorrington, Scott; Rupasinghe, Dinuri; Mao, Claire; Romero, Elizabeth; Moswane, Rethabile; Zhang, Jackson; Mahfouth AlShehhi, Abdulla; Els, Sebastian G.; Wood, Danielle
This paper outlines the development and testing of two light-weight, low-cost, passive sensors developed by the MIT Space Enabled Research Group that were delivered to the moon in 2023 onboard the Rashid-1 rover as part of the Emirates Lunar Mission. The Passive Regolith Sampler (PRS) is a simple device mounted to the wheels of the rover, containing an aluminum tray with a cover plate of perforated holes of varying size and spacing. The device uses the motion of the rover wheel to press the device into the lunar surface, capturing small samples of lunar regolith in the holes. The Passive Wax Thermometer (PWT) is a collection of 10 wax samples, contained in individual capsules covered with sapphire windows. Each wax sample is an alkane with a different melting temperature determined by its chemical formula. Each wax sample undergoes temperature-dependent changes in opacity, providing a method for inferring temperature via image analysis. In preparation for lunar surface operations, the Space Enabled team performed a series of laboratory experiments and analytical analyses aiming to replicate conditions expected to be encountered during the mission. These experiments and analyses explored the physical mechanisms of the rover/regolith interaction, the lighting and thermal conditions at the landing site, and the quality of images captured from the rover mast camera. This paper outlines the results of these experiments and analyses, and their influence on the design and operations planning for the two payloads. Due to landing anomalies, the 2023 mission did not complete lunar surface operations; further work is planned to explore future operational opportunities.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163509</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Von Neumann-Morgenstern stability and internal closedness in matching theory</title>
<link>https://hdl.handle.net/1721.1/163508</link>
<description>Von Neumann-Morgenstern stability and internal closedness in matching theory
Faenza, Yuri; Stein, Cliff; Wan, Jia
Gale and Shapley’s stability criterion enjoys a rich mathematical structure, which propelled its application in various settings. Although immensely popular, the approach by Gale and Shapley cannot encompass all the different features that arise in applications, motivating the search for alternative solution concepts. We investigate alternatives that rely on the concept of internal stability, a notion introduced for abstract games by von Neumann and Morgenstern and motivated by the need of finding a set of mutually compatible solutions. The set of stable matchings is internally stable. However, the class of internally stable sets is much richer, for an internally stable set of matchings may also include unstable matchings and/or exclude stable ones. In this paper, we focus on two families of internally stable sets of matchings: von Neumann-Morgenstern stable and internally closed. We study algorithmic questions around those concepts in both the marriage and the roommate models. One of our results implies that, in the marriage model, internally closed sets are an alternative to stable matchings that is as tractable as stable matchings themselves, a fairly rare occurrence in the area. Both our positive and negative results rely on new structural insights and extensions of classical algebraic structures associated with sets of matchings, which we believe to be of independent interest.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163508</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Order-forcing in Neural Codes</title>
<link>https://hdl.handle.net/1721.1/163507</link>
<description>Order-forcing in Neural Codes
Jeffs, R. A.; Lienkaemper, Caitlin; Youngs, Nora
Convex neural codes are subsets of the Boolean lattice that record the intersection patterns of convex sets in Euclidean space. Much work in recent years has focused on finding combinatorial criteria on codes that can be used to classify whether or not a code is convex. In this paper we introduce order-forcing, a combinatorial tool which recognizes when certain regions in a realization of a code must appear along a line segment between other regions. We use order-forcing to construct novel examples of non-convex codes, and to expand existing families of examples. We also construct a family of codes which shows that a dimension bound of Cruz, Giusti, Itskov, and Kronholm (referred to as monotonicity of open convexity) is tight in all dimensions.
</description>
<pubDate>Tue, 28 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163507</guid>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Co-evolution of alpha-helical transmembrane protein residues: large-scale variant profiling and complete mutational landscape of 2277 known PDB entries representing 504 unique human protein sequences</title>
<link>https://hdl.handle.net/1721.1/163506</link>
<description>Co-evolution of alpha-helical transmembrane protein residues: large-scale variant profiling and complete mutational landscape of 2277 known PDB entries representing 504 unique human protein sequences
Karagöl, Taner; Karagöl, Alper; Zhang, Shuguang
Membrane proteins play fundamental roles in cellular function, yet the evolutionary dynamics of their amino acid composition remain poorly understood. Our current study investigates the substitutional landscape and evolutionary patterns of hydrophilic and hydrophobic residues in membrane α-helical proteins, addressing a significant gap in our knowledge of protein evolution. We analyzed 2277 high-resolution protein structures from the RCSB Protein Data Bank corresponding to 458 unique PDB structures, 504 UniProt transmembrane entries and their AlphaMissense predicted mutational libraries including more than 5.8 million amino acid substitutions, focusing on known transmembrane α-helical proteins in Homo sapiens. Our analysis showed that the pathological outcome of the substitutions is diverse, as nonpolar to polar changes showed higher pathological scores in general. Notably, F &lt;=&gt; Y substitutions showed significantly lower pathological scores. Our further analysis revealed a significant asymmetry in the evolutionary frequencies of polar and nonpolar amino acids. We identified key residue pairs driving this asymmetry, with F &lt;=&gt; Y, A &lt;=&gt; T, V &lt;=&gt; T and A &lt;=&gt; S co-evolution diverging from the expected negative correlations (Spearman’s rho &gt; 0.20, p &lt; 0.001). The V &lt;=&gt; T substitution via an alanine intermediate and the G &lt;=&gt; N substitution via a serine intermediate lower their statistical barrier, which would otherwise require two sequential base changes. We propose two evolutionary game theory (EGT) based models to explain their diversification, with partial correlation analysis on residue frequencies in homolog sequences. These mathematical insights suggest a previously unrecognized evolutionary pressure, potentially linked to functional diversification, which could be targeted to combat drug resistance. 
Our results offer insights into membrane protein evolution and may inform improved methods for protein structure prediction and design.
</description>
<pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163506</guid>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Perforation of the host cell plasma membrane during Toxoplasma invasion requires rhoptry exocytosis</title>
<link>https://hdl.handle.net/1721.1/163505</link>
<description>Perforation of the host cell plasma membrane during Toxoplasma invasion requires rhoptry exocytosis
Male, Frances; Kegawa, Yuto; Blank, Paul S.; Jiménez-Munguía, Irene; Sidik, Saima M.; Valleau, Dylan; Lourido, Sebastian; Lebrun, Maryse; Zimmerberg, Joshua; Ward, Gary E.
Toxoplasma gondii is an obligate intracellular parasite. Proteins released during host cell invasion from apical secretory organelles known as rhoptries are delivered into the host cell cytosol to perform functions critical for parasite survival and virulence. How these effector proteins move across the host cell plasma membrane is unknown but may involve a previously noted temporary loss of host cell plasma membrane barrier integrity. Here, we use high-speed, multi-wavelength fluorescence imaging to spatially monitor the barrier integrity of the host cell plasma membrane, in real time, during invasion. The data reveal that early in invasion the parasite creates a transient perforation in the host cell membrane. The perforation occurs at the point on the host membrane in contact with the parasite’s apical end. Parasites depleted of any of five proteins known to be required for rhoptry exocytosis are unable to perforate the host cell membrane. These data suggest a model in which perforating agents stored within rhoptries are released onto the host cell at the initiation of invasion to create a conduit for the delivery of rhoptry effector proteins.
</description>
<pubDate>Fri, 19 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163505</guid>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</item>
<item>
<title>Unknottedness of free boundary minimal surfaces and self-shrinkers</title>
<link>https://hdl.handle.net/1721.1/163504</link>
<description>Unknottedness of free boundary minimal surfaces and self-shrinkers
Chu, Sabine; Franz, Giada
We study unknottedness for free boundary minimal surfaces in a three-dimensional Riemannian manifold with nonnegative Ricci curvature and strictly convex boundary, and for self-shrinkers in the three-dimensional Euclidean space. To do so, we introduce the concepts of boundary graph for free boundary minimal surfaces and of graph at infinity for self-shrinkers. We prove that these surfaces are unknotted in the sense that any two such surfaces with isomorphic boundary graph or graph at infinity are smoothly isotopic.
</description>
<pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163504</guid>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>The wavefront set: bounds for the Langlands parameter</title>
<link>https://hdl.handle.net/1721.1/163503</link>
<description>The wavefront set: bounds for the Langlands parameter
Ciubotaru, Dan; Kim, Ju-Lee
For an irreducible smooth representation of a connected reductive p-adic group, two important associated invariants are the wavefront set and the (partly conjectural) Langlands parameter. While a wavefront set consists of p-adic nilpotent orbits, one constituent of the Langlands parameter is a complex nilpotent orbit in the dual Lie algebra. For unipotent representations in the sense of Lusztig, the corresponding nilpotent orbits on the two sides are related via the Lusztig–Spaltenstein duality (Ciubotaru et al. in Am J Math arXiv:2112.14354v4, J Reine Angew Math (Crelles J) 823:191–253, 2025). In this paper, we formulate a general upper-bound conjecture and several variants relating the nilpotent orbits that appear in the wavefront set and in the Langlands parameter. We also verify these expectations in some cases, including the depth-zero supercuspidal representations of classical groups and all the irreducible representations of G2.
</description>
<pubDate>Tue, 09 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163503</guid>
<dc:date>2025-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>An open-source and low-cost dual-extruder 3D printer for macroscale biotic materials</title>
<link>https://hdl.handle.net/1721.1/163502</link>
<description>An open-source and low-cost dual-extruder 3D printer for macroscale biotic materials
de Alva, Jesse P.; Buehler, Markus
This work presents the design and fabrication of a novel, dual-extruder biotic 3D printer, tailored for precise deposition of natural biomaterials such as pectin, chitosan, and cellulose. Moving beyond the limitations of traditional thermoplastic extrusion, which relies on non-renewable plastics and produces significant waste, this printer utilizes a syringe-based mechanical extruder to deposit viscous biotic material hydrogels. The integration of a dual-extruder system enables the creation of multi-material prints, offering new possibilities for sustainable and biotic manufacturing. Designed with accessibility and versatility in mind, the system features user-friendly operation suitable for non-experts with open-source hardware and software. By providing a robust, customizable, and open-source platform, this work aims to empower researchers, educators, and innovators to advance biomaterials research and expand the reach of sustainable additive manufacturing. The printer fosters a collaborative community and lays the groundwork for further exploration of biological designs and materials.
</description>
<pubDate>Wed, 22 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163502</guid>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of the doubly-charmed-baryon decay Ξ_cc^++ → Ξ_c^0 π^+ π^+</title>
<link>https://hdl.handle.net/1721.1/163501</link>
<description>Observation of the doubly-charmed-baryon decay Ξ_cc^++ → Ξ_c^0 π^+ π^+
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A search for the doubly-charmed-baryon decay Ξ_cc^++ → Ξ_c^0 π^+ π^+ is performed using proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 5.4 fb−1. A significant structure consistent with the Ξ_cc^++ baryon is observed in the Ξ_c^0 π^+ π^+ invariant-mass spectrum. Using the Ξ_cc^++ → Λ_c^+ K^− π^+ π^+ decay as the normalisation channel, the branching fraction ratio B(Ξ_cc^++ → Ξ_c^0 π^+ π^+) / B(Ξ_cc^++ → Λ_c^+ K^− π^+ π^+) is measured to be 1.37 ± 0.18 (stat) ± 0.09 (syst) ± 0.35 (ext). This measurement provides critical input for testing QCD factorisation methods in the weak decays of doubly-heavy baryons, particularly in quantifying nonperturbative effects such as final-state interactions and resonance contributions to the hadronisation process.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163501</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental evidence for nodal superconducting gap in moiré graphene</title>
<link>https://hdl.handle.net/1721.1/163500</link>
<description>Experimental evidence for nodal superconducting gap in moiré graphene
Park, Jeong Min; Sun, Shuwen; Watanabe, Kenji; Taniguchi, Takashi; Jarillo-Herrero, Pablo
Understanding the nature of superconductivity in magic-angle graphene remains challenging. A key difficulty lies in discerning the different energy scales in this strongly interacting system, particularly the superconducting gap. Here, we report simultaneous tunneling spectroscopy and transport measurements of magic-angle twisted trilayer graphene. This approach allows us to identify two coexisting V-shaped tunneling gaps with different energy scales: a distinct low-energy superconducting gap that vanishes at the superconducting critical temperature and magnetic field, and a higher-energy pseudogap. The superconducting tunneling spectra display a linear gap-filling behavior with temperature and magnetic field and exhibit the Volovik effect, consistent with a nodal order parameter. Our work suggests an unconventional nature of the superconducting gap and establishes an experimental framework for multidimensional investigation of tunable quantum materials.
</description>
<pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163500</guid>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating matrix effects in oil and gas wastewater analysis: LC-MS/MS method for ethanolamines</title>
<link>https://hdl.handle.net/1721.1/163499</link>
<description>Mitigating matrix effects in oil and gas wastewater analysis: LC-MS/MS method for ethanolamines
de Vera, Glen Andrew D; Caldiero, Loredana; Conte, Giovanni; Plata, Desirée L
The high salinity and organic content in oil and gas wastewaters can cause ion suppression during liquid chromatography mass spectrometry (LC/MS) analysis, diminishing the sensitivity and accuracy of measurements in available methods. This suppression is severe for low molecular weight organic compounds such as ethanolamines (e.g., monoethanolamine (MEA), diethanolamine (DEA), triethanolamine (TEA), N-methyldiethanolamine (MDEA), and N,N-ethyldiethanolamine (EDEA)). Here, we deployed solid phase extraction (SPE), mixed-mode LC, triple quadrupole MS with positive electrospray ionization (ESI), and a suite of stable isotope standards (i.e., one per target compound) to correct for ion suppression by salts and organic matter, SPE losses, and instrument variability. The method was evaluated in produced water samples from Italy (NaCl salinity from 8110–18 100 mg L−1; diesel range organic compounds ranging from 5.1–7.9 mg L−1). After correcting for matrix effects, ethanolamines in produced water samples were quantified. The first batch of samples (March 2019) had 37–646 μg L−1 total ethanolamines. The second batch of samples (September 2019) had greater ethanolamine content of 77–3976 μg L−1 which was attributed to a reduced water cut during oil production, enhancing the proportionate abundance of these compounds in the aqueous phase. In all samples, DEA and MEA were the dominant ethanolamine species. Possible sources (e.g., corrosion inhibitor and biotransformation) and natural attenuation potential during storage (e.g., at different temperatures, acidification, and addition of sodium azide) were investigated. The developed analytical method enables further investigation of the fate of low molecular weight organic additives in oil and gas development and provides an enhanced ability to evaluate risks associated with chemical release to the environment.
</description>
<pubDate>Thu, 26 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163499</guid>
<dc:date>2024-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Sensitivity analysis of aromatic chemistry to gas-phase kinetics in a dark molecular cloud model</title>
<link>https://hdl.handle.net/1721.1/163498</link>
<description>Sensitivity analysis of aromatic chemistry to gas-phase kinetics in a dark molecular cloud model
Byrne, Alex N; Xue, Ci; Van Voorhis, Troy; McGuire, Brett A
The increasingly large number of complex organic molecules detected in the interstellar medium necessitates robust kinetic models that can be relied upon for investigating the involved chemical processes. Such models require rate coefficients for each of the thousands of reactions; the values of these are often estimated or extrapolated, leading to large uncertainties that are rarely quantified. We have performed a global Monte Carlo and a more local one-at-a-time sensitivity analysis on the gas-phase rate coefficients in a 3-phase dark cloud model. Time-dependent sensitivities have been calculated using four metrics to determine key reactions for the overall network as well as for the cyanonaphthalene molecule in particular, an important interstellar species that is severely under-produced by current models. All four metrics find that reactions involving small, reactive species that initiate hydrocarbon growth have large effects on the overall network. Cyanonaphthalene is most sensitive to a number of these reactions as well as ring-formation of the phenyl cation (C6H5+) and aromatic growth from benzene to naphthalene. Future efforts should prioritize constraining rate coefficients of key reactions and expanding the network surrounding these processes. These results highlight the strength of sensitivity analysis techniques to identify critical processes in complex chemical networks, such as those often used in astrochemical modeling.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163498</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Automated electrochemical oxygen sensing using a 3D-printed microfluidic lab-on-a-chip system</title>
<link>https://hdl.handle.net/1721.1/163497</link>
<description>Automated electrochemical oxygen sensing using a 3D-printed microfluidic lab-on-a-chip system
Kaufman, Daniel; Winkler, Steffen; Heuer, Christopher; Shibli, Ahed; Snezhko, Alexander; Livshits, Gideon I; Bahnemann, Janina; Ben-Yoav, Hadar
Dissolved oxygen is crucial for metabolism, growth, and other complex physiological and pathological processes; however, standard physiological models (such as organ-on-chip systems) often use ambient oxygen levels, which do not reflect the lower levels that are typically found in vivo. Additionally, the local generation of reactive oxygen species (ROS; a key factor in physiological systems) is often overlooked in biology-mimicking models. Here, we present a microfluidic system that integrates electrochemical dissolved oxygen sensors with lab-on-a-chip technology to monitor the physiological oxygen concentrations and generate hydrogen peroxide (H2O2; a specific ROS). This microfluidic lab-on-a-chip system was fabricated using high-resolution 3D printing technology in a one-step process. It incorporates a micromixer, an on-chip bubble-trap, an electrochemical cell with fabricated gold or platinum black-coated working electrodes as well as an Ag/AgCl reference electrode, and a commercial optical oxygen sensor for validation. This device enables an automated variation of the oxygen levels as well as sensitive electrochemical oxygen monitoring (limit of detection = 11.9 ± 0.3 μM), with a statistically significant correlation with the optical sensor. The proposed system can serve as a tool to characterize and evaluate custom-made electrodes. Indeed, we envision that in the future it will be used to regulate dissolved oxygen levels and oxygen species in real time in organ-on-chip systems.
</description>
<pubDate>Sat, 28 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163497</guid>
<dc:date>2024-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>A critical review on Li-ion transport, chemistry and structure of ceramic–polymer composite electrolytes for solid state batteries</title>
<link>https://hdl.handle.net/1721.1/163496</link>
<description>A critical review on Li-ion transport, chemistry and structure of ceramic–polymer composite electrolytes for solid state batteries
Sand, Sara Catherine; Rupp, Jennifer LM; Yildiz, Bilge
In the transition to safer, more energy-dense solid state batteries, polymer–ceramic composite electrolytes may offer a potential route to achieve simultaneously high Li-ion conductivity and enhanced mechanical stability. Despite numerous studies on polymer–ceramic composite electrolytes, disagreements persist on whether the polymer or the ceramic is positively impacted in their constituent ionic conductivity for such composite electrolytes, and even whether the interface is a blocking layer or a highly conductive lithium ion path. This lack of understanding limits the design of effective composite solid electrolytes. By thorough and critical analysis of the data collected in the field over the last three decades, we present arguments for lithium conduction through the bulk of the polymer, ceramic, or their interface. From this analysis, we can conclude that the unexpectedly high conductivity reported for some ceramic–polymer composites cannot be accounted for by the ceramic phase alone. There is evidence to support the theory that the Li-ion conductivity in the polymer phase increases along this interface in contact with the ceramic. The potential mechanisms for this include increased free volume, decreased crystallinity, and modulated Lewis acid–base effects in the polymer, with the former two being the more likely mechanisms. Future work in this field requires understanding these factors more quantitatively, and tuning of the ceramic surface chemistry and morphology in order to obtain targeted structural modifications in the polymer phase.
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163496</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Na vs. Li metal anodes for batteries: unraveling thermodynamic and electronic origins of voids and developing descriptors for artificial surface coatings</title>
<link>https://hdl.handle.net/1721.1/163495</link>
<description>Na vs. Li metal anodes for batteries: unraveling thermodynamic and electronic origins of voids and developing descriptors for artificial surface coatings
Venturi, Victor; Freitas, Rodrigo; Abate, Iwnetim Iwnetu
Techno-economic, humanitarian, and safety concerns limit the possible uses of conventional lithium-ion and lithium-metal batteries. Sodium-based batteries constitute a promising alternative to address these issues; however, due to the similarities between the two alkali metals, they present similar failure modes to their lithium counterparts. In this work, we focus on one such failure mechanism: the thermodynamically-driven accumulation of vacancies on the surface of the metallic anode, which leads to the formation of voids and pits, detrimental to battery performance and cycle life. We investigate the differences in behavior between anode/coating interfaces of both lithium and sodium. Adhesion energy, a descriptor previously argued to be a reliable design principle for lithium metal anodes, is found to not exhibit the same predictive power for sodium metal architectures: in cases where vacancy congregation is not thermodynamically favorable for isolated sodium slabs, we find strong interfacial interactions to have adverse effects on void formation. By studying select coating materials, we also reveal that these material interactions at alkali/coating interfaces are highly nuanced, and that the field of surface science and engineering is ripe with opportunities for further discovery and tuning of surface properties via coating selection.
</description>
<pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163495</guid>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>Interface‐Induced Stability of Nontrivial Topological Spin Textures: Unveiling Room‐Temperature Hopfions and Skyrmions</title>
<link>https://hdl.handle.net/1721.1/163494</link>
<description>Interface‐Induced Stability of Nontrivial Topological Spin Textures: Unveiling Room‐Temperature Hopfions and Skyrmions
Katmis, Ferhat; Lauter, Valeria; Yagan, Rawana; Brandt, Iuri S; Cheghabouri, Arash M; Zhou, Hua; Freeland, John W; de Araujo, Clodoaldo IL; Jamer, Michelle E; Heiman, Don; Onbasli, Mehmet C; Moodera, Jagadeesh S
Topological spin configurations, such as soliton-like spin textures and Dirac electron assemblies, have recently emerged in fundamental science and technology. Achieving stable topological spin textures at room temperature is crucial for their use as long-range information carriers. However, their creation and manipulation are hindered by multi-step field training and competing interactions. A spontaneous ground state for multidimensional topological spin textures is therefore desirable, with skyrmions forming swirling, hedgehog-like spin structures in two dimensions and hopfions as their twisted 3D counterparts. Here, the first observation of robust and reproducible hopfion and skyrmion topological spin textures at room temperature and in zero magnetic field is reported; these textures are stabilized by geometric confinement and protected by interfacial magnetism in a ferromagnet/topological insulator/ferromagnet trilayer heterostructure. The skyrmion–hopfion configurations are directly observed at room temperature with Lorentz transmission electron microscopy. Using micromagnetic modeling, the experimental observations of hopfion–skyrmion assemblies are reproduced. This model reveals a complete picture of how spontaneously organized skyrmion lattices encircled by hopfion rings are controlled by surface electrons, uniaxial anisotropy, and the Dzyaloshinskii–Moriya interaction. This study provides evidence that topological chiral spin textures can facilitate the development of magnetic topological carriers, paving the way for ultralow-power and high-density information processing.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163494</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Genetic Surfaceome E. coli Reprogramming Enables Selective Water Oxidation</title>
<link>https://hdl.handle.net/1721.1/163493</link>
<description>Genetic Surfaceome E. coli Reprogramming Enables Selective Water Oxidation
Sedenho, Graziela C; Pacheco, Jéssica C; Gut, Melanie; Lima, Filipe CDA; Dey, Sunanda; Crespilho, Frank N; Furst, Ariel L
Programming catalytic behavior at the microbial genome level is a frontier in synthetic biology with direct impact on bioelectrocatalysis. A key challenge is the coordinated control of gene expression, localization, folding, and cofactor maturation required to achieve proper bioelectrocatalytic activity. Here, a synthetic operon in Escherichia coli is engineered to reprogram its surfaceome for selective water oxidation. Using orthogonal IPTG-inducible control and codon-optimized expression, a fungal bilirubin oxidase (BOD) displayed at the cell surface is produced by ice nucleation protein anchoring (BOD-E. coli). Post-overexpression copper catalytic site reconstitution provides an active holoenzyme. The developed engineered living material performs water oxidation at near-zero overpotential (27 mV at pH 9.1), with complete suppression of the oxygen reduction reaction. These results show how regenerable microbial platforms can be designed for selective catalysis and artificial photosynthesis.
</description>
<pubDate>Fri, 15 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163493</guid>
<dc:date>2025-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Surprises From the Basal Ganglia: Stop and Go Have New Meaning</title>
<link>https://hdl.handle.net/1721.1/163492</link>
<description>Surprises From the Basal Ganglia: Stop and Go Have New Meaning
Graybiel, Ann M
This perspective highlights new work suggesting the need for revision of the canonical direct–indirect model of the basal ganglia's influence on movement, with fresh evidence that there is a formerly unappreciated pair of direct and indirect pathways that parallel the standard model's canonical direct and indirect pathways, and promising evidence pointing toward improved clinical treatments for Parkinson's disease. As a working hypothesis, it is suggested that the non-canonical direct and indirect pathways, which arise in striosomes, might act as homeostatic circuits that can rein in or amplify the activity of the canonical pathways in the face of their imbalance, including that occurring in hyperkinetic or hypokinetic disorders.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163492</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Simulating the Potential for Invasive Grass Expansion to Alter Wildfire Behavior in Southern California With WRF‐Fire</title>
<link>https://hdl.handle.net/1721.1/163491</link>
<description>Simulating the Potential for Invasive Grass Expansion to Alter Wildfire Behavior in Southern California With WRF‐Fire
Wang, Bowen; Madakumbura, Gavin D; Juliano, Timothy W; Williams, A Park
Invasion by non-native annual grasses poses a serious threat to native vegetation in California, facilitated through interaction with wildfires. Our work is the first attempt to use the coupled fire–atmosphere model WRF-Fire to investigate how shifts from native, shrub-dominated vegetation to invasive grasses could have affected a known wildfire event in southern California. We simulate the Mountain Fire, which burned &gt;11,000 ha in July 2013, under idealized fuel conditions representing varying extents of grass invasion. Expanding grass to double its observed coverage causes fire to spread faster due to the lower fuel load in grasses and increased wind speed. Beyond this, further grass expansion reduces the simulated spread rate because lower heat release partially offsets the positive effects. Our simulations suggest that grass expansion may generally promote larger, faster-spreading wildfires in southern California, motivating continued efforts to contain and reduce the spread of invasive annual grasses in this region.
</description>
<pubDate>Wed, 13 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163491</guid>
<dc:date>2025-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Vaccine-boosted CAR T crosstalk with host immunity to reject tumors with antigen heterogeneity</title>
<link>https://hdl.handle.net/1721.1/163490</link>
<description>Vaccine-boosted CAR T crosstalk with host immunity to reject tumors with antigen heterogeneity
Ma, Leyuan; Hostetler, Alexander; Morgan, Duncan M; Maiorino, Laura; Sulkaj, Ina; Whittaker, Charles A; Neeser, Alexandra; Pires, Ivan Susin; Yousefpour, Parisa; Gregory, Justin; Qureshi, Kashif; Dye, Jonathan; Abraham, Wuhbet; Suh, Heikyung; Li, Na; Love, J Christopher; Irvine, Darrell J
Chimeric antigen receptor (CAR) T cell therapy effectively treats human cancer, but the loss of the antigen recognized by the CAR poses a major obstacle. We found that in vivo vaccine boosting of CAR T cells triggers the engagement of the endogenous immune system to circumvent antigen-negative tumor escape. Vaccine-boosted CAR T promoted dendritic cell (DC) recruitment to tumors, increased tumor antigen uptake by DCs, and elicited the priming of endogenous anti-tumor T cells. This process was accompanied by shifts in CAR T metabolism toward oxidative phosphorylation (OXPHOS) and was critically dependent on CAR-T-derived IFN-γ. Antigen spreading (AS) induced by vaccine-boosted CAR T enabled a proportion of complete responses even when the initial tumor was 50% CAR antigen negative, and heterogeneous tumor control was further enhanced by the genetic amplification of CAR T IFN-γ expression. Thus, CAR-T-cell-derived IFN-γ plays a critical role in promoting AS, and vaccine boosting provides a clinically translatable strategy to drive such responses against solid tumors.
</description>
<pubDate>Thu, 20 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163490</guid>
<dc:date>2023-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Early cellular and molecular signatures correlate with severity of West Nile virus infection</title>
<link>https://hdl.handle.net/1721.1/163489</link>
<description>Early cellular and molecular signatures correlate with severity of West Nile virus infection
Lee, Ho-Joon; Zhao, Yujiao; Fleming, Ira; Mehta, Sameet; Wang, Xiaomei; Wyk, Brent Vander; Ronca, Shannon E; Kang, Heather; Chou, Chih-Hung; Fatou, Benoit; Smolen, Kinga K; Levy, Ofer; Clish, Clary B; Xavier, Ramnik J; Steen, Hanno; Hafler, David A; Love, J Christopher; Shalek, Alex K; Guan, Leying; Murray, Kristy O; Kleinstein, Steven H; Montgomery, Ruth R
Infection with West Nile virus (WNV) drives a wide range of responses, from asymptomatic to flu-like symptoms/fever or severe cases of encephalitis and death. To identify cellular and molecular signatures distinguishing WNV severity, we employed systems profiling of peripheral blood from asymptomatic and severely ill individuals infected with WNV. We interrogated immune responses longitudinally from acute infection through convalescence employing single-cell protein and transcriptional profiling complemented with matched serum proteomics and metabolomics as well as multi-omics analysis. At the acute time point, we detected both elevation of pro-inflammatory markers in innate immune cell types and reduction of regulatory T cell activity in participants with severe infection, whereas asymptomatic donors had higher expression of genes associated with anti-inflammatory CD16&lt;sup&gt;+&lt;/sup&gt; monocytes. Therefore, we demonstrated the potential of systems immunology using multiple cell-type and cell-state-specific analyses to identify correlates of infection severity and host cellular activity contributing to an effective anti-viral response.
</description>
<pubDate>Fri, 15 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163489</guid>
<dc:date>2023-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>Full-length single-cell BCR sequencing paired with RNA sequencing reveals convergent responses to pneumococcal vaccination</title>
<link>https://hdl.handle.net/1721.1/163488</link>
<description>Full-length single-cell BCR sequencing paired with RNA sequencing reveals convergent responses to pneumococcal vaccination
Morgan, Duncan M; Zhang, Yiming J; Kim, Jin-Hwan; Murillo, MaryAnn; Singh, Suddham; Loschko, Jakob; Surendran, Naveen; Sekulovic, Ognjen; Feng, Ellie; Shi, Shuting; Irvine, Darrell J; Patil, Sarita U; Kanevsky, Isis; Chorro, Laurent; Christopher Love, J
Single-cell RNA sequencing (scRNA-seq) can resolve transcriptional features from individual cells, but scRNA-seq techniques capable of resolving the variable regions of B cell receptors (BCRs) remain limited, especially from widely-used 3′-barcoded libraries. Here, we report a method that can recover paired, full-length variable region sequences of BCRs from 3′-barcoded scRNA-seq libraries. We first verify this method (B3E-seq) can produce accurate, full-length BCR sequences. We then apply this method to profile B cell responses elicited against the capsular polysaccharide of Streptococcus pneumoniae serotype 3 (ST3) by glycoconjugate vaccines in five infant rhesus macaques. We identify BCR features associated with specificity for the ST3 antigen which are present in multiple vaccinated monkeys, indicating a convergent response to vaccination. These results demonstrate the utility of our method to resolve key features of the B cell repertoire and profile antigen-specific responses elicited by vaccination.
</description>
<pubDate>Sat, 28 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163488</guid>
<dc:date>2024-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>Rapidity and multiplicity dependence of charged-particle flow in pPb collisions at √s_NN = 8.16 TeV</title>
<link>https://hdl.handle.net/1721.1/163487</link>
<description>Rapidity and multiplicity dependence of charged-particle flow in pPb collisions at √s_NN = 8.16 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; LHCB Collaboration
The elliptic and triangular flow of charged particles are measured using two-particle angular correlations in pPb collisions in the pseudorapidity range 2.0 &lt; |η| &lt; 4.8. The data sample was collected by the LHCb experiment in 2016 at a centre-of-mass energy per nucleon pair of √s_NN = 8.16 TeV, containing in total approximately 1.5 billion collision events. Non-flow contributions are obtained in low-multiplicity collisions and subtracted to extract the flow harmonics. The results are presented as a function of event multiplicity and hadron transverse momentum. Comparisons with a full (3+1)D dynamic model indicate that it overestimates the measured elliptic flow. A comparison between the forward and backward regions reveals no significant differences in flow parameters, suggesting that final-state effects may dominate over initial-state effects in the origin of flow in small systems.
</description>
<pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163487</guid>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Truth and perspective</title>
<link>https://hdl.handle.net/1721.1/163486</link>
<description>Truth and perspective
Ricciardi, Giuseppe; Reuter, Kevin
Several studies in experimental philosophy and semantics have shown that a substantial number of English speakers consider a statement true even if it does not align with the facts, as long as it is justified from the speaker's perspective. These findings challenge the prevailing view among philosophers that truth in the empirical domain is uniformly based on a statement's correspondence to reality. In this study, we explore how perspective-taking influences truth assessments by showing that this influence depends on how the critical question assessing the statement's truth is phrased. Our results show that when the question targets only the proposition (e.g., “Is it true that [the uttered proposition]?”), participants typically apply a correspondence view of truth—consistent with philosophical convention. But when the question also highlights the speaker (e.g., “Is [the speaker]'s answer true?”), many participants shift toward judging the statement from the speaker's perspective. We discuss four possible explanations for this behavior and examine the implications of the findings for other philosophical discussions concerning truth and lying, the theory of reference, and norms of assertion.
</description>
<pubDate>Thu, 23 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163486</guid>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</item>
<item>
<title>Curtain Model for CAT(0) Spaces and Isometries</title>
<link>https://hdl.handle.net/1721.1/163485</link>
<description>Curtain Model for CAT(0) Spaces and Isometries
Chen, Yutong
This paper studies the dynamics of isometries in the curtain model, which is used to capture the hyperbolicity in a fixed CAT(0) space. We establish several fundamental properties and fully classify the behavior of semisimple isometries of a CAT(0) space in the associated curtain model. In the nonsemisimple case, we restrict the behavior of parabolic actions with positive translation length in the curtain model in most cases of interest, allowing the use of ping-pong-like techniques on the curtain model to provide insights into the study of CAT(0) groups.
</description>
<pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163485</guid>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>An Algorithm for Estimating the Crossing Number of Dense Graphs, and Continuous Analogs of the Crossing and Rectilinear Crossing Numbers</title>
<link>https://hdl.handle.net/1721.1/163484</link>
<description>An Algorithm for Estimating the Crossing Number of Dense Graphs, and Continuous Analogs of the Crossing and Rectilinear Crossing Numbers
Solé-Pi, Oriol
We present a deterministic n^(2+o(1))-time algorithm that approximates the crossing number of any graph G of order n up to an additive error of o(n^4). We also provide a randomized polynomial-time algorithm that constructs a drawing of G with cr(G) + o(n^4) crossings. These results yield a 1 + o(1) approximation algorithm for the crossing number of dense graphs. Our work complements a paper of Fox, Pach and Suk [20], who obtained similar results for the rectilinear crossing number. The results in [20] and in this paper imply that the (normalized) crossing and rectilinear crossing numbers are estimable parameters. Motivated by this, we introduce two graphon parameters, the crossing density and the rectilinear crossing density, and we prove that, in a precise sense, these are the correct continuous analogs of the crossing and rectilinear crossing numbers of graphs.
</description>
<pubDate>Tue, 21 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163484</guid>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Improved measurement of η/η′ mixing in B_s^0 → J/ψη′ decays</title>
<link>https://hdl.handle.net/1721.1/163483</link>
<description>Improved measurement of η/η′ mixing in B_s^0 → J/ψη′ decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Aleksiejunas, R.
Branching fraction ratios between the decays B_(s)^0 → J/ψη′ and B_(s)^0 → J/ψη are measured using proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The measured ratios of these branching fractions are B(B^0 → J/ψη′)/B(B^0 → J/ψη) = 0.48 ± 0.06 ± 0.02 ± 0.01 and B(B_s^0 → J/ψη′)/B(B_s^0 → J/ψη) = 0.80 ± 0.02 ± 0.02 ± 0.01, where the uncertainties are statistical, systematic and related to the precision of the η(′) branching fractions, respectively. They are used to constrain the η/η′ mixing angle, ϕP, and to probe the presence of a possible glueball component in the η′ meson, described by the gluonic mixing angle ϕG. The obtained results are ϕP = (41.6 +1.0 −1.2)° and ϕG = (28.1 +3.9 −4.0)°, where the uncertainties are statistically dominated. While the value of ϕP is compatible with existing experimental determinations and theoretical calculations, the angle ϕG differs from zero by more than four standard deviations, which points to a substantial glueball component in the η′ meson and/or unexpectedly large contributions from gluon-mediated processes in these decays. The absolute branching fractions are also measured relative to that of the well-established B_s^0 → J/ψϕ decay, which serves as the normalisation channel. These results supersede the previous LHCb measurements and are the most precise to date.
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163483</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Zero carbon challenges in supply chain management to achieve sustainability</title>
<link>https://hdl.handle.net/1721.1/163482</link>
<description>Zero carbon challenges in supply chain management to achieve sustainability
Derse, O.; Yontar, E.
Reducing carbon emissions in response to growing climate concerns has become important at every stage of the supply chain, as in every sector. Many activities take place in supply chain processes, and serious work is required for these activities to align with a net-zero-carbon strategy. This paper addresses the challenges preventing supply chains from achieving their net-zero-carbon targets. The challenges are categorized as environmental; financial and economic; organizational; social and consumer; technical and technological; and administrative. Within these 6 main categories, 24 sub-challenges are identified, and their network structure, relations and rankings are established with the Analytical Network Process (ANP), one of the Multi-Criteria Decision Making methods. The risks of the identified challenges are also ranked with an ANP-based Failure Mode and Effect Analysis (FMEA). According to the ANP and the ANP-based FMEA, the riskiest and most important challenge categories are, respectively, Financial and Economic challenges and Technical and Technological challenges. According to the ANP, the most important individual challenges are, respectively, “Lack of technical competence and field experts”, “Lack of resources”, and “High initial investment cost”. According to the ANP-based FMEA, the most important challenges are “Lack of resources”, “Lack of technical competence and field experts” and “Uncertain long-term economic return/payback periods and investment risks”, respectively. The relationships and rankings identified are intended to serve as a roadmap for reaching net-zero-carbon targets in supply chains.
</description>
<pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163482</guid>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>Robust longitudinal and lateral control for mixed-vehicular platoons with string stability guarantees</title>
<link>https://hdl.handle.net/1721.1/163481</link>
<description>Robust longitudinal and lateral control for mixed-vehicular platoons with string stability guarantees
Chen, Qien; Wang, Shimin; Gao, Bolin; Zhan, Zhi; Zhong, Renxin
Integrating longitudinal and lateral controls for vehicular platoons mixed with Connected and Autonomous Vehicles (CAVs) and Level-2 Automated Vehicles (L2AVs) to guarantee string stability against model uncertainty and external disturbances is essential yet challenging. This paper tackles this challenge by introducing a novel integrated longitudinal and lateral control (ILLC) strategy that guarantees input-to-state string stability (ISSS) for heterogeneous vehicular platoons. The proposed ILLC strategy significantly enhances the robustness of vehicular platoons by maintaining the desired headway and ensuring the ISSS against disturbances. By incorporating a disturbance observer, we directly address the disturbance estimation error within the string stability analysis. We validate the effectiveness of our method through simulations of various traffic scenarios. Compared to conventional cooperative adaptive cruise control (CACC) techniques, the proposed method achieves faster convergence to the desired states and exhibits bounded state fluctuations. Furthermore, our method can effectively attenuate external disturbances and dissipate stop-and-go waves.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163481</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Finite Rank Perturbation of Non-Hermitian Random Matrices: Heavy Tail and Sparse Regimes</title>
<link>https://hdl.handle.net/1721.1/163480</link>
<description>Finite Rank Perturbation of Non-Hermitian Random Matrices: Heavy Tail and Sparse Regimes
Han, Yi
In this work we investigate spectral properties of square random matrices with independent entries that have only two finite moments. We revisit the problem of perturbing a large i.i.d. random matrix by a finite-rank error. We prove that, under a merely second-moment condition, for a large class of perturbation matrices with bounded rank and bounded operator norm, the outlier eigenvalues of the perturbed matrix still converge to those of the perturbation; this was previously known only when the matrix entries have a finite fourth moment. We then show that the same perturbation result holds for very sparse random matrices with i.i.d. entries, all the way down to a constant number of nonzero entries per row and column.
</description>
<pubDate>Mon, 29 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163480</guid>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</item>
<item>
<title>Arene extrusion as an approach to reductive elimination at boron: implication of carbene-ligated haloborylene as a transient reactive intermediate</title>
<link>https://hdl.handle.net/1721.1/163479</link>
<description>Arene extrusion as an approach to reductive elimination at boron: implication of carbene-ligated haloborylene as a transient reactive intermediate
Zhang, Chonghe; Gilliard, Robert J; Cummins, Christopher C
Herein, we report boron-centered arene extrusion reactions to afford putative cyclic(alkyl)(amino) carbene (CAAC)-ligated chloroborylene and bromoborylene intermediates. The borylene precursors, chloro-boranorbornadiene (ClB(C6Me6), 2Cl) and bromo-boranorbornadiene (BrB(C6Me6), 2Br) were synthesized through the reaction of the corresponding 1-halo-2,3,4,5-tetramethylborole dimer (XBC4Me4)2 (X = Cl, 1Cl; X = Br, 1Br) with 2-butyne. Treatment of 2Cl with CAACs resulted in the release of di-coordinate chloro-borylene (CAAC)BCl from hexamethylbenzene (C6Me6) at room temperature. In contrast, the reaction of 2Br with CAAC led to the formation of a boronium species [(CAAC)BC6Me6]+Br− (7) at room temperature. Heating 7 in toluene promoted the release of di-coordinate bromo-borylene (CAAC)BBr as a transient species. Surprisingly, heating 7 in dichloromethane resulted in the C–H activation of hexamethylbenzene. The conversion of a CAAC-stabilized bromo-borepin to a borylene, a boron-centered retro Büchner reaction, was also investigated.
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163479</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Clustering in typical unit-distance avoiding sets</title>
<link>https://hdl.handle.net/1721.1/163478</link>
<description>Clustering in typical unit-distance avoiding sets
Cohen, A.; Mani, N.
In the 1960s, Moser asked how dense a subset of R^d can be if no pair of points in the subset is exactly at distance 1. There has been a long line of work showing upper bounds on this density. One curious feature of dense unit-distance avoiding sets is that they appear to be “clumpy,” i.e., forbidding unit distances comes hand in hand with having more than the expected number of pairs at distance ≈ 2. In this work we rigorously establish this phenomenon in R^2. We show that dense unit-distance avoiding sets have over-represented pairs at distance ≈ 2, and that this clustering extends to typical unit-distance avoiding sets. To do so, we build on the linear programming approach used previously to prove upper bounds on the density of unit-distance avoiding sets.
</description>
<pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163478</guid>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</item>
<item>
<title>Structure of Lower Tails in Sparse Random Graphs</title>
<link>https://hdl.handle.net/1721.1/163477</link>
<description>Structure of Lower Tails in Sparse Random Graphs
Chin, Byron
We study the typical structure of a sparse Erdős–Rényi random graph conditioned on the lower tail subgraph count event. We show that in certain regimes, a typical graph sampled from the conditional distribution resembles the entropy minimizer of the mean field approximation in the sense of both subgraph counts and cut norm. The main ingredients are an adaptation of an entropy increment scheme of Kozma and Samotij, and a new stability for the solution of the associated entropy variational problem. The proof can be interpreted as a structural application of the new probabilistic hypergraph container lemma for sparser than average sets, and suggests a more general framework for establishing such typical behavior statements.
</description>
<pubDate>Mon, 11 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163477</guid>
<dc:date>2025-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>t-channel dark matter models – a whitepaper</title>
<link>https://hdl.handle.net/1721.1/163476</link>
<description>t-channel dark matter models – a whitepaper
Arina, Chiara; Fuks, Benjamin; Panizzi, Luca; Baker, Michael J.; Cornell, Alan S.; Heisig, Jan; Maier, Benedikt; Pedro, Rute; Trischuk, Dominique; Agin, Diyar; Arbey, Alexandre; Arcadi, Giorgio; Bagnaschi, Emanuele; Bai, Kehang; Bhatia, Disha; Becker, Mathias; Belyaev, Alexander; Benoit, Ferdinand; Blanke, Monika; Burzynski, Jackson
This report, summarising work achieved in the context of the LHC Dark Matter Working Group, investigates the phenomenology of t-channel dark matter models, spanning minimal setups with a single dark matter candidate and mediator to more complex constructions closer to UV-complete models. For each considered class of models, we examine collider, cosmological and astrophysical implications. In addition, we explore scenarios with either promptly decaying or long-lived particles, as well as featuring diverse dark matter production mechanisms in the early universe. By providing a unified analysis framework, numerical tools and guidelines, this work aims to support future experimental and theoretical efforts in exploring t-channel dark matter models at colliders and in cosmology.
</description>
<pubDate>Fri, 12 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163476</guid>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Organic aerosol formation from 222 nm germicidal light: ozone-initiated vs. non-ozone pathways</title>
<link>https://hdl.handle.net/1721.1/163475</link>
<description>Organic aerosol formation from 222 nm germicidal light: ozone-initiated vs. non-ozone pathways
Goss, Matthew B; Kroll, Jesse H
Germicidal ultraviolet lamps outputting 222 nm light (GUV222) have the potential to reduce the airborne spread of disease through effective inactivation of pathogens, while remaining safe for direct human exposure. However, recent studies have identified these lamps as a source of ozone and other secondary pollutants such as secondary organic aerosol (SOA), and the health effects of these pollutants must be balanced against the benefits of pathogen inactivation. While ozone reactions are likely to account for much of this secondary indoor air pollution, 222 nm light may initiate additional non-ozone chemical processes, including the formation of other oxidants and direct photolytic reactions, which are not as well understood. This work examines the impacts of GUV222 on SOA formation and composition by comparing limonene oxidation under GUV222 and O3-only control conditions in a laboratory chamber. Differences between these experiments enable us to distinguish patterns in aerosol formation driven by ozone chemistry from those driven by other photolytic processes. These experiments also examine the influence of the addition of NO2 and nitrous acid (HONO), and investigate SOA formation in sampled outdoor air. SOA composition and yield vary only slightly between GUV222 and ozone-only conditions; NO2 and HONO photolysis do not appreciably affect the observed chemistry. In contrast, we observe consistent new particle formation under high-fluence 222 nm light (45 μW cm−2) that differs substantially from ozone-only experiments. This observed new particle formation represents an additional reason to keep GUV222 fluence rates at the lowest effective levels.
</description>
<pubDate>Thu, 17 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163475</guid>
<dc:date>2024-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Electrons for Electrochemical CO2 Capture Using a Hemi-Labile Iron Complex</title>
<link>https://hdl.handle.net/1721.1/163474</link>
<description>Leveraging Electrons for Electrochemical CO2 Capture Using a Hemi-Labile Iron Complex
Seo, Hyowon; Chen, Ying; Walter, Eric; Abdinejad, Maryam; Hatton, T Alan
Climate change, driven by anthropogenic carbon emissions, demands urgent action to prevent a 2050 tipping point. With CO2 levels at 427 ppm (50% above pre-industrial levels), deploying energy-efficient carbon capture technologies is crucial. Electrochemical carbon capture processes that have been touted to have the potential to meet these needs rely on the applied cell voltage, and electron utilization (CO2 molecules separated per electron), which has generally been asserted to have a theoretical limit of one. Here, we introduce an electron-leveraging strategy to enhance electron utilization beyond this limit to 1.43 by employing Fe-EDDHA, a redox-active coordination complex having a ligand with multiple hemi-labile coordination sites. The reversibility and robustness of the system were enabled by the efficient prevention of CO2 reduction upon the introduction of nicotinamide as a guardian of the iron(2+) center. The proof-of-concept cyclic system exhibits a minimum operational energy of 22.6 kJe mol−1 and an average of 63.7 kJe mol−1 over 29 cycles, using a simulated flue gas (15% CO2). Our electron-leveraging strategy holds promise for advancing energy-efficient electrochemical carbon capture technologies, and offers an alternative to prevalent redox potential shifting methods proposed to mitigate undesired electron transfer reactions in redox-active materials across diverse operational conditions.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163474</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>A Programmable Nanovaccine Platform Based on M13 Bacteriophage for Personalized Cancer Vaccine and Therapy</title>
<link>https://hdl.handle.net/1721.1/163473</link>
<description>A Programmable Nanovaccine Platform Based on M13 Bacteriophage for Personalized Cancer Vaccine and Therapy
Huang, Shengnan; He, Yanpu; Madow, Allison; Peng, Huaiyao; Griffin, Mirielle; Qi, Jifa; Huang, Mantao; Amoroso, Heather; Abrashoff, Riley; Heldman, Nimrod; Belcher, Angela M
Nanovaccines co-assemble antigens and adjuvants to elicit robust immune responses but often require complex synthesis and post-modification procedures. Here, a programmable nanovaccine platform based on the M13 bacteriophage is developed for the scalable production of vaccines and single-step modular engineering of adjuvanticity, length, and antigen density. By reprogramming the sequence and size of the noncoding phage genome, the Toll-like receptor 9 activation and the length of the phage are precisely controlled. With a novel molecular engineering approach, the antigen density is tuned from 13.6% to 70.3%. A systematic modulation reveals an optimal adjuvanticity at a constant antigen density for maximum anti-tumor CD8+ T cell response, and vice versa, using the model antigen SIINFEKL. The M13 phage-based nanovaccine induces durable memory immunity lasting over a year. In addition, a 24-fold increase in neoantigen-specific CD8+ T cell frequency is achieved when increasing both the adjuvanticity and antigen density. Furthermore, when combined with anti-PD-1 therapy, the M13 phage-based personalized vaccine eradicates established MC-38 tumors in 75% of treated animals and they develop 100% resistance against tumor invasion when challenged 5 months after treatment. These findings establish M13 phage as a powerful and versatile nanovaccine platform with transformative potential for personalized cancer immunotherapy.
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163473</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Approximate Unitary Designs from Random Pauli Rotations</title>
<link>https://hdl.handle.net/1721.1/163472</link>
<description>Efficient Approximate Unitary Designs from Random Pauli Rotations
Haah, Jeongwan; Liu, Yunchao; Tan, Xinyu
We construct random walks on simple Lie groups that quickly converge to the Haar measure for all moments up to order t. Specifically, a step of the walk on the unitary or orthogonal group of dimension 2^n is a random Pauli rotation e^{iθP/2}. The spectral gap of this random walk is shown to be Ω(1/t), which coincides with the best previously known bound for a random walk on the permutation group on {0,1}^n. This implies that the walk gives an ε-approximate unitary t-design in depth O(nt^2 + t log(1/ε)) · d, where d = O(log n) is the circuit depth to implement e^{iθP/2}. Our simple proof uses quadratic Casimir operators of Lie algebras.
</description>
<pubDate>Thu, 30 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163472</guid>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>How FDI reshapes host markets’ trade profile and politics</title>
<link>https://hdl.handle.net/1721.1/163471</link>
<description>How FDI reshapes host markets’ trade profile and politics
Kim, In Song; Liao, Steven; Miyano, Sayumi
A fast-growing literature indicates that firms’ engagement in foreign direct investment (FDI) and trade is key to understanding deepening global value chains and their political implications. However, existing studies have mainly focused on the ramifications for FDI home countries while often overlooking the firm-product level interactions between FDI and trade, where their interdependencies manifest. This study examines how firms’ FDI reshapes host countries’ trade profiles at this level, empowering new political coalitions for trade liberalization. Analyzing greenfield FDI projects globally since 2003, we find that hosts experienced an average increase of over 45 export products in the following year. To overcome the challenges of connecting firms to products, we link FDI data with Vietnamese customs records. We find that Vietnamese export (import) volumes of FDI-related products increased by 90% (30%) within 4 years of initial investments. Importantly, these products also benefited from more substantial tariff cuts in bilateral Free Trade Agreements.
</description>
<pubDate>Fri, 12 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163471</guid>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating metabolic scaling and coexistence theories</title>
<link>https://hdl.handle.net/1721.1/163470</link>
<description>Integrating metabolic scaling and coexistence theories
Saavedra, Serguei; Arroyo, José Ignacio; Deng, Jie; Marquet, Pablo A; Kempes, Christopher P
Metabolic scaling theory has been pivotal in formalizing the expected energy expenditures across populations as a function of body size. Coexistence theory has provided a mathematization of the environmental conditions compatible with multispecies coexistence. Yet, it has been challenging to explain how observed community-wide patterns, such as the inverse relationship between population abundance density and body size, can be unified under both theories. Here, we provide the foundation for a tractable, scalable, and extendable framework to study the coexistence of resource-mediated competing populations as a function of their body size. For a given thermal domain and response, this integration reveals that the metabolically predicted 1/4 power dependence of carrying capacity of biomass density on body size can be understood as the average distribution of carrying capacities across feasible environmental conditions, especially for large communities. In line with empirical observations, our integration predicts that such average distribution leads to communities in which population biomass densities at equilibrium are independent from body size, and consequently, population abundance densities are inversely related to body size. This integration opens new opportunities to increase our understanding of how metabolic scaling relationships at the population level can shape processes at the community level under changing environments.
</description>
<pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163470</guid>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Covariant phase space and L∞ algebras</title>
<link>https://hdl.handle.net/1721.1/163469</link>
<description>Covariant phase space and L∞ algebras
Bernardes, Vinícius; Erler, Theodore; Fırat, Atakan H.
We propose a symplectic structure for the phase space of a generic Lagrangian field theory expressed in the framework of L∞ algebras. The symplectic structure does not require explicit knowledge of the derivative content of the Lagrangian, and therefore is applicable to nonlocal models, such as string field theory, where traditional constructions are difficult to apply. We test our proposal in a number of examples ranging from general relativity to p-adic string theory.
</description>
<pubDate>Fri, 05 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163469</guid>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Deciphering the origins of the elements through galactic archeology</title>
<link>https://hdl.handle.net/1721.1/163468</link>
<description>Deciphering the origins of the elements through galactic archeology
Farouqi, Khalil; Frebel, Anna; Thielemann, Friedrich-Karl
Low-metallicity stars preserve the signatures of the first stellar nucleosynthesis events in the Galaxy, as their surface abundances reflect the composition of the interstellar medium from the time when they were born. Aside from primordial Big Bang nucleosynthesis, massive stars, due to their short lifetimes, dominate the wind and explosive ejecta into the interstellar medium of the early Galaxy. Most of them will end as core-collapse supernova (CCSN) explosions, and typical ejected abundance distributions, e.g. in terms of the α-element-to-Fe ratios, reflect these contributions. Essentially all CCSNe contribute 56Fe (decaying from radioactive 56Ni). Therefore, low-metallicity stars can be used to test whether the abundances of any other elements are correlated with those of Fe, i.e. whether these elements have been co-produced in the progenitor sources or if they require either a different or additional astrophysical origin(s). The present analysis focuses on stars with [Fe/H]&lt;-2, as they probe the earliest formation phase of the Galaxy when only one or very few nucleosynthesis events had contributed their ejecta to the gas from which the lowest metallicity stars form. This was also the era before low- and intermediate-mass stars (or type Ia supernovae) could contribute any additional heavy elements. Following earlier work on the origin of heavy r-process elements [1], we extend the present study to examine Pearson and Spearman correlations of Fe with Li, Be, C, N, O, Na, Mg, Si, S, K, Ca, Ti, Cr, Ni, Zn, Ge, Se, Sr, Y, Zr, Mo, Ba, La, Ce, Sm, Eu, Gd, Dy, Yb, Lu, Hf, Os, Ir, Pb, and Th, using high-resolution stellar abundance data from the SAGA [2] and JINA [3] databases. The main goal is to identify which of the observed elements (i) may have been co-produced with Fe in (possibly a variety of) CCSNe, and which elements require (ii) either a completely different, or (iii) at least an additional astrophysical origin.
</description>
<pubDate>Fri, 12 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163468</guid>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Absolute Security with Multiple-Slit Diffraction in Terahertz Communication Links</title>
<link>https://hdl.handle.net/1721.1/163467</link>
<description>Absolute Security with Multiple-Slit Diffraction in Terahertz Communication Links
Shiri, Yaseman; Yeh, Chia-Yi; Fang, Zhaoji; Shrestha, Rabi; Guerboukha, Hichem; Médard, Muriel; Malowicki, John; Overrocker, David; Fanelli, Paul; Thawdar, Ngwe; Mittleman, Daniel M.
Many widely used antennas in terahertz (THz) directional communications (including horn antennas) are not fully compatible with the recently proposed absolute security approach due to the absence of strong frequency-dependent minima in the intrinsic antenna pattern. To this end, we propose to use a multiple-slit aperture to modify these non-suitable radiation patterns in a non-intrusive manner. Based on the principle of diffraction, the multi-slit aperture creates frequency varying minima critical for absolute security. We show that improved security performance, quantified by the size of the secure region in space (termed blind region), can be achieved by employing a wider diffraction aperture with a wider slit opening. We further characterize how the non-uniform wavefront, which is typical in practical transmission and results in varying amplitude and phase at different slit openings, affects the size of the blind region. This diffraction-based scheme is experimentally demonstrated with a horn antenna operating near 200 GHz. We demonstrate that, while the intrinsic horn antenna yields no blind region for angles within 16° from the intended user, the modified antenna configuration produces strong minima sufficient to create blind regions at angles as small as 4° and an expanding blind region with increasing transmission bandwidth, thus validating the security gain with this approach.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163467</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>A gravity-based mounting approach for large-scale cryogenic calorimeter arrays</title>
<link>https://hdl.handle.net/1721.1/163466</link>
<description>A gravity-based mounting approach for large-scale cryogenic calorimeter arrays
CUPID Collaboration
Cryogenic calorimeters are among the leading technologies for searching for rare events. The CUPID experiment is exploiting this technology to deploy a tonne-scale detector to search for neutrinoless double-beta decay of 100Mo. The CUPID collaboration proposed an innovative approach to assembling cryogenic calorimeters in a stacked configuration, held in position solely by gravity. This gravity-based assembly method is unprecedented in the field of cryogenic calorimeters and offers several advantages, including relaxed mechanical tolerances and simplified construction. To assess and optimize its performance, we constructed a medium-scale prototype hosting 28 Li2MoO4 crystals and 30 Ge light detectors, both operated as cryogenic calorimeters at the Laboratori Nazionali del Gran Sasso (Italy). Despite an unexpected excess of noise in the light detectors, the results of this test proved (i) a thermal stability better than ±0.5 mK at 10 mK, (ii) a good energy resolution of Li2MoO4 cryogenic calorimeters, (6.6 ± 2.2) keV FWHM at 2615 keV, and (iii) a Li2MoO4 light yield measured by the closest light detector of 0.36 keV/MeV, sufficient to guarantee the particle identification requested by CUPID.
</description>
<pubDate>Tue, 02 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163466</guid>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>A Higher Spin-Statistics Theorem for Invertible Quantum Field Theories</title>
<link>https://hdl.handle.net/1721.1/163465</link>
<description>A Higher Spin-Statistics Theorem for Invertible Quantum Field Theories
Krulewski, Cameron; Stehouwer, Luuk; Müller, Lukas
We prove that every unitary invertible quantum field theory satisfies a generalization of the famous spin-statistics theorem. To formulate this extension, we define a higher spin action of the stable orthogonal group O on appropriate spacetime manifolds, which extends both the reflection involution and spin flip. On the algebraic side, we define a higher statistics action of O on the universal target for invertible field theories, IZ, which extends both complex conjugation and fermion parity (−1)^F. We prove that every unitary invertible quantum field theory intertwines these actions.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163465</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computational metrology for materials</title>
<link>https://hdl.handle.net/1721.1/163464</link>
<description>Computational metrology for materials
Warren, James; Read, Jake; Seppala, Jonathan; Strand, Erik; Gershenfeld, Neil
Advanced materials hold great promise, but their adoption is impeded by the challenges of developing, characterizing, and modeling them, then of designing, processing, and producing something with them. Even if the results are open, the means to do each of these steps are typically proprietary and segregated. We show how principles of open-source software and hardware can be used to develop open instrumentation for materials science, so that a measurement can be accompanied by a complete computational description of how to reproduce it. And then we show how this approach can be extended to effectively measure predictive computational models rather than just model parameters. We refer to these interrelated concepts as “computational metrology.” These are illustrated with examples including a 3D printer that can do rheological characterization of unfamiliar and variable materials.
</description>
<pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163464</guid>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Science Robustness in Uncertain Environments: Application to a Uranus Flagship Mission</title>
<link>https://hdl.handle.net/1721.1/163462</link>
<description>Assessing Science Robustness in Uncertain Environments: Application to a Uranus Flagship Mission
Gentgen, Chloe; Landau, Damon; Weiss, Benjamin P.; Jasinski, Jamie M.; De Weck, Olivier
Defining science objectives for missions to unexplored bodies can be difficult when the underlying processes and mechanisms are not well understood. This uncertainty presents a challenge when attempting to determine mission requirements to address these objectives. Additionally, uncertainties in the environment may present risks to the system and mission operations. To this end, uncertainty quantification is increasingly used to inform and validate mission design. However, a framework has yet to be developed to support trajectory tradespace exploration of missions targeting uncertain environments through science modeling. The proposed methodology develops a science systems engineering framework integrating a science representation with trajectory designs to compute quantitative science value metrics. The science model is established by identifying relevant physical models (such as governing equations and assumptions) and input variables from the literature, simulation data, as well as past mission results. Variables are defined with probability distributions, and Monte Carlo simulations are used to quantify the uncertainties. For a given trajectory, the analysis outputs predictive probability distributions of the science value metrics, highlighting the trajectory's science performance and its robustness to uncertainty in the physical processes. The framework is applicable to any mission targeting highly dynamic and uncertain processes. This paper demonstrates its application to a future Uranus Flagship mission, focusing on magnetosphere science objectives. Listed as the highest priority Flagship mission by the latest Decadal Survey, a mission to the Uranian system aims to answer science questions regarding Uranus's interior and atmosphere, its satellites and rings, and its magnetosphere. 
Analytic and numerical models have been developed to understand Uranus' magnetosphere; however, significant uncertainties remain, leading to challenges when defining magnetosphere science investigations. By applying the proposed methodology, this paper shows a significant variation in predicted science metrics of interest (e.g., number of magnetopause crossings) that can be expected from similar trajectories due to varying environment conditions (solar wind and interplanetary magnetic field) or different arrival times at Uranus. These results should inform the flow-down of measurement requirements to mission design requirements for magnetosphere science.
2025 IEEE Aerospace Conference, 1-8 March, Big Sky, MT, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163462</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Magnetotelluric Study of Mantle Heterogeneities Beneath the Northeastern United States</title>
<link>https://hdl.handle.net/1721.1/163408</link>
<description>A Magnetotelluric Study of Mantle Heterogeneities Beneath the Northeastern United States
Kim, Jae Deok; Evans, Rob. L.
Analysis of magnetotelluric (MT) data across the northern Appalachian region reveals significant mantle heterogeneity. By inverting a subset of long-period EarthScope USArray MT data, we constructed a three-dimensional electrical resistivity model that provides new insights into the seismic low-velocity Northern Appalachian Anomaly (NAA). Comparison with empirical conductivity models indicates that the low-resistivity anomalies along the northern and western edges of the NAA cannot be explained by temperature alone and likely require the presence of volatiles, such as CO2-rich or hydrous melts, or other volatile-bearing phases, to reduce mantle resistivity to the observed levels. In addition, our modeling suggests that certain alternative lithologies, particularly hydrous clinopyroxenites, may also contribute to the observed conductivity, implying that compositional heterogeneity plays a role alongside fluids or melt. These conductive features may reflect partial melting or metasomatic enrichment of carbonated and hydrated mantle domains introduced during past subduction or plume interactions, potentially mobilized by edge-driven convection at lithospheric boundaries. We also resolve a deep resistive feature in western New England, interpreted as a dry and depleted lithospheric block, though its nature remains uncertain due to limited seismic expression and the relatively low sensitivity of MT to resistive structures. Our results suggest that the upper mantle beneath New England is both compositionally and thermally heterogeneous, shaped by a complex tectonic history involving subduction, metasomatism, lithospheric thinning, and ongoing asthenospheric processes.
</description>
<pubDate>Sat, 25 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163408</guid>
<dc:date>2025-10-25T00:00:00Z</dc:date>
</item>
<item>
<title>“Reason” En Masse</title>
<link>https://hdl.handle.net/1721.1/163407</link>
<description>“Reason” En Masse
Watkins, Eliot
We can use “reason,” with its normative sense, as both a count noun (“there is a reason for her to Φ”) and a mass noun (“there is plenty of reason for her to Φ”). How are the count and mass senses of “reason” related? Daniel Fogal argues that the mass sense is fundamental: Just as lights are merely those things that give light and anxieties are merely those things that give anxiety, reasons are merely those things that give reason. In this article, I develop an opposing analysis of the mass noun “reason” that puts reasons first. Just as the detail on the Mona Lisa is composed of particular details (brushstrokes and colors) and the crime in L.A. is composed of particular crimes (pickpocketings and speeding offenses), so the reason for you to go to the dentist is composed of your reasons to go. Reasons stand to reason as parts to a whole. Such a picture makes reasons fundamental once more, but it has a cost of entry. In order to accommodate the behavior of “reason” in comparative constructions, you need to abandon the idea that reasons are facts we can count up. On the contrary: They're not facts, and you can't count them.
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163407</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Battle in the Clouds</title>
<link>https://hdl.handle.net/1721.1/163406</link>
<description>Battle in the Clouds
Moran‐Thomas, Amy
This narrative experiment brings together scenes from my family histories in western Pennsylvania coal country, alongside ongoing visits to learn about rising health issues in the region today. Increasing numbers of residents express concerns about chronic problems such as young cancers, and many people worry about potential exposures coming from past and present energy infrastructures. These growing health concerns, some of them my own, also brought me to revisit Rachel Carson’s medical writings from her family home in western Pennsylvania. Looking out from her childhood bedroom with my mother and returning to Carson’s archival notes on “transmissible cancers” and her childhood essay, “A Battle in the Clouds,” these descriptions circle long-accumulating debates about chronic diseases and their causes and effects over time. Returning to varieties of changing clouds today, this essay reflects on how chronic exposures—unevenly accumulating in bodies and landscapes and across generations—show “undone sciences” of many kinds in need of collective attention. It traces how families are grappling with the sense of needing to connect their own dots; the ways local communities are coming together to process displaced responsibilities; and the implications for health, public trust, and care when so much is left in clouds.
</description>
<pubDate>Fri, 18 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163406</guid>
<dc:date>2025-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>Leaf Stripping on Uniform Attachment Trees</title>
<link>https://hdl.handle.net/1721.1/163405</link>
<description>Leaf Stripping on Uniform Attachment Trees
Addario‐Berry, Louigi; Brandenberger, Anna; Briend, Simon; Broutin, Nicolas; Lugosi, Gábor
In this note, we analyze the performance of a simple root-finding algorithm in uniform attachment trees. The leaf-stripping algorithm recursively removes all leaves of the tree for a carefully chosen number of rounds. We show that, with probability 1 − &#120576;, the set of remaining vertices contains the root and has a size only depending on &#120576; but not on the size of the tree.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163405</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Unnatural Wills: Inheritance Disputes and Inequality</title>
<link>https://hdl.handle.net/1721.1/163404</link>
<description>Unnatural Wills: Inheritance Disputes and Inequality
O'Brien, Shay
Within the conceptual frame of relational economic sociology, inheritance disputes are a canonical form of relational mismatch. But the social patterning of relational mismatches, and their various ties to inequality, remain murky. In this paper, I examine all known inheritance disputes in Dallas from 1895–1945 within their social context to generate hypotheses about the relationship between inequality and mismatches more broadly. Inheritance disputes were usually resolved by increasing the spread of fortunes; in this sense, they moderated wealth inequality between individuals. But not everyone was equally able to make their preferred estate distribution a reality. Using a series of case studies, I argue that dispute resolutions tended to reify normative family structures and naturalize sharp, moralized distinctions between fuzzy social categories. The legal resolutions to this class of relational mismatches may marginally mitigate individual-level wealth inequality and simultaneously produce categorical inequalities by race, class, gender, sexuality, and family structure. I conclude with a set of hypotheses and questions for future studies.
</description>
<pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163404</guid>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Decentralization, Blockchain, Artificial Intelligence (AI): Challenges and Opportunities</title>
<link>https://hdl.handle.net/1721.1/163403</link>
<description>Decentralization, Blockchain, Artificial Intelligence (AI): Challenges and Opportunities
Hui, Xiang; Tucker, Catherine
New technologies like blockchain allow firms to decentralize core functions, forcing managers to reconsider the trade-off between closed, proprietary control and open strategies that involve external contributors. While proponents often advocate for full decentralization, we argue this view overlooks important economic trade-offs. We propose that the better strategy is selective decentralization: a disciplined approach to choosing where to centralize for efficiency and where to decentralize for innovation. We propose a three-level framework—Infrastructure, Decision-Making, and Operational Control—to guide this choice, helping managers analyze the specific costs and benefits at each layer. We apply this framework to the strategic adoption of Artificial Intelligence (AI), where the technology's powerful pull toward centralization provides a stark test case. Our analysis shows that an “open source AI” strategy—decentralizing operations to foster innovation while keeping infrastructure centralized for efficiency—is more pragmatic than full decentralization. Selective decentralization therefore emerges as a key managerial capability for capturing blockchain's benefits without sacrificing scale efficiencies.
</description>
<pubDate>Tue, 22 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163403</guid>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the response time of unpumped oxygen optodes for profiling applications</title>
<link>https://hdl.handle.net/1721.1/163402</link>
<description>Characterizing the response time of unpumped oxygen optodes for profiling applications
Park, Ellen; Nicholson, David; Dever, Mathieu; Atamanchuk, Dariia; Richards, Clark
The response times of the Aanderaa 4330, Aanderaa 4330 WTW, RBRcoda T.ODO|slow, and PyroScience PICO-O2-SUB were evaluated in the laboratory over a range of profiling speeds at two temperatures. The PyroScience PICO-O2-SUB had the fastest response time (1–4 s), followed by the RBRcoda T.ODO|slow (~ 15–35 s), Aanderaa 4330 (~ 30–60 s), and Aanderaa 4330W (~ 50–100 s). This study provides recommendations on improving the quality of oxygen data from optodes in profiling applications by additionally assessing the impact of response time testing setups, thermal inertia effects, and foil types on sensor response times. This study provides a new response time function based on physical principles to predict response time for these four optode types.
</description>
<pubDate>Sat, 26 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163402</guid>
<dc:date>2025-07-26T00:00:00Z</dc:date>
</item>
<item>
<title>Tapping ressentiment: pharmakeus and the sublime poisons of white supremacy</title>
<link>https://hdl.handle.net/1721.1/163401</link>
<description>Tapping ressentiment: pharmakeus and the sublime poisons of white supremacy
Ruffin, Jessica
This auto-philosophical essay takes up Nietzsche’s concept of ressentiment; the archival record of Mark and Phillis; and Derrida’s engagement with pharmakon as a means of working through the question of what is to be done with the poisons of white supremacy, which persist in present worldly environments as well as our bodies and histories. Engaging aesthetics, Black thought, and phenomenology of race, the work aims for an embodied therapeutic movement that might open the way for ethical receptivity within the white supremacist world. Eschewing a universalizing tone while recognizing the ahistoricities of white supremacist cultural techniques, the essay enlists autobiography and practices of the self to give voice to the reservoirs of white supremacist poison permeating a worldly body.
</description>
<pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163401</guid>
<dc:date>2025-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>SLAM Handbook: From Localization and Mapping to Spatial Intelligence</title>
<link>https://hdl.handle.net/1721.1/163400</link>
<description>SLAM Handbook: From Localization and Mapping to Spatial Intelligence
Carlone, Luca; Kim, Ayoung; Barfoot, Timothy; Cremers, Daniel; Dellaert, Frank
Simultaneous Localization and Mapping —better known as SLAM— refers to the fundamental problem of building spatial models of an environment while simultaneously determining the position of a robot within that environment. The term itself was first coined in 1995 by Hugh Durrant-Whyte and John Leonard, marking the formalization of a problem that sits at the intersection of robotics, geometry, controls, and probabilistic inference.
SLAM is as elegant as it is formidable. At its core, it addresses the challenge of reasoning over high-dimensional, uncertain, and dynamic systems. The process demands precise spatial inference and robust probabilistic modeling to build coherent maps of the world —maps that must be constructed in real time, often under conditions of noise and ambiguity.
What makes SLAM particularly compelling is its universality. In computer vision, it is mirrored in the problem of Structure from Motion; in robotics, it underpins everything from indoor autonomous navigation to planetary exploration and self-driving cars. Since its inception, SLAM has inspired tens of thousands of research papers, drawing deeply from disciplines as diverse as physics, statistics, computer vision, geometry, controls, and machine learning. Its evolution has catalyzed the development of increasingly capable autonomous systems, able to operate at scale in complex, open-world environments.
This volume brings together contributions from some of the field’s foremost experts and rising stars. The chapters represent the state of the art in SLAM today, reflecting both the depth of theoretical innovations and the breadth of practical applications. From its early formulations based on Kalman filters and Bayesian estimation, SLAM has matured into a rich tapestry of mathematical frameworks —encompassing graph-based optimization, factor graphs, nonlinear least squares, and deep learning-based techniques. Beyond introducing the mathematical foundations of SLAM, this volume provides valuable guidance to the practitioner by discussing real-world use cases ranging from vision-based and LiDAR-based SLAM systems to legged locomotion. It also covers recent developments in Spatial AI, showing how advances in deep learning, differentiable rendering, and large vision and language models point the way toward representations that provide robots with a rich spatial and semantic understanding of their environment.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163400</guid>
</item>
<item>
<title>Towards a Science Exocortex</title>
<link>https://hdl.handle.net/1721.1/163398</link>
<description>Towards a Science Exocortex
Yager, Kevin G.
Artificial intelligence (AI) methods are poised to revolutionize intellectual work, with generative AI enabling automation of text analysis, text generation, and simple decision making or reasoning. The impact to science is only just beginning, but the opportunity is significant since scientific research relies fundamentally on extended chains of cognitive work. Here, we review the state of the art in agentic AI systems, and discuss how these methods could be extended to have even greater impact on science. We propose the development of an exocortex, a synthetic extension of a person's cognition. A science exocortex could be designed as a swarm of AI agents, with each agent individually streamlining specific researcher tasks, and whose inter-communication leads to emergent behavior that greatly extends the researcher's cognition and volition.
</description>
<pubDate>Thu, 15 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163398</guid>
<dc:date>2024-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Domestic groundwater wells in Appalachia show evidence of low-dose, complex mixtures of legacy pollutants</title>
<link>https://hdl.handle.net/1721.1/163397</link>
<description>Domestic groundwater wells in Appalachia show evidence of low-dose, complex mixtures of legacy pollutants
Bugher, Nicolette Anna; Xiong, Boya; Gentles, Runako I.; Glist, Lukas D.; Siegel, Helen G.; Johnson, Nicholaus P.; Clark, Cassandra J.; Deziel, Nicole; Saiers, James E.; Plata, Desiree
Lack of water quality data for private drinking water sources prevents robust evaluation of exposure risk for communities co-located with historically contaminated sites and ongoing industrial activity. Areas of the Appalachian region of the United States (i.e., Pennsylvania, Ohio, and West Virginia) contain extensive hydraulic fracturing activity, as well as other extractive and industrial technologies, in close proximity to communities reliant on private drinking water sources, creating concern over potential groundwater contamination. In this study, we characterized volatile organic compound (VOC) occurrence at 307 private groundwater well sites within Pennsylvania, Ohio, and West Virginia. The majority (97%) of water samples contained at least one VOC, while the average number of VOCs detected at a given site was 5 ± 3. The majority of individual VOC concentrations fell below applicable U.S. Environmental Protection Agency (EPA) Maximum Contaminant Levels (MCLs), except for chloroform (MCL of 80 μg L−1; n = 1 at 98 μg L−1), 1,2-dibromoethane (MCL of 0.05 μg L−1; n = 3 ranging from 0.05 to 0.35 μg L−1), and 1,2-dibromo-3-chloropropane (MCL of 0.2 μg L−1; n = 7 ranging from 0.20 to 0.58 μg L−1). To evaluate well susceptibility to VOCs from industrial activity, distance to the nearest hydraulic fracturing site was used to assess correlations with contaminant occurrences. Proximity to the closest hydraulic fracturing well site revealed no statistically significant linear relationships with either individual VOC concentrations or the frequency of VOC detections. Evaluation of other known industrial contamination sites (e.g., US EPA Superfund sites) revealed elevated levels of three VOCs (chloroform, toluene, benzene) in groundwaters within 10 km of those Superfund sites in West Virginia and Ohio, illuminating possible point source influence.
Lack of correlation between VOC concentrations and proximity to specific point sources indicates complex geochemical processes governing trace VOC contamination of private drinking water sources. While individual concentrations of VOCs fell well below recommended human health levels, the low-dose exposure to multiple VOCs occurring in drinking supplies for Appalachian communities highlights the importance of groundwater well monitoring.
</description>
<pubDate>Thu, 20 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163397</guid>
<dc:date>2024-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>The Opportunity for Utilizing End‐of‐Life Scrap to Meet Growing Copper Demand</title>
<link>https://hdl.handle.net/1721.1/163396</link>
<description>The Opportunity for Utilizing End‐of‐Life Scrap to Meet Growing Copper Demand
Diersen, Isabel; Bhuwalka, Karan; Olivetti, Elsa
As electrification trends and clean energy deployment drive up copper demand, there will be pressure on copper supply chains. With annual copper demand expected to grow by 50% and reach 49 Mt by 2035, the world will continue to need additional sources of copper supply. While expanding mining projects could increase copper production, given the significant stock of material, secondary copper can play a vital role in meeting demand. We analyze the opportunity to meet growing copper demand via increased scrap collection and improved technical recycling efficiencies. We use an economic model of the global copper system—with China analyzed separately from the rest of the world—to quantify supply evolution by incorporating price feedback between demand and supply. The model quantifies the impact of the increased collection on the displacement of mining production and demonstrates how increasing recycling can modulate supply risks and copper prices. Aligned with recent literature on future copper flows, we find that there is an opportunity to increase scrap supply in 2040 by 46% (6.3 Mt) compared with the baseline.
</description>
<pubDate>Fri, 11 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163396</guid>
<dc:date>2025-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Topographic Stress as a Mechanical Weathering Mechanism on Titan</title>
<link>https://hdl.handle.net/1721.1/163395</link>
<description>Topographic Stress as a Mechanical Weathering Mechanism on Titan
Seltzer, Cassandra; Martel, Stephen J; Perron, J Taylor
Titan is unique among icy moons for its active surface processes and extensive erosional features. The presence of coarse sediment suggests that mechanical weathering breaks down Titan's surface material, but the exact processes of mechanical weathering are unknown. We tested the idea that topographic features perturb ambient crustal stresses enough to generate or enhance fractures. We used a two‐dimensional boundary element model to predict the likely stress state within hypothetical erosional landforms on Titan, including river valleys and isolated ridges, and to model the locations and types of resulting fractures. Our results suggest that topographic stress perturbations are indeed sufficient to generate fractures and drive mechanical weathering, with little sensitivity to the density of the material making up Titan's crust and landforms and no dependence on its elastic moduli. For material density of 800 to 1,200 kg/m3, opening‐mode failure is predicted to occur within hypothetical Titan landforms with a width of hundreds of meters, relief of tens of meters or more, and horizontal tidal or tectonic stresses up to 1 MPa of compression, which encompasses typical predicted tidal stresses ranging between 10 kPa of compression and 10 kPa of tension. Under the same conditions, shear fracture is predicted to occur if the cohesion of the material is less than 100 kPa or if pore fluid pressures reduce local effective normal stresses. We therefore suggest that Titan's crust may be highly fractured and permeable, and that the predicted fractures could help generate sediment and provide pathways for subsurface transport of fluids.
</description>
<pubDate>Tue, 29 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163395</guid>
<dc:date>2025-07-29T00:00:00Z</dc:date>
</item>
<item>
<title>The existence of subspace designs</title>
<link>https://hdl.handle.net/1721.1/163394</link>
<description>The existence of subspace designs
Keevash, Peter; Sah, Ashwin; Sawhney, Mehtaab
We prove the existence of subspace designs with any given parameters, provided that the dimension of the underlying space is sufficiently large in terms of the other parameters of the design and satisfies the obvious necessary divisibility conditions. This settles an open problem from the 1970s. Moreover, we also obtain an approximate formula for the number of such designs.
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163394</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>Verbal disputes, social totality, and trans politics</title>
<link>https://hdl.handle.net/1721.1/163393</link>
<description>Verbal disputes, social totality, and trans politics
Zhou, Katie
A puzzling feature about the dispute over whether trans women are women is its apparent verbality: gender-critical theorists assert a biological fact about trans women, and trans-inclusionary theorists respond by asserting a social/psychological fact about trans women. But plausibly, both theorists’ assertions are compatible, and so there is no real disagreement. In this paper, I argue that the two theorists are not talking past each other. But I also argue that extant accounts of the dispute fail to adequately explain why the dispute is not merely verbal. Indeed, clarifying the dispute requires us to ask what it is for something to be a gender concept, as opposed to a merely biological or social/psychological concept. After developing a questions-based account of concepts and conceptual roles, I suggest that a necessary feature of gender concepts is that we use them to construct unified and portable narratives about how we will stand in relation to one another as social individuals, regardless of the particular social context we are in. This allows us to understand the trans woman dispute as a dispute about whether we should prioritize biological or social/psychological facts when interpreting our relations to one another.
</description>
<pubDate>Tue, 22 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163393</guid>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</item>
<item>
<title>Microscale Metal Additive Manufacturing by Solid‐State Impact Bonding of Shaped Thin Films</title>
<link>https://hdl.handle.net/1721.1/163392</link>
<description>Microscale Metal Additive Manufacturing by Solid‐State Impact Bonding of Shaped Thin Films
Reiser, Alain; Schuh, Christopher A
The deposition of device-grade inorganic materials is one key challenge toward the implementation of additive manufacturing (AM) in microfabrication, and to that end, a broad range of physico-chemical principles has been explored for 3D fabrication with micro- and nanoscale resolution. Yet, for metals, a process that achieves material quality rivalling that of established thin-film deposition methods and, at the same time, has the potential to combine high-throughput production with a broad palette of processable materials is still lacking. Here, the kinetic, solid-state bonding of metal thin films for the additive assembly of high-purity, high-density metals with micrometer-scale precision is introduced. Indirect laser ablation accelerates micrometer-thick gold films to hundreds of meters per second without their heating or ablation. Their subsequent impact on the substrate above a critical velocity forms a permanent, metallic bond in the solid state. Stacked layers are of high density (&gt;99%). By defining thin-film layers with established lithographic methods prior to launch, a variable feature size (2–50 µm), arbitrary shapes of bonded layers, and parallel transfer of up to 36 independent film units in a single shot are demonstrated. Thus, the solid-state kinetic bonding principle is established as a viable and potentially versatile route for microscale AM of metals.
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163392</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Persistent Disruptions in Prefrontal Connectivity Despite Behavioral Rescue by Environmental Enrichment in a Mouse Model of Rett Syndrome</title>
<link>https://hdl.handle.net/1721.1/163391</link>
<description>Persistent Disruptions in Prefrontal Connectivity Despite Behavioral Rescue by Environmental Enrichment in a Mouse Model of Rett Syndrome
Ährlund‐Richter, Sofie; Harpe, Jonathan; Fernandes, Giselle; Lam, Ruby; Sur, Mriganka
Rett syndrome, a neurodevelopmental disorder caused by loss-of-function mutations in the MECP2 gene, is characterized by severe motor, cognitive, and emotional impairments. Some of the deficits may result from changes in cortical connections, especially downstream projections of the prefrontal cortex (PFC), which may also be targets of restoration following rearing conditions such as environmental enrichment that alleviate specific symptoms. Here, using a heterozygous Mecp2+/− female mouse model closely analogous to human Rett syndrome, we investigated the impact of early environmental enrichment on behavioral deficits and PFC connectivity. Behavioral analyses revealed that enriched housing rescued fine motor deficits and reduced anxiety, with enrichment-housed Mecp2+/− mice performing comparably to wild-type (WT) controls in rotarod and open field assays. Anatomical mapping of top-down anterior cingulate cortex (ACA) projections demonstrated altered PFC connectivity in Mecp2+/− mice, with increased axonal density in the somatosensory cortex and decreased density in the motor cortex compared to WT controls. ACA axons revealed shifts in hemispheric distribution, particularly in the medial network regions, with Mecp2+/− mice exhibiting reduced ipsilateral dominance. These changes were unaffected by enriched housing, suggesting that structural abnormalities in PFC connectivity persist despite behavioral improvements. Enriched housing rescued brain-derived neurotrophic factor (BDNF) levels in the hippocampus but failed to restore BDNF levels in the PFC, consistent with the persistent deficits observed in prefrontal axonal projections. These findings highlight the focal nature of changes induced by reduction of MeCP2 and by exposure to environmental enrichment and suggest that environmental enrichment starting in adolescence can alleviate behavioral deficits in Mecp2+/− mice without reversing abnormalities in large-scale cortical connectivity.
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163391</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>Agentic deep graph reasoning yields self-organizing knowledge networks</title>
<link>https://hdl.handle.net/1721.1/163388</link>
<description>Agentic deep graph reasoning yields self-organizing knowledge networks
Buehler, Markus J.
We present an agentic, autonomous graph expansion framework that iteratively structures and refines knowledge in situ. Unlike conventional knowledge graph construction methods relying on static extraction or single-pass learning, our approach couples a reasoning-native large language model with a continually updated graph representation. At each step, the system actively generates new concepts and relationships, merges them into a global graph, and formulates subsequent prompts based on its evolving structure. Through this feedback-driven loop, the model organizes information into a scale-free network characterized by hub formation, stable modularity, and bridging nodes that link disparate knowledge clusters. Over hundreds of iterations, new nodes and edges continue to appear without saturating, while centrality measures and shortest path distributions evolve to yield increasingly distributed connectivity. Applied to materials design problems, we present compositional reasoning experiments to foster knowledge synthesis, yielding cross-domain ideas that transcend rote summarization.
</description>
<pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163388</guid>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>EvenQuads Game and Error-Correcting Codes</title>
<link>https://hdl.handle.net/1721.1/163387</link>
<description>EvenQuads Game and Error-Correcting Codes
Byrapuram, Nikhil; Choi, Hwiseo; Ge, Adam; Ge, Selena; Lee, Sylvia Z.; Liang, Evin; Mandal, Rajarshi; Oki, Aika; Wu, Daniel; Yang, Michael; Khovanova, Tanya
EvenQuads is a new card game that is a generalization of the SET game, where each card is characterized by three attributes, each taking four possible values. Four cards form a quad when, for each attribute, the values are the same, all different, or half and half. For any ℓ cards selected from the deck of EvenQuads, it is possible to construct an error-correcting linear binary code of length ℓ and Hamming distance 4, where quads correspond to codewords of weight 4. Using error-correcting codes, we calculate the number of possible quads that can be formed with up to 8 cards. We also estimate the number of cards that do not contain quads for decks of different sizes. In addition, we discuss properties of error-correcting codes built on semimagic, magic, and strongly magic quad squares. This highlights a rich interplay between recreational mathematics games and coding theory and encourages others to explore similar combinatorial games for hidden connections!
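As an illustration, the quad rule stated above is simple enough to sketch in a few lines of Python (a hypothetical reader's sketch, not code from the paper; cards are modeled here as 3-tuples of attribute values 0–3):

```python
# Hypothetical sketch of the "quad" rule described above (not from the paper).
# A card is a 3-tuple; each of its 3 attributes takes one of four values (0-3).
# Four cards form a quad if, for every attribute, the four values are
# all the same, all different, or split half and half.
from collections import Counter

def is_quad(cards):
    assert len(cards) == 4
    for attr in range(3):
        # Multiset of this attribute's values across the four cards.
        counts = sorted(Counter(card[attr] for card in cards).values())
        if counts not in ([4], [2, 2], [1, 1, 1, 1]):
            return False
    return True
```

Here is_quad and the tuple encoding are assumptions for illustration only; the paper itself works with the physical deck and its linear binary code representation, in which quads correspond to codewords of weight 4.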
</description>
<pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163387</guid>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Applications Enabling Fusion Energy: Recent Developments</title>
<link>https://hdl.handle.net/1721.1/163386</link>
<description>Machine Learning Applications Enabling Fusion Energy: Recent Developments
Rea, Cristina
Over the last few years, machine learning has helped to develop advanced capabilities for fusion energy across a broad range of domains. These include advanced algorithms to extract information from fusion diagnostics, enhanced algorithms for plasma state estimation and control, accelerated simulation tools to improve predictive capabilities, and expanded modeling capabilities for fusion materials design. This topical collection covers recent developments in applied machine learning research further enabling the path to fusion energy; in particular, it covers a wide breadth of fusion subfields – from inertial confinement fusion to magnetically confined plasmas, including high-temperature superconducting magnet design and optimization. This editorial summarizes the collection while also providing a critical outlook on how machine learning can be used in the future to accelerate the development of fusion energy as a reliable energy source.
</description>
<pubDate>Wed, 03 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163386</guid>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis‐Related Nanoscale Defects in Mo‐Based Janus Monolayers Revealed by Cross‐Correlated AFM and TERS Imaging</title>
<link>https://hdl.handle.net/1721.1/163385</link>
<description>Synthesis‐Related Nanoscale Defects in Mo‐Based Janus Monolayers Revealed by Cross‐Correlated AFM and TERS Imaging
Zhang, Tianyi; Krayev, Andrey; Yang, Tilo H; Mao, Nannan; Hoang, Lauren; Wang, Zhien; Liu, Hongwei; Peng, Yu‐Ren; Zhu, Yunyue; Zheng, Xudong; Isotta, Eleonora; Kira, Maria E; Righi, Ariete; Pimenta, Marcos A; Chueh, Yu‐Lun; Pop, Eric; Mannix, Andrew J; Kong, Jing
2D Janus transition metal dichalcogenides (TMDs) are promising candidates for various applications including non-linear optics, energy harvesting, and catalysis. These materials are usually synthesized via chemical conversion of pristine TMDs. Nanometer-scale characterization of the obtained Janus materials’ morphology and local composition is crucial for both the synthesis optimization and the future device applications. In this work, we present the results of a cross-correlated atomic force microscopy (AFM) and tip-enhanced Raman spectroscopy (TERS) study of Janus monolayers synthesized by the hydrogen plasma-assisted chemical conversion of MoSe2 and MoS2. We demonstrate that the choice of both the growth substrate and the starting TMD influences the residual strain, thereby shaping the nanoscale morphology of the resulting Janus material. Furthermore, by employing TERS imaging, we show the presence of nanoscale islands (≈20 nm across) of MoSe2-MoSSe (MoS2-MoSeS) vertical heterostructures originating from the bilayer nanoislands in the precursor monolayer crystals. The understanding of the origins of nanoscale defects in Janus TMDs revealed in this study can help with further optimization of the Janus conversion process towards uniform and wrinkle-/crack-free Janus materials. Moreover, this work shows that cross-correlated AFM and TERS imaging is a powerful and accessible method for studying nanoscale composition and defects in Janus TMD monolayers.
</description>
<pubDate>Fri, 08 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163385</guid>
<dc:date>2025-08-08T00:00:00Z</dc:date>
</item>
<item>
<title>The Kigali story, the Singapore model, and rights to the city</title>
<link>https://hdl.handle.net/1721.1/163384</link>
<description>The Kigali story, the Singapore model, and rights to the city
Fischer, Michael MJ
Three recent ethnographies of Kigali's urban planning and development provide a welcome addition to a long tradition of such ethnographies, including Lisa Redfield Peattie's famous fieldwork in the planning of Ciudad Guayana (1968; 1987), Grace Goodell's ethnographic account of the disjunction between planning offices in Tehran and the urban settlements (sharaks) of the Khuzistan Development Project modelled on the Tennessee Valley Authority (1986), and Gökce Günel's ethnographic analysis of the disjunction between plans for, and implementation of, Masdar City and Masdar Institute in Abu Dhabi (2019).
</description>
<pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163384</guid>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Pressurized plankton observatory offers a new window into deep‐sea larval behavior</title>
<link>https://hdl.handle.net/1721.1/163383</link>
<description>Pressurized plankton observatory offers a new window into deep‐sea larval behavior
Zúñiga Mouret, Rodrigo; Hourdez, Stéphane; Curran, Molly; DiBenedetto, Michelle H.; Mills, Susan W.; Vetriani, Costantino; Arellano, Shawn M.; Weston, Johanna N. J.; Dykman, Lauren N.; Best, Ayinde C.; Pires, Anthony; Mullineaux, Lauren S.
The High-Pressure Plankton Observatory (HiPPO) is designed to quantify motions of zooplankton for behavioral study, including swimming and metabolic responses to environmental perturbations. It builds on prior chamber designs while filling gaps in capability for resolving orientation of small (&lt; 1 mm) plankton, tracking their movements over ecologically relevant spatial scales, and recording in flow-through conditions on a vessel at sea. The HiPPO chamber has a direct light path for silhouette imaging of zooplankton as they move vertically and horizontally across a 3.56 cm diameter viewing area. Seawater forced by a high-performance liquid chromatography pump is exchanged continuously through the chamber, but flushing of zooplankton is prevented by fine mesh at the ports. A high-resolution camera/computer setup enables sustained imaging of plankton motions for quantitative analysis. Application of HiPPO to an investigation of larval behavior of deep-sea hydrothermal vent species revealed swimming behaviors similar to those of shallow-water species, including upward and downward helices, meandering, and short hovers. In conditions with microbial biofilm (a potential settlement cue) on a 2024 expedition, vent larvae unexpectedly swam rapidly upward in tight helices at velocities (0.15 cm s−1) higher than those observed in prior experiments with no biofilm (0.03 cm s−1). Many factors varied between the 2024 and earlier trials, so the difference cannot be attributed with certainty to a cue response. This study describes key new features of HiPPO and demonstrates the system's ability to document novel zooplankton behavior.
</description>
<pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163383</guid>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Social-ecological system approaches for water resources management</title>
<link>https://hdl.handle.net/1721.1/163382</link>
<description>Social-ecological system approaches for water resources management
Gain, Animesh K.; Hossain, Sarwar; Benson, David; Di Baldassarre, Giuliano; Giupponi, Carlo; Huq, Nazmul
In the era of the Anthropocene, understanding the dynamic interactions between humans and water is crucial for supporting both human well-being and the sustainable management of resources. The current water management challenges are inherently unpredictable and difficult to control. Social-ecological systems (SESs) approaches explicitly recognize the connections and feedbacks between human and natural systems. For addressing the complex challenges of the Anthropocene, consideration of SES attributes such as causality (or interdependence), feedback, non-linearity, heterogeneity, and cross-scale dynamics is important. In addition, innovative qualitative and quantitative methods such as Bayesian networks, agent-based modelling, system dynamics, network analysis, multicriteria analysis, integrated assessment and role-play games have recently been used in SES research. The overall goal of this review is to gauge the extent to which SES attributes and methods are considered within the current interdisciplinary water paradigm. The paper therefore develops the normative theoretical characteristics of SES in terms of its key attributes (i.e. causality, feedback, heterogeneity, nonlinearity, and cross-scale dynamics) incorporated in the water paradigm approaches. The paper then compares the methods applied in the interdisciplinary water paradigm and examines how they can complement each other. Finally, the paper reflects back on the usefulness of SES attributes and methods for assessing the interdisciplinary water paradigm and makes recommendations for future research.
</description>
<pubDate>Thu, 18 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163382</guid>
<dc:date>2020-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying and improving the optical performance of the laser ablation aerosol particle time of flight mass spectrometer (LAAPToF) instrument</title>
<link>https://hdl.handle.net/1721.1/163381</link>
<description>Quantifying and improving the optical performance of the laser ablation aerosol particle time of flight mass spectrometer (LAAPToF) instrument
Zawadowicz, Maria A; Lance, Sara; Jayne, John T; Croteau, Philip; Worsnop, Douglas R; Mahrt, Fabian; Leisner, Thomas; Cziczo, Daniel J
Single particle mass spectrometer (SPMS) instruments have been used for in-situ chemical characterization of atmospheric aerosols, both in the field and laboratory, for over two decades. SPMSs typically combine precise optical particle sizing with laser desorption and ionization followed by time of flight mass spectrometry. Among the advantages of SPMSs over other aerosol chemistry measurement techniques are their single particle resolution and high sensitivity to trace chemical species. The AeroMegt Laser Ablation Aerosol Particle Time of Flight Mass Spectrometer (LAAPToF) is a commercially available member of this instrument class, aiming for a compact size and simplicity for the end user. This article quantifies the performance of LAAPToF with an emphasis on optical counting efficiency. Recommendations for improving detection compared to the base LAAPToF hardware are described. Our results show that changes to the optical detection scheme can lead to over two orders of magnitude improvement in optical counting efficiency in the size range 500–2000 nm vacuum aerodynamic diameter. We also present mass spectral performance for characterizing atmospherically relevant particles in a comparison to a current SPMS design, the Particle Analysis by Laser Mass Spectrometry.
</description>
<pubDate>Fri, 21 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163381</guid>
<dc:date>2020-02-21T00:00:00Z</dc:date>
</item>
<item>
<title>Expression of endogenous Anopheles gambiae microRNAs using an Anopheles gambiae densovirus (AgDNV) intronic expression system</title>
<link>https://hdl.handle.net/1721.1/163380</link>
<description>Expression of endogenous Anopheles gambiae microRNAs using an Anopheles gambiae densovirus (AgDNV) intronic expression system
Johnson, Rebecca M.; Metz, Hillery C.; Suzuki, Yasutsugu; McLean, Kyle J.; Rasgon, Jason L.
Background Anopheles gambiae densovirus (AgDNV) is a highly species-specific parvovirus that reaches high titers in adult Anopheles gambiae mosquitoes with few transcriptomic effects and minimal significant fitness effects. Given these characteristics, AgDNV has been proposed as a viral vector for basic research and mosquito control. Previous work created an AgDNV co-expression system with a wild-type AgDNV helper plasmid and a transducing plasmid expressing enhanced green fluorescent protein (EGFP) that can be used to co-transfect cells to generate infectious recombinant transducing AgDNV virions. Generated virions infect the An. gambiae midgut, fat body, and ovaries, yet this viral vector system is limited in the size of transgenes that can be expressed due to capsid packaging limitations. Methods Considering these size constraints, we created an artificial intron within the EGFP gene of the transducing construct that can express small pieces of genetic material such as microRNAs (miRNAs), microRNA sponges, or other small sequences. Placement of this intron in EGFP created a fluorescent reporter such that incorrect splicing produces a frameshift mutation in EGFP and an early stop codon, whereas correct splicing results in normal EGFP expression and co-transcription of the intronic genetic cargo. A selection of miRNAs with predicted or demonstrated importance in mosquito immunity and reproduction with expression localized to the fat body or ovaries were chosen as intronic cargo. Construct expression and splicing were evaluated, and the impact of miRNA expression on putative miRNA targets was measured in vitro and in vivo. Results The created intron was correctly spliced in cells and mosquitoes; however, miRNA delivery resulted in inconsistent changes to miRNA and predicted target gene transcript levels—possibly due to organ-specific miRNA expression or inaccurate putative target predictions leading to miRNA–target gene sequence mismatch.
Conclusions Although our results on target gene expression were inconsistent, with optimization this viral vector and developed intron have potential as an expression tool within An. gambiae mosquitoes or cell lines.
</description>
<pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163380</guid>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Modeling and Microstructural Insights into the Hot Deformation Behavior of Fe–11Al–5Mn–1Nb–1C Low-Density Steel</title>
<link>https://hdl.handle.net/1721.1/163379</link>
<description>Advanced Modeling and Microstructural Insights into the Hot Deformation Behavior of Fe–11Al–5Mn–1Nb–1C Low-Density Steel
Mahanta, Bashista K.; Rawat, Pankaj; Bhan, Sumit; Roy, Swagata
The hot deformation behavior of Fe–11Al–5Mn–1Nb–1C low-density steel was investigated using a GLEEBLE 3800R thermomechanical simulator across a temperature range of 900–1200 ℃ and strain rates of 1–0.001 s−1. An Arrhenius-type constitutive model was developed to predict flow stress during deformation, alongside a bilayer evolutionary neural network (EvoNN) model based on an artificial neural network (ANN) approach. The EvoNN model demonstrated higher prediction accuracy than the constitutive model. Microstructural analysis revealed a ferritic matrix with kappa carbide as a secondary phase at 900 and 1000 ℃, while at 1100 and 1200 ℃, a dual-phase structure (ferrite + austenite) with fine kappa carbides at the phase interface was observed. NbC particles were consistently present in all hot compressed samples. Partial dynamic recrystallization (DRX) occurred at 900 and 1000 ℃, whereas more extensive DRX was observed at 1100 and 1200 ℃. Grain coarsening was evident at lower strain rates, increasing as the strain rate decreased. Fine NbC particles and kappa carbides pinned grain boundaries, potentially delaying DRX onset, while coarse NbC particles appeared to enhance particle-stimulated nucleation (PSN), introducing complexity to DRX dynamics and contributing to model discrepancies in the constitutive and EvoNN model.
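The Arrhenius-type constitutive model referred to above is conventionally expressed through the Zener–Hollomon parameter, Z = ε̇·exp(Q/RT), with flow stress recovered from a hyperbolic-sine law. A minimal sketch follows; the material constants Q, A, α, and n here are generic placeholder values for illustration, not the fitted parameters of this steel:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate, T_kelvin, Q=300e3):
    """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R*T)).
    Q is an activation energy in J/mol (placeholder value)."""
    return strain_rate * math.exp(Q / (R * T_kelvin))

def flow_stress(strain_rate, T_kelvin, A=1e10, alpha=0.01, n=5.0, Q=300e3):
    """Invert the hyperbolic-sine Arrhenius law
    Z = A * sinh(alpha*sigma)**n  =>  sigma = asinh((Z/A)**(1/n)) / alpha."""
    Z = zener_hollomon(strain_rate, T_kelvin, Q=Q)
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Flow stress rises as temperature drops or strain rate increases,
# matching the qualitative trend across the tested 900-1200 C range
s_hot = flow_stress(0.001, 1473.15)   # 1200 C, slow strain rate
s_cold = flow_stress(1.0, 1173.15)    # 900 C, fast strain rate
```

In practice A, α, n, and Q are regressed from the measured flow curves, which is the step where the paper's EvoNN model outperforms the closed-form fit.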
</description>
<pubDate>Sun, 18 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163379</guid>
<dc:date>2025-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Three-pion Bose-Einstein correlations measured in proton-proton collisions</title>
<link>https://hdl.handle.net/1721.1/163378</link>
<description>Three-pion Bose-Einstein correlations measured in proton-proton collisions
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A study on the Bose-Einstein correlations for triplets of same-sign pions is presented. The analysis is performed using proton-proton collisions at a centre-of-mass energy of √s = 7 TeV, recorded by the LHCb experiment, corresponding to an integrated luminosity of 1.0 fb−1. For the first time, the results are interpreted in the core-halo model. The parameters of the model are determined in regions of charged-particle multiplicity. This measurement provides insight into the nature of hadronisation in terms of coherence, being consistent with the presence of coherent emission of pions.
</description>
<pubDate>Thu, 21 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163378</guid>
<dc:date>2025-08-21T00:00:00Z</dc:date>
</item>
<item>
<title>Search for dark matter produced in association with one or two top quarks in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163377</link>
<description>Search for dark matter produced in association with one or two top quarks in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
A search is performed for dark matter (DM) produced in association with a single top quark or a pair of top quarks using the data collected with the CMS detector at the LHC from proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to 138 fb−1 of integrated luminosity. An excess of events with a large imbalance of transverse momentum is searched for across 0, 1 and 2 lepton final states. Novel multivariate techniques are used to take advantage of the differences in kinematic properties between the two DM production mechanisms. No significant deviations with respect to the standard model predictions are observed. The results are interpreted considering a simplified model in which the mediator is either a scalar or pseudoscalar particle and couples to top quarks and to DM fermions. Axion-like particles that are coupled to top quarks and DM fermions are also considered. Expected exclusion limits of 410 and 380 GeV for scalar and pseudoscalar mediator masses, respectively, are set at the 95% confidence level. A DM particle mass of 1 GeV is assumed, with mediator couplings to fermions and DM particles set to unity. A small signal-like excess is observed in data, with the largest local significance observed to be 1.9 standard deviations for the 150 GeV pseudoscalar mediator hypothesis. Because of this excess, mediator masses are only excluded below 310 (320) GeV for the scalar (pseudoscalar) mediator. The results are also translated into model-independent 95% confidence level upper limits on the visible cross section of DM production in association with top quarks, ranging from 1 pb to 0.02 pb.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163377</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>A parametric approach to plot-based urban design: A climate-responsive algorithmic control for the generation of urban block</title>
<link>https://hdl.handle.net/1721.1/163376</link>
<description>A parametric approach to plot-based urban design: A climate-responsive algorithmic control for the generation of urban block
Çalışkan, Olgu; Akay, Mert
In modern urbanism, (re)production of urban land predominantly relies on large parcels through intensive capital investments. This mainstream approach significantly shapes the overall urban form, subsequently influencing the quality of life through the perceived characteristics of the form and program of the planned districts. Consequently, critical urban design theory increasingly prioritizes the plot as the fundamental unit of future urban development. While ‘plot-based urbanism’ presents a responsive approach to this issue, there remains a notable gap in systematic methodologies that can be universally applied across different contexts. In this paper, the authors propose an algorithmic framework that can be employed as a design control tool based on the associative logic of plot-based urban formation. The model framework comprises three steps: (1) plot layout generation, (2) building configuration, and (3) incremental formation of the block fabric. The applied model demonstrates the compositional variation and coherence within the urban block while concurrently optimizing the climatic performance of the emerging fabric.
</description>
<pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163376</guid>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing uncertainties in parton showers at double logarithmic accuracy for jet quenching studies</title>
<link>https://hdl.handle.net/1721.1/163375</link>
<description>Assessing uncertainties in parton showers at double logarithmic accuracy for jet quenching studies
Andres, Carlota; Apolinário, Liliana; Armesto, Néstor; Cordeiro, André; Dominguez, Fabio; Milhano, José G.
We present a systematic study of how different choices of ordering and phase-space constraints in parton showers affect the space-time structure of vacuum parton cascades and their interface with jet quenching models. Using a simplified Monte Carlo shower implemented at double logarithmic accuracy, we analyse variations in emission patterns and resulting phase-space arising from three ordering variables: inverse formation time, invariant mass, and opening angle. These are coupled with two kinematic reconstruction schemes defined by different phase-space constraints. We show that, while global features are relatively stable, differences emerge in the temporal evolution of the cascade. To probe the impact of these differences, we introduce a simplified model for in-medium energy loss based on formation time and colour decoherence, enabling us to evaluate the sensitivity of quenching observables to the underlying space-time structure of the vacuum shower. We further quantify the role of time-ordering violations and propose strategies to preserve a consistent space-time interpretation. Lastly, we explore a range of alternative quenching models confirming the robustness of our conclusions. Our findings highlight the importance of maintaining a coherent space-time structure in parton shower algorithms when modelling jet propagation in an extended QCD medium, as this structure becomes a physically meaningful and testable component of the jet itself.
</description>
<pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163375</guid>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Caribbean Creep meets Chesapeake Creep: marine bioinvasions and community shifts along the Mid-Atlantic Coast, USA</title>
<link>https://hdl.handle.net/1721.1/163373</link>
<description>Caribbean Creep meets Chesapeake Creep: marine bioinvasions and community shifts along the Mid-Atlantic Coast, USA
Fowler, Amy E.; Blakeslee, April M. H.; Davinack, Andrew; Aguilar, Robert; Andersen, Miranda; Benadon, Clara; Choong, Henry H. C.; Green-Gavrielidis, Lindsay; Greenberg, Sarah R.; Hartshorn, El; Hobbs, Niels-Viggo; Labbe, Sara; Larson, Kristen
The Mid-Atlantic waters of North America are warming faster than &gt; 90% of other global oceans, leading to significant increases in bottom water temperatures and influencing shifts in marine community structure. Given this modern-day scenario of significant community shifts over space and time, baseline surveys of species diversity are increasingly valuable. Therefore, we performed the first-ever marine bioinvasions Rapid Assessment Survey (RAS) along the Mid-Atlantic waters of the United States in June 2023, focused on marina floating pontoons in Virginia, Maryland, Delaware, and New Jersey. We recorded 29 non-indigenous species, 16 cryptogenic species, and 10 species that have expanded their ranges in the Mid-Atlantic. Seven of these 10 species have expanded northwards from southern locations in the Caribbean (“Caribbean Creep”) or the western Atlantic (“Chesapeake Creep”), and three have expanded southwards. Five non-indigenous species (NIS) were found at more than 60% of the 10 sampled sites: the bryozoans Bugula neritina, Schizoporella pungens, and Tricellaria inopinata, the macroalga Codium fragile subsp. fragile, and the sea anemone Aiptasiogeton eruptaurantia. We did not document any new non-indigenous species not already recorded on the Western Atlantic coast. All 10 communities were distinctly different, and species dominance varied by latitude and by site. This first-ever RAS of the Mid-Atlantic waters of the United States provides critical insight into how marine communities have been and are changing as a result of colonization by NIS, including those that have expanded their ranges as a result of human-induced climate change.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163373</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of the distribution of nuclear magnetization in a molecule</title>
<link>https://hdl.handle.net/1721.1/163372</link>
<description>Observation of the distribution of nuclear magnetization in a molecule
Wilkins, S. G.; Udrescu, S. M.; Athanasakis-Kaklamanakis, M.; Garcia Ruiz, R. F.; Belosevic, I.; Berger, R.; Bissell, M. L.; Breier, A. A.; Brinson, A. J.; Chrysalidis, K.; Cocolios, T. E.; de Groote, R. P.; Dorne, A.; Flanagan, K. T.; Franchoo, S.; Gaul, K.; Geldhof, S.; Giesen, T. F.; Hanstorp, D.; Heinke, R.; Isaev, T.; Koszorus, A.; Kujanpa, S.; Lalanne, L.; Neyens, G.; Nichols, M.; Perrett, H.A.; Reilly, J.R.; Skripnikov, L. V.; Rothe, S.; van den Borne, B.; Wang, W.; Wessolek, J.; Yang, X.F.; Zulch, C.Z.
Rapid progress in the experimental control and interrogation of molecules, combined with developments in precise calculations of their structure, is enabling new opportunities in the investigation of nuclear and particle physics phenomena. Molecules containing heavy, octupole-deformed nuclei such as radium are of particular interest for such studies, offering an enhanced sensitivity to the properties of fundamental particles and interactions. Here, we report precision laser spectroscopy measurements and theoretical calculations of the structure of the radioactive radium monofluoride molecule, 225Ra19F. Our results allow fine details of the short-range electron-nucleus interaction to be revealed, indicating the high sensitivity of this molecule to the distribution of magnetization, currently a poorly constrained nuclear property, within the radium nucleus. These results provide a direct and stringent test of the description of the electronic wavefunction inside the nuclear volume, highlighting the suitability of these molecules to investigate subatomic phenomena.
</description>
<pubDate>Thu, 23 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163372</guid>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Performance of Metal Hydride Composite Neutron Shields for Compact, High-Power Fusion Reactors</title>
<link>https://hdl.handle.net/1721.1/163370</link>
<description>Design and Performance of Metal Hydride Composite Neutron Shields for Compact, High-Power Fusion Reactors
Fletcher, Jack W; Peterson, Ethan E; Trelewicz, Jason R; Snead, Lance L
We present the process and results of neutronics-driven shielding design using metal and ceramic matrix metal hydride neutron shields within the context of compact, high-power tokamaks. In particular, hafnium hydrides were considered within a matrix of stainless steel or magnesium oxide and contrasted with established and novel fast neutron shielding materials. These shielding materials are found to substantially increase the lifetime of toroidal field magnets made of high-temperature superconductors by a factor of up to 14.5. Specifically, a stainless steel–20% HfH1.7 thermal shield and outer neutron shield, paired with an inner tungsten carbide (WC) shield and toroidal field magnet case and winding pack both doped with 40% HfH1.7 by volume, were found to achieve a 93.1% reduction in peak fast neutron flux to high-temperature superconductor tapes. Simultaneously, this configuration reduced the total mass (and cost) of the neutron shield, as well as the nuclear heating rate of the magnet coil, in comparison to monolithic shields of WC and boron carbide.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163370</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>GeoConformal Prediction: A Model-Agnostic Framework for Measuring the Uncertainty of Spatial Prediction</title>
<link>https://hdl.handle.net/1721.1/163369</link>
<description>GeoConformal Prediction: A Model-Agnostic Framework for Measuring the Uncertainty of Spatial Prediction
Lou, Xiayin; Luo, Peng; Meng, Liqiu
Spatial prediction is a fundamental task in geography, providing essential data support for various scenarios. Recent advancements, empowered by the development of geospatial artificial intelligence (GeoAI), have primarily focused on improving prediction accuracy while overlooking reliable measurements of prediction uncertainty. Such measures are crucial for enhancing model trustworthiness and supporting responsible decision-making. To address this issue, we propose a model-agnostic uncertainty assessment method called GeoConformal Prediction (GeoCP). First, a simulation study is conducted to validate the usefulness of GeoCP. Then, we applied GeoCP to two classic spatial prediction cases, spatial regression and spatial interpolation, to evaluate its reliability. For the case of spatial regression, we used XGBoost to predict housing prices, followed by GeoCP to calculate uncertainty. Our results show that GeoCP achieved a coverage rate of 93.67 percent, whereas bootstrapping methods reached a maximum coverage of 81.00 percent after 2,000 runs. We then applied GeoCP for the case of spatial interpolation models. By comparing a GeoAI-based geostatistical model with a traditional geostatistical model (Kriging), we found that the uncertainty obtained from GeoCP aligned closely with the variance in Kriging. Finally, using GeoCP, we analyzed the sources of uncertainty in spatial prediction. We found that explicitly including local features in AI models can significantly reduce prediction uncertainty, especially in areas with strong local dependence. Our findings suggest that GeoCP holds substantial potential not only for geographic knowledge discovery but also for guiding the design of future GeoAI models, paving the way for more reliable and interpretable spatial prediction frameworks. The method is implemented in an open-source Python package named geoconformal. Key Words: conformal prediction, GeoAI, Kriging, spatial regression, spatial uncertainty.
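As background to the abstract above, the coverage guarantee it reports comes from conformal prediction. A minimal split-conformal sketch on toy data follows; this is a generic illustration of the underlying idea, not the authors' geographically weighted GeoCP implementation, and the model (a one-parameter least-squares fit) and data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (illustrative, not the paper's data)
x = rng.uniform(0, 10, 400)
y = 2 * x + rng.normal(0, 1, 400)

# Split: fit a simple model on one half, calibrate on the other
x_fit, y_fit = x[:200], y[:200]
x_cal, y_cal = x[200:], y[200:]
slope = np.sum(x_fit * y_fit) / np.sum(x_fit ** 2)  # least squares through origin

# Calibration residuals give the conformal quantile
alpha = 0.1  # target miscoverage rate
resid = np.abs(y_cal - slope * x_cal)
n = len(resid)
q = np.quantile(resid, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: [pred - q, pred + q]
x_new = 5.0
pred = slope * x_new
interval = (pred - q, pred + q)

# Empirical coverage on the calibration set is >= 1 - alpha by construction
coverage = np.mean(resid <= q)
```

The interval width 2q is the same everywhere; GeoCP's contribution is precisely to make such uncertainty spatially varying.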
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163369</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Belief revision revised</title>
<link>https://hdl.handle.net/1721.1/163368</link>
<description>Belief revision revised
Pearson, Joshua Edward
I outline a novel counterexample to the principle of belief revision, Anticipation: if both learning &#119890; and learning not-&#119890; would render belief in &#119901; unjustified, you cannot now be justified in believing &#119901;. If I am right, not only is the leading theory of belief revision false, so are various recently proposed weakenings. I develop and defend a new theory that correctly predicts the failures of Anticipation I argue for, predicated on the simple idea that one is justified in ruling out a possibility just in case that possibility is sufficiently improbable.
</description>
<pubDate>Sun, 27 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163368</guid>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporating Deep Learning Into System Dynamics: Amortized Bayesian Inference for Scalable Likelihood‐Free Parameter Estimation</title>
<link>https://hdl.handle.net/1721.1/163367</link>
<description>Incorporating Deep Learning Into System Dynamics: Amortized Bayesian Inference for Scalable Likelihood‐Free Parameter Estimation
Rahmandad, Hazhir; Akhavan, Ali; Jalali, Mohammad S
Estimating parameters and their credible intervals for complex system dynamics models is challenging but critical to continuous model improvement and reliable communication with an increasing fraction of audiences. The purpose of this study is to integrate Amortized Bayesian Inference (ABI) methods with system dynamics. Utilizing Neural Posterior Estimation (NPE), we train neural networks using synthetic data (pairs of ground truth parameters and outcome time series) to estimate parameters of system dynamics models. We apply this method to two example models: a simple Random Walk model and a moderately complex SEIRb model. We show that the trained neural networks can output the posterior for parameters instantly given new unseen time series data. Our analysis highlights the potential of ABI to facilitate a principled, scalable, and likelihood-free inference workflow that enhances the integration of models of complex systems with data. Accompanying code streamlines application to diverse system dynamics models.
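The amortization idea in the abstract above (pay the simulation and training cost once offline, then get instant estimates for any new series) can be sketched without neural networks. Below, a linear regressor from a summary statistic to the parameter stands in for the NPE network, and the random-walk simulator and all numbers are illustrative, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=50):
    """Toy model: random walk with drift theta; returns the series."""
    return np.cumsum(rng.normal(theta, 1.0, n))

# 1) Offline phase: simulate (parameter, outcome) training pairs
thetas = rng.uniform(-2, 2, 2000)
summaries = np.array([simulate(t)[-1] / 50 for t in thetas])  # mean step size

# 2) "Train" the amortized estimator: a linear fit from summary to theta
#    (an NPE network would instead output a full posterior)
A = np.vstack([summaries, np.ones_like(summaries)]).T
coef, _, _, _ = np.linalg.lstsq(A, thetas, rcond=None)

# 3) Online phase: instant point estimate for new, unseen data
new_series = simulate(1.5)
s = new_series[-1] / 50
theta_hat = coef[0] * s + coef[1]
```

The key property is that step 3 is a single dot product regardless of how expensive the simulator is; that is what "amortized" buys.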
</description>
<pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163367</guid>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Influences of Non‐Oberbeck–Boussinesq Effects on Tracer Transport in Icy Ocean Worlds</title>
<link>https://hdl.handle.net/1721.1/163366</link>
<description>Influences of Non‐Oberbeck–Boussinesq Effects on Tracer Transport in Icy Ocean Worlds
Wang, Shuang; Kang, Wanying
The subsurface oceans on icy satellites are potentially habitable. To understand their habitability, we need to know how tracers with various lifetimes distribute. Convection is the main vehicle for tracer transport, and we expect convection on icy satellites to differ from regular rotating convection, because as pressure increases, water's thermal expansivity can vary by orders of magnitude or even reverse sign near the freezing point. Any variation of fluid properties would break the Oberbeck–Boussinesq approximation, leading to non-Oberbeck–Boussinesq (NOB) effects, measured by a coefficient ϵ. In this work, we identify two competing impacts of NOB effects on tracer transport. The first promotes overall upward tracer transport at ϵ²-order, while the second enhances transport near the bottom source but inhibits transport further up at ϵ³-order. In the weakly nonlinear regime, the former effect dominates, causing more tracers to reach the ice shell, while in the strongly nonlinear regime, the latter effect dominates, reducing tracer concentrations near the ice shell. By varying particle lifetimes, we find that NOB corrections are most pronounced when particle lifetime is comparable to the timescale of upward tracer transport. Additionally, when NOB effects are strong enough to create a stratified layer in the upper part of the ocean, tracer transport into the stratified layer is set by energetics. These effects are expected to prolong the transport timescale of chemical tracers or biosignatures from the seafloor to the ice shell on icy satellites.
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163366</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Single Word Change Is All You Need: Using LLMs to Create Synthetic Training Examples for Text Classifiers</title>
<link>https://hdl.handle.net/1721.1/163365</link>
<description>Single Word Change Is All You Need: Using LLMs to Create Synthetic Training Examples for Text Classifiers
Xu, Lei; Alnegheimish, Sarah; Berti‐Equille, Laure; Cuesta‐Infante, Alfredo; Veeramachaneni, Kalyan
In text classification, creating an adversarial example means subtly perturbing a few words in a sentence without changing its meaning, causing it to be misclassified by a classifier. A concerning observation is that a significant portion of adversarial examples generated by existing methods change only one word. This single-word perturbation vulnerability represents a significant weakness in classifiers, which malicious users can exploit to efficiently create a multitude of adversarial examples. This paper studies this problem and makes the following key contributions: (1) We introduce a novel metric &#120588; to quantitatively assess a classifier's robustness against single-word perturbation. (2) We present the SP-Attack, designed to exploit the single-word perturbation vulnerability, achieving a higher attack success rate and better preserving sentence meaning while reducing computation costs compared to state-of-the-art adversarial methods. (3) We propose SP-Defence, which aims to improve &#120588; by applying data augmentation in learning. Experimental results on 4 datasets and 2 masked language models show that SP-Defence improves &#120588; by 14.6% and 13.9% and decreases the attack success rate of SP-Attack by 30.4% and 21.2% on two classifiers respectively, and decreases the attack success rate of existing attack methods that involve multiple-word perturbation.
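The single-word vulnerability described above can be checked by brute force on a small model. A sketch of the idea behind the robustness metric follows, using an invented bag-of-words scoring classifier and vocabulary; the paper's actual metric and attack operate on masked language models, not this toy:

```python
# Toy sentiment classifier: sum of per-word scores, positive iff sum > 0
# (hypothetical lexicon, for illustration only)
SCORES = {"good": 2, "great": 3, "bad": -2, "awful": -3, "movie": 0, "plot": 0}

def classify(words):
    return sum(SCORES.get(w, 0) for w in words) > 0

def single_word_vulnerable(words, vocab):
    """True if replacing any ONE word flips the predicted label."""
    original = classify(words)
    for i in range(len(words)):
        for w in vocab:
            if w != words[i]:
                perturbed = words[:i] + [w] + words[i + 1:]
                if classify(perturbed) != original:
                    return True
    return False

vocab = list(SCORES)
# "good movie" (+2) flips when "good" is swapped for a negative word
vulnerable = single_word_vulnerable(["good", "movie"], vocab)        # True
# "great great great" (+9) survives every single-word swap
robust = single_word_vulnerable(["great", "great", "great"], vocab)  # False
```

A robustness metric in the spirit of the paper's &#120588; would then be the fraction of a test set for which `single_word_vulnerable` returns False.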
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163365</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation and Spatial Optimization Model of Urban Medical Resource Distribution Considering Equity and Efficiency</title>
<link>https://hdl.handle.net/1721.1/163364</link>
<description>Evaluation and Spatial Optimization Model of Urban Medical Resource Distribution Considering Equity and Efficiency
Yao, Yao; Wang, Yujia; Liang, Lin; Yan, Xiaoqin; Dong, Anning; Guan, Qingfeng; Luo, Peng
The rapidly increasing demand for medical resources in countries undergoing accelerating urbanization faces the challenge of unequal resource distribution. Despite numerous studies on the siting of medical resources aimed at improving public accessibility and efficiency of access to these resources, there is comparatively less research focusing on the equity of access to medical resources. This study establishes a framework that optimizes the distribution of medical resources by considering both equity and efficiency. We introduce an optimization allocation model for both equity and efficiency based on the location set coverage problem (LSCP). The model combines a region growing algorithm and a genetic algorithm to optimize site selection for hospitals. Taking Wuhan as the study area, the results demonstrate that the optimized service coverage increases by 21.2%, and the proportion of people served reaches 87.3%. The hospital bed utilization rate in downtown areas reaches 92.89%, while it exceeds 99% at suburban hospitals. The optimized site selection significantly enhances medical resource utilization efficiency, effectively addressing the resource distribution inequity between urban and rural areas. This study offers a novel approach to optimizing medical resource allocation, effectively balancing equity and efficiency, and providing valuable theoretical underpinnings for enhancing medical service systems in emerging urban areas.
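The location set coverage problem mentioned above seeks the fewest facility sites whose service areas cover all demand points. As a point of reference, a standard greedy heuristic is sketched below with invented sites and demand points; the study itself uses a region growing algorithm combined with a genetic algorithm, not this greedy approach:

```python
def greedy_set_cover(demand_points, candidate_sites):
    """candidate_sites: dict mapping site -> set of demand points it can serve.
    Repeatedly pick the site covering the most still-uncovered demand."""
    uncovered = set(demand_points)
    chosen = []
    while uncovered:
        best = max(candidate_sites, key=lambda s: len(candidate_sites[s] & uncovered))
        if not candidate_sites[best] & uncovered:
            break  # remaining demand is unreachable by any site
        chosen.append(best)
        uncovered -= candidate_sites[best]
    return chosen, uncovered

# Hypothetical coverage sets for three candidate hospital sites
sites = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
}
chosen, unmet = greedy_set_cover([1, 2, 3, 4, 5, 6], sites)
# Sites A and C together cover all six demand points
```

Metaheuristics such as genetic algorithms are used instead when, as here, the objective also weighs equity and utilization rather than site count alone.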
</description>
<pubDate>Sat, 05 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163364</guid>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>On the use of high‐density polyethylene bottles for long‐term storage of total alkalinity samples</title>
<link>https://hdl.handle.net/1721.1/163363</link>
<description>On the use of high‐density polyethylene bottles for long‐term storage of total alkalinity samples
Woosley, Ryan J; Neithardt, Daina; Bruno, Jessica A; Lahn, Lou
Total alkalinity (TA) plays an important role in buffering seawater and determining how much anthropogenic carbon dioxide the oceans can absorb to mitigate the rise in atmospheric concentrations. Total alkalinity varies with location, depth, and time, making it an important variable needed to quantify and monitor ocean acidification, and potentially for ocean alkalinity enhancement interventions. Currently, best practices are to use expensive high-quality borosilicate glass bottles for collecting and storing these samples. However, unlike other carbon system variables, TA is not affected by gas exchange, meaning plastic bottles may be suitable for TA sample storage. Plastic bottles are lighter, cheaper, and less prone to breakage, making them easier to handle and ship. Here, we test the suitability of high-density polyethylene (HDPE) for collection and long-term storage of TA samples. In two sets of experiments, it was determined that HDPE is not suitable for long-term storage of TA samples, as there were large changes in TA over time and the precision of duplicate samples was very poor. We hypothesize that HDPE plastic is slightly porous, leading to leaching of alkalinity either into or out of the bottle over time, impacting the value of the sample. Use of HDPE bottles for TA samples is not recommended for long-term sample storage.
</description>
<pubDate>Wed, 25 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163363</guid>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Adapting temporal preference to scarcity: A role for emotion?</title>
<link>https://hdl.handle.net/1721.1/163362</link>
<description>Adapting temporal preference to scarcity: A role for emotion?
Blain, Bastien; Globig, Laura K.; Sharot, Tali
A critical optimization problem is how to distribute resource consumption over time. Humans tend to value immediate rewards over equivalent future rewards—a phenomenon called temporal discounting. Such imbalance can lead to poor health, education, and financial decisions. It is also a hurdle for implementing sustainability policies. A major research goal is to identify factors that influence temporal discounting, so that policymakers could develop interventions to correct for this imbalance. One such factor is available resources; scarcity may increase temporal discounting. Another potential factor is emotion; negative emotions may lead to high temporal discounting. However, emotion and resources are not independent. For example, losing a large sum of money will lead to negative affect. Here, we take advantage of one of the largest global ‘income shocks’ in history to tease apart the role of emotion and income on temporal discounting. We tested 1,145 individuals as the market was crashing in late March 2020 and unemployment was rising, and then retested 200 of those individuals as the market was recovering in June 2020. We found that income shock was strongly related to an increase in delay discounting using cross-sectional and longitudinal data. Importantly, this relationship was independent of the negative impact on affect. These findings suggest that, contrary to widely held assumptions, people directly adapt delay discounting to environmental constraints, without the need for input from the affective system. This independence may be adaptive, as affect is a noisy reflection of environmental constraints, which may lead to suboptimal choices.
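Delay discounting as discussed above is commonly modeled with a hyperbolic discount function, V = A / (1 + kD), where a larger k means steeper discounting. A minimal sketch follows; the functional form is the standard one from the discounting literature, and the k values and amounts are illustrative, not the study's estimates:

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value of a reward delayed by D days."""
    return amount / (1.0 + k * delay_days)

def prefers_immediate(immediate, delayed, delay_days, k):
    """Does a discounter with parameter k take the smaller-sooner reward?"""
    return immediate > discounted_value(delayed, delay_days, k)

# A higher k (steeper discounting) flips the choice toward the immediate reward:
# offered $50 now vs. $100 in 30 days
low_k = prefers_immediate(50, 100, 30, k=0.01)    # 100/(1+0.3) ~ 76.9: waits
high_k = prefers_immediate(50, 100, 30, k=0.10)   # 100/(1+3.0) = 25.0: takes now
```

Fitting k to each participant's choices at two time points is what lets a study of this kind quantify an increase in discounting after an income shock.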
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163362</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Shattering in the Ising p-spin glass model</title>
<link>https://hdl.handle.net/1721.1/163361</link>
<description>Shattering in the Ising p-spin glass model
Gamarnik, David; Jagannath, Aukosh; Kızıldağ, Eren C.
We study the Ising p-spin glass model for large p. We show that for any inverse temperature ln 2 &lt; β &lt; 2 ln 2 and any large p, the model exhibits shattering: w.h.p. as n → ∞, there exist exponentially many well-separated clusters such that (a) each cluster has exponentially small Gibbs mass, and (b) the clusters collectively contain all but a vanishing fraction of the Gibbs mass. Moreover, these clusters consist of configurations with energy near β. The range of temperatures for which shattering occurs lies within the replica-symmetric region. To the best of our knowledge, this is the first shattering result regarding the Ising p-spin glass model. Furthermore, we show that for any γ &gt; 0 and any large enough p, the model exhibits an intricate geometrical property known as the multi-Overlap Gap Property above the energy value γ 2 ln 2. Our proofs are elementary and, in particular, based on simple applications of the first and second moment methods.
</description>
<pubDate>Thu, 11 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163361</guid>
<dc:date>2025-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Combined mechanical ventilatory and mechanical circulatory support aids pulmonary vascular state in cardiogenic shock</title>
<link>https://hdl.handle.net/1721.1/163360</link>
<description>Combined mechanical ventilatory and mechanical circulatory support aids pulmonary vascular state in cardiogenic shock
Lamberti, Kimberly K.; Edelman, Elazer R.; Keller, Steven P.
Background: Percutaneous ventricular assist devices (pVADs) support patients in circulatory failure and, increasingly, concomitant respiratory failure. The presence of co-existent lung disease creates a management challenge due to cardiopulmonary interactions, especially when mechanical ventilation and mechanical circulatory support are applied simultaneously. Enhanced understanding of the combined effects of these devices is necessary to better inform care for circulatory failure patients. Methods: A porcine model of titratable acute cardiogenic shock was used to quantify the effect of pVAD support on cardiac loading states in five intubated animals with positive pressure ventilation and varied intrathoracic pressure. Cardiovascular hemodynamics were assessed across positive end-expiratory pressure (PEEP) ramps in animals in health, in health with pVAD, and in pVAD-supported cardiogenic shock induced via coronary microembolization. Results: This study employed invasive physiological metrics and assessment of right and left ventricular pressure-volume loops to recreate classic Frank-Starling curves. Increased intrathoracic pressure altered transmural pressure in the ventricles and the pulmonary vasculature and resulted in decreased venous return and stroke volume while increasing end-diastolic pressure, consistent with decreased ventricular compliance. In pVAD-supported cardiogenic shock, elevated PEEP enhanced left ventricular output and increased pulmonary vascular compliance in several animals, contrary to the traditional decrements observed with elevated PEEP. The right ventricular functional response aligned with these varied responses in pulmonary vascular state. Conclusions: These results demonstrate that combined use of cardiopulmonary support devices in cardiogenic shock can create variable responses compared to classic physiological understanding. In pVAD-supported cardiogenic shock, an increase in ventilatory PEEP increased unloading of the heart and improved right ventricular function, counter to traditional findings. This demonstrates that combined use of these technologies could be leveraged to optimize a patient’s volume status in complex shock and provides promise for management of patients with cardiopulmonary failure requiring simultaneous use of mechanical circulatory support and mechanical ventilation.
</description>
<pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163360</guid>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>On Chip-Firing on Undirected Binary Trees</title>
<link>https://hdl.handle.net/1721.1/163359</link>
<description>On Chip-Firing on Undirected Binary Trees
Inagaki, Ryota; Khovanova, Tanya; Luo, Austin
Chip-firing is a combinatorial game played on an undirected graph in which we place chips on vertices and disperse them. We study chip-firing on an infinite binary tree in which we add a self-loop to the root to ensure each vertex has degree 3. A vertex can fire if the number of chips placed on it is at least its degree. In our case, a vertex can fire if it has at least three chips, and it fires by dispersing one chip to each neighbor. Motivated by a 2023 paper by Musiker and Nguyen on this setting of chip-firing, we give an upper bound for the number of stable configurations when we place 2^ℓ − 1 labeled chips at the root. When starting with N chips at the root, where N is a positive integer, we determine the number of times each vertex fires when N is not necessarily of the form 2^ℓ − 1. We also calculate the total number of fires in this case.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163359</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of charged hadron multiplicity in Au+Au collisions at √s_NN = 200 GeV with the sPHENIX detector</title>
<link>https://hdl.handle.net/1721.1/163358</link>
<description>Measurement of charged hadron multiplicity in Au+Au collisions at √s_NN = 200 GeV with the sPHENIX detector
Abdulhamid, M. I.; Acharya, U.; Adams, E. R.; Adawi, G.; Aidala, C. A.; Akiba, Y.; Alfred, M.; Ali, S.; Alsayegh, A.; Altaf, S.; Amedi, H.; Anderson, D. M.; Andrieux, V. V.; Angerami, A.; Applegate, N.; Aso, H.; Aune, S.
The pseudorapidity distribution of charged hadrons produced in Au+Au collisions at a center-of-mass energy of √s_NN = 200 GeV is measured using data collected by the sPHENIX detector. Charged hadron yields are extracted by counting cluster pairs in the inner and outer layers of the Intermediate Silicon Tracker, with corrections applied for detector acceptance, reconstruction efficiency, combinatorial pairs, and contributions from secondary decays. The measured distributions cover |η| &lt; 1.1 across various centralities, and the average pseudorapidity density of charged hadrons at mid-rapidity is compared to predictions from Monte Carlo heavy-ion event generators. This result, featuring full azimuthal coverage at mid-rapidity, is consistent with previous experimental measurements at the Relativistic Heavy Ion Collider, thereby supporting the broader sPHENIX physics program.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163358</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Search for a heavy pseudoscalar Higgs boson decaying to a 125 GeV Higgs boson and a Z boson in final states with two tau and two light leptons in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163357</link>
<description>Search for a heavy pseudoscalar Higgs boson decaying to a 125 GeV Higgs boson and a Z boson in final states with two tau and two light leptons in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
A search for a heavy pseudoscalar Higgs boson, A, decaying to a 125 GeV Higgs boson h and a Z boson is presented. The h boson is identified via its decay to a pair of tau leptons, while the Z boson is identified via its decay to a pair of electrons or muons. The search targets the production of the A boson via the gluon-gluon fusion process, gg → A, and in association with bottom quarks, bb̄A. The analysis uses a data sample corresponding to an integrated luminosity of 138 fb−1 collected with the CMS detector at the CERN LHC in proton-proton collisions at a centre-of-mass energy of √s = 13 TeV. Constraints are set on the product of the cross sections of the A production mechanisms and the A → Zh decay branching fraction. The observed (expected) upper limit at 95% confidence level ranges from 0.049 (0.060) pb to 1.02 (0.79) pb for the gg → A process and from 0.053 (0.059) pb to 0.79 (0.61) pb for the bb̄A process in the probed range of the A boson mass, mA, from 225 GeV to 1 TeV. The results of the search are used to constrain parameters within the M^125_h,EFT benchmark scenario of the minimal supersymmetric extension of the standard model. Values of tan β below 2.2 are excluded in this scenario at 95% confidence level for all mA values in the range from 225 to 350 GeV.
</description>
<pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163357</guid>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of inclusive and differential cross sections for top quark production in association with a Z boson in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163356</link>
<description>Measurements of inclusive and differential cross sections for top quark production in association with a Z boson in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
Measurements are presented of inclusive and differential cross sections for Z boson associated production of top quark pairs (tt̄Z) and single top quarks (tZq or tWZ). The data were recorded in proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb−1. Events with three or more leptons, electrons or muons, are selected and a multiclass deep neural network is used to separate three event categories: the tt̄Z and tWZ processes, the tZq process, and the backgrounds. A profile likelihood approach is used to unfold the differential cross sections, to account for systematic uncertainties, and to determine the correlations between the two signal categories in one global fit. The inclusive cross sections for a dilepton invariant mass between 70 and 110 GeV are measured to be 1.14 ± 0.07 pb for the sum of tt̄Z and tWZ, and 0.81 ± 0.10 pb for tZq, in good agreement with theoretical predictions.
</description>
<pubDate>Wed, 26 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163356</guid>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Priming agents transiently reduce the clearance of cell-free DNA to improve liquid biopsies</title>
<link>https://hdl.handle.net/1721.1/163355</link>
<description>Priming agents transiently reduce the clearance of cell-free DNA to improve liquid biopsies
Martin-Alonso, Carmen; Tabrizi, Shervin; Xiong, Kan; Blewett, Timothy; Sridhar, Sainetra; Crnjac, Andjela; Patel, Sahil; An, Zhenyi; Bekdemir, Ahmet; Shea, Douglas; Wang, Shih-Ting; Rodriguez-Aponte, Sergio; Naranjo, Christopher A; Rhoades, Justin; Kirkpatrick, Jesse D; Fleming, Heather E; Amini, Ava P; Golub, Todd R; Love, J Christopher; Bhatia, Sangeeta N; Adalsteinsson, Viktor A
Liquid biopsies enable early detection and monitoring of diseases such as cancer, but their sensitivity remains limited by the scarcity of analytes such as cell-free DNA (cfDNA) in blood. Improvements to sensitivity have primarily relied on enhancing sequencing technology ex vivo. We sought to transiently augment the level of circulating tumor DNA (ctDNA) in a blood draw by attenuating its clearance in vivo. We report two intravenous priming agents given 1 to 2 hours before a blood draw to recover more ctDNA. Our priming agents consist of nanoparticles that act on the cells responsible for cfDNA clearance and DNA-binding antibodies that protect cfDNA. In tumor-bearing mice, they greatly increase the recovery of ctDNA and improve the sensitivity for detecting small tumors.
</description>
<pubDate>Fri, 19 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163355</guid>
<dc:date>2024-01-19T00:00:00Z</dc:date>
</item>
<item>
<title>Vaccine targeting to mucosal lymphoid tissues promotes humoral immunity in the gastrointestinal tract</title>
<link>https://hdl.handle.net/1721.1/163354</link>
<description>Vaccine targeting to mucosal lymphoid tissues promotes humoral immunity in the gastrointestinal tract
Kocabiyik, Ozgun; Amlashi, Parastoo; Vo, A Lina; Suh, Heikyung; Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Love, J Christopher; Andrabi, Raiees; Irvine, Darrell J
Viruses, bacteria, and parasites frequently cause infections in the gastrointestinal tract, but traditional vaccination strategies typically elicit little or no mucosal antibody responses. Here, we report a strategy to effectively concentrate immunogens and adjuvants in gut-draining lymph nodes (LNs) to induce gut-associated mucosal immunity. We prepared nanoemulsions (NEs) based on biodegradable oils commonly used as vaccine adjuvants, which encapsulated a potent Toll-like receptor agonist and displayed antigen conjugated to their surface. Following intraperitoneal administration, these NEs accumulated in gut-draining mesenteric LNs, priming strong germinal center responses and promoting B cell class switching to immunoglobulin A (IgA). Optimized NEs elicited 10- to 1000-fold higher antigen-specific IgG and IgA titers in the serum and feces, respectively, compared to free antigen mixed with NE, and strong neutralizing antibody titers against severe acute respiratory syndrome coronavirus 2. Thus, robust gut humoral immunity can be elicited by exploiting the unique lymphatic collection pathways of the gut with a lymph-targeting vaccine formulation.
</description>
<pubDate>Wed, 29 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163354</guid>
<dc:date>2024-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>Expansion of tumor-reactive CD8+ T cell clonotypes occurs in the spleen in response to immune checkpoint blockade</title>
<link>https://hdl.handle.net/1721.1/163353</link>
<description>Expansion of tumor-reactive CD8+ T cell clonotypes occurs in the spleen in response to immune checkpoint blockade
Morgan, Duncan M; Horton, Brendan L; Bhandarkar, Vidit; Van, Richard; Dinter, Teresa; Zagorulya, Maria; Love, J Christopher; Spranger, Stefani
Immune checkpoint blockade (ICB) enhances T cell responses against cancer, leading to long-term survival in a fraction of patients. CD8+ T cell differentiation in response to chronic antigen stimulation is highly complex, and it remains unclear precisely which T cell differentiation states at which anatomic sites are critical for the response to ICB. We identified an intermediate-exhausted population in the white pulp of the spleen which underwent significant expansion in response to ICB and gave rise to the majority of tumor-infiltrating clonotypes. Increased systemic antigen perturbed differentiation of this population towards a more circulatory exhausted_KLR state, while a lack of cross-presented tumor antigen blunted its differentiation in the spleen. An analogous population of exhausted_KLR CD8+ T cells in human blood samples exhibited diminished tumor-trafficking ability. Collectively, our data demonstrate the critical role of antigen density within the spleen for the differentiation and expansion of T cell clonotypes in response to ICB.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163353</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Drell-Yan transverse-momentum spectra at N3LL′ and approximate N4LL with SCETlib</title>
<link>https://hdl.handle.net/1721.1/163352</link>
<description>Drell-Yan transverse-momentum spectra at N3LL′ and approximate N4LL with SCETlib
Billis, Georgios; Michel, Johannes K. L.; Tackmann, Frank J.
We provide state-of-the-art precision QCD predictions for the fiducial W and Z boson transverse momentum spectra at the LHC at N3LL′ and approximate N4LL in resummed perturbation theory, matched to available O(α_s^3) fixed-order results. Our predictions consistently combine all information from across the spectrum in a unified way, ranging from the nonperturbative region of small transverse momenta to the fixed-order tail, with an emphasis on estimating the magnitude of residual perturbative uncertainties, and in particular of those related to the matching. Parametric uncertainties related to the strong coupling, the collinear PDFs, and the nonperturbative transverse momentum-dependent (TMD) dynamics are studied in detail. To assess the latter, we explicitly demonstrate how the full complexity of flavor and Bjorken x-dependent TMD dynamics can be captured by a single, effective nonperturbative function for the resonant production of any given vector boson at a given collider. We point out that the cumulative p_T^Z cross section at the level of precision enabled by our predictions provides strong constraining power for PDF determinations at full N3LO.
</description>
<pubDate>Tue, 25 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163352</guid>
<dc:date>2025-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Search for a heavy resonance decaying into a Z and a Higgs boson in events with an energetic jet and two electrons, two muons, or missing transverse momentum in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163351</link>
<description>Search for a heavy resonance decaying into a Z and a Higgs boson in events with an energetic jet and two electrons, two muons, or missing transverse momentum in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
A search is presented for a heavy resonance decaying into a Z boson and a Higgs (H) boson. The analysis is based on data from proton-proton collisions at a centre-of-mass energy of 13 TeV corresponding to an integrated luminosity of 138 fb−1, recorded with the CMS experiment in the years 2016–2018. Resonance masses between 1.4 and 5 TeV are considered, resulting in large transverse momenta of the Z and H bosons. Final states that result from Z boson decays to pairs of electrons, muons, or neutrinos are considered. The H boson is reconstructed as a single large-radius jet, recoiling against the Z boson. Machine-learning flavour-tagging techniques are employed to identify decays of a Lorentz-boosted H boson into pairs of charm or bottom quarks, or into four quarks via the intermediate H → WW* and ZZ* decays. The analysis targets H boson decays that were not generally included in previous searches using the H → bb̄ channel. Compared with previous analyses, the sensitivity for high resonance masses is improved significantly in the channel where at most one b quark is tagged.
</description>
<pubDate>Thu, 13 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163351</guid>
<dc:date>2025-02-13T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the CKM angle γ in B± → DK*(892)± decays</title>
<link>https://hdl.handle.net/1721.1/163350</link>
<description>Measurement of the CKM angle γ in B± → DK*(892)± decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
Measurements of CP observables and the CKM angle γ are performed in B± → DK*(892)± decays, where D represents a superposition of D0 and D̄0 states, using the LHCb dataset collected during Run 1 (2011–2012) and Run 2 (2015–2018). A study of this channel is presented with the D meson reconstructed in two-body final states K±π∓, K+K− and π+π−; four-body final states K±π∓π±π∓ and π+π−π+π−; and three-body final states K_S^0 π+π− and K_S^0 K+K−. This analysis includes the first observation of the suppressed B± → [π±K∓]DK*± and B± → [π±K∓π±π∓]DK*± decays. The combined result gives γ = (63 ± 13)°.
</description>
<pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163350</guid>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the tt̄H and tH production rates in the H → bb̄ decay channel using proton-proton collision data at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163349</link>
<description>Measurement of the tt̄H and tH production rates in the H → bb̄ decay channel using proton-proton collision data at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
An analysis of the production of a Higgs boson (H) in association with a top quark-antiquark pair (tt̄H) or a single top quark (tH) is presented. The Higgs boson decay into a bottom quark-antiquark pair (H → bb̄) is targeted, and three different final states of the top quark decays are considered, defined by the number of leptons (electrons or muons) in the event. The analysis utilises proton-proton collision data collected at the CERN LHC with the CMS experiment at √s = 13 TeV in 2016–2018, which correspond to an integrated luminosity of 138 fb−1. The observed tt̄H production rate relative to the standard model expectation is 0.33 ± 0.26 = 0.33 ± 0.17 (stat) ± 0.21 (syst). Additionally, the tt̄H production rate is determined in intervals of Higgs boson transverse momentum. An upper limit at 95% confidence level is set on the tH production rate of 14.6 times the standard model prediction, with an expectation of 19.3 (+9.2/−6.0). Finally, constraints are derived on the strength and structure of the coupling between the Higgs boson and the top quark from simultaneous extraction of the tt̄H and tH production rates, and the results are combined with those obtained in other Higgs boson decay channels.
</description>
<pubDate>Fri, 14 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163349</guid>
<dc:date>2025-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Differential cross section measurements for the production of top quark pairs and of additional jets using dilepton events from pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163348</link>
<description>Differential cross section measurements for the production of top quark pairs and of additional jets using dilepton events from pp collisions at √s = 13 TeV
Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Lechner, L.; Liko, D.; Mikulec, I.; Paulitsch, P.; Pitters, F. M.; Schieck, J.; Schöfbeck, R.
Differential cross sections for top quark pair (tt̄) production are measured in proton-proton collisions at a center-of-mass energy of 13 TeV using a sample of events containing two oppositely charged leptons. The data were recorded with the CMS detector at the CERN Large Hadron Collider and correspond to an integrated luminosity of 138 fb−1. The differential cross sections are measured as functions of kinematic observables of the tt̄ system, the top quark and antiquark and their decay products, as well as of the number of additional jets in the event. The results are presented as functions of up to three variables and are corrected to the parton and particle levels. When compared to standard model predictions based on quantum chromodynamics at different levels of accuracy, it is found that the calculations do not always describe the observed data. The deviations are found to be largest for the multi-differential cross sections.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163348</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Search for dark matter produced in association with a pair of bottom quarks in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163347</link>
<description>Search for dark matter produced in association with a pair of bottom quarks in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.
A search for dark matter (DM) particles produced in association with bottom quarks is presented. The analysis uses proton-proton collision data at a center-of-mass energy of √s = 13 TeV, corresponding to an integrated luminosity of 138 fb−1. The search is performed in a final state with large missing transverse momentum and a pair of jets originating from bottom quarks. No significant excess of data is observed with respect to the standard model expectation. Results are interpreted in the context of a type-II two-Higgs-doublet model with an additional light pseudoscalar (2HDM+a). An upper limit is set on the mass of the lighter pseudoscalar, probing masses up to 260 GeV at 95% confidence level. Sensitivity to the parameter space with the ratio of the vacuum expectation values of the two Higgs doublets, tan β, greater than 15 is achieved, capitalizing on the enhancement of couplings between pseudoscalars and bottom quarks at high tan β.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163347</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Boundary terms in string field theory</title>
<link>https://hdl.handle.net/1721.1/163346</link>
<description>Boundary terms in string field theory
Fırat, Atakan H.; Mamade, Raji A.
We supplement the string field theory action with boundary terms to make its variational principle well-posed. Central to our considerations is the violation of the stress-energy tensor conservation in non-compact CFTs due to the boundary terms. This manifests as the failure of the cyclicity of the BRST operator, which encodes the target space integration by parts identities at the level of the worldsheet. Using this failure, we argue that the free closed string field theory action admits a well-posed variational principle upon including an additional boundary contribution. We explicitly work out the resulting action up to the massless level and show that it is related to the expansion of the low-energy effective string action endowed with the Gibbons-Hawking-York term on a flat background. We also discuss the structure of the boundary terms in the interacting theory.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163346</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Search for resonant pair production of Higgs bosons in the bb̄bb̄ final state using large-area jets in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163244</link>
<description>Search for resonant pair production of Higgs bosons in the bb̄bb̄ final state using large-area jets in proton-proton collisions at √s = 13 TeV
Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Frühwirth, R.; Jeitler, M.; Krammer, N.; Lechner, L.; Liko, D.; Mikulec, I.; Paulitsch, P.; Pitters, F. M.
A search is presented for the resonant production of a pair of standard model-like Higgs bosons using data from proton-proton collisions at a centre-of-mass energy of 13 TeV, collected by the CMS experiment at the CERN LHC in 2016–2018, corresponding to an integrated luminosity of 138 fb−1. The final state consists of two b quark-antiquark pairs. The search is conducted in the region of phase space where at least one of the pairs is highly Lorentz-boosted and is reconstructed as a single large-area jet. The other pair may be either similarly merged or resolved, the latter reconstructed using two b-tagged jets. The data are found to be consistent with standard model processes and are interpreted as 95% confidence level upper limits on the product of the cross sections and the branching fractions of the spin-0 radion and the spin-2 bulk graviton that arise in warped extradimensional models. The limits set are in the range 9.74–0.29 fb and 4.94–0.19 fb for a narrow radion and a narrow graviton, respectively, with masses between 1 and 3 TeV. For a radion and for a bulk graviton with widths 10% of their masses, the limits are in the range 12.5–0.35 fb and 8.23–0.23 fb, respectively, for the same masses. These limits result in the exclusion of a narrow-width graviton with a mass below 1.2 TeV, and of narrow and 10%-width radions with masses below 2.6 and 2.9 TeV, respectively.
</description>
<pubDate>Fri, 07 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163244</guid>
<dc:date>2025-02-07T00:00:00Z</dc:date>
</item>
<item>
<title>Future Circular Collider Feasibility Study Report</title>
<link>https://hdl.handle.net/1721.1/163243</link>
<description>Future Circular Collider Feasibility Study Report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, A.; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
Volume 3 of the FCC Feasibility Report presents studies related to civil engineering, the development of a project implementation scenario, and environmental and sustainability aspects. The report details the iterative improvements made to the civil engineering concepts since 2018, taking into account subsurface conditions, accelerator and experiment requirements, and territorial considerations. It outlines a technically feasible and economically viable civil engineering configuration that serves as the baseline for detailed subsurface investigations, construction design, cost estimation, and project implementation planning. Additionally, the report highlights ongoing subsurface investigations in key areas to support the development of an improved 3D subsurface model of the region. The report describes the development of the project scenario based on the ‘avoid-reduce-compensate’ iterative optimisation approach. The reference scenario balances optimal physics performance with territorial compatibility, implementation risks, and costs. Environmental field investigations covering almost 600 hectares of terrain—including numerous urban, economic, social, and technical aspects—confirmed the project’s technical feasibility and contributed to the preparation of essential input documents for the formal project authorisation phase. The summary also highlights the initiation of public dialogue as part of the authorisation process. The results of a comprehensive socio-economic impact assessment, which included significant environmental effects, are presented. Even under the most conservative and stringent conditions, a positive benefit-cost ratio for the FCC-ee is obtained. Finally, the report provides a summary of the studies conducted to document the current state of the environment.
</description>
<pubDate>Mon, 13 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163243</guid>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Locational and Spatial Development Patterns in U.S. Urban Micro Housing</title>
<link>https://hdl.handle.net/1721.1/163242</link>
<description>Locational and Spatial Development Patterns in U.S. Urban Micro Housing
Wang, Bing; Seiler, Michael J.; Liu, Kui; Du, Jinfeng
While previous studies of micro-housing have primarily relied on qualitative methods or case-based analyses, this study deploys a more rigorous, data-driven approach. We construct a hand-collected dataset covering 11 major U.S. cities to enable a quantitative examination of this emerging housing form. Drawing on 40 variables from 32 projects, including locational data, physical characteristics, market performance, and amenity features, we identified five distinct micro-housing typologies: TechEd, Dependent, Stand-Alone, Luxury, and Affordable Sharing Economy. In the context of increasing remote work and the growing influence of the sharing economy, these distinct micro-housing types are becoming increasingly relevant as an urban development model. This paper represents a first step toward systematically understanding these building typologies and uncovers their locational patterns through empirical analysis.
</description>
<pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163242</guid>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Psyche Mission Description and Design Rationale</title>
<link>https://hdl.handle.net/1721.1/163241</link>
<description>Psyche Mission Description and Design Rationale
Polanskey, Carol A.; Elkins-Tanton, Linda T.; Bell, James F.; Alonge, Eleanor K.; Bairstow, Sarah H.; Binzel, Richard P.; Biswas, Abhijit; Bury, Luke; Cisneros, Ernest; Han, Dongsuk; Jun, Insoo; Klipstein, William M.; Lawrence, David J.; McCoy, Timothy J.; Mastrodemos, Nickolaos
The Psyche spacecraft launched on October 13, 2023, to journey to the asteroid of the same name. Psyche is the largest M-class asteroid and possibly the remnant core of an early differentiated planetesimal that was disrupted by collisions. The Psyche mission will test that hypothesis as the 14th mission in NASA’s Discovery Program. An alternative hypothesis is that the asteroid is unmelted primordial material. We describe the proposal competition process leading to selection of the mission and its context with other small body missions. This paper will briefly introduce the three science instruments, gravity science investigation, and Deep Space Optical Communications technology demonstration, leading into a detailed explanation of the science mission architecture. The orbital science phase is divided into a series of circular mapping orbits at four distinct altitudes, each selected to address specific science objectives. The requirements and objectives for each orbit are accompanied by an assessment of the effectiveness of each phase. We discuss the structure of the Psyche team during the operations phase along with the roles and responsibilities of the science and flight operations teams. Key elements of mission operations that are unique to the Psyche mission are provided. The Science Data Center manages and archives the Psyche mission data. The contents of the archive data sets for each instrument are outlined as well as the interfaces between the Science Data Center, the instrument teams, and the Planetary Data System.
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163241</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Perfect Matchings</title>
<link>https://hdl.handle.net/1721.1/163240</link>
<description>Quantum Perfect Matchings
Cui, David; Mančinska, Laura; Nezhadi, Seyed S.; Roberson, David E.
We investigate quantum and nonsignaling generalizations of perfect matchings in graphs using nonlocal games. Specifically, we introduce nonlocal games that test for L-perfect matchings in bipartite graphs, perfect matchings in general graphs and hypergraphs, and fractional perfect matchings. Our definitions come from the fact that these games are classical property tests for the corresponding matching conditions. We use the existence of perfect quantum and nonsignaling strategies for these games to define quantum and nonsignaling versions of perfect matchings. Finally, we provide characterizations of when graphs exhibit these extended properties: For nonsignaling matchings, we give a complete combinatorial characterization. In particular, a graph has a nonsignaling perfect matching if and only if it admits a fractional perfect matching that has bounded value on triangles. In bipartite graphs, the nonsignaling L-perfect matching property is achieved exactly when the left component of the graph can be split into two disjoint subgraphs: one with a classical L-perfect matching and another with left-degree 2. In the quantum setting, we show that complete graphs K_n with odd n ≥ 7 have quantum perfect matchings. We prove that a graph has a quantum perfect matching if and only if the quantum independence number of its line graph is maximal, extending a classical relationship between perfect matchings and line graph independence numbers. For bipartite graphs, we establish that the L-perfect matching game does not exhibit quantum pseudotelepathy, but we characterize the quantum advantage for complete bipartite graphs K_{n,2}. Additionally, we prove that deciding quantum perfect matchings in hypergraphs is undecidable, and we leave open the question of its complexity in graphs.
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163240</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Trophic transfer of lipid-derived energy through Adélie and gentoo penguins near Palmer Station along the west Antarctic Peninsula</title>
<link>https://hdl.handle.net/1721.1/163239</link>
<description>Trophic transfer of lipid-derived energy through Adélie and gentoo penguins near Palmer Station along the west Antarctic Peninsula
Bent, Shavonna M.; Cimino, Megan A.; Connors, Elizabeth J.; Thomas, Maya I.; Miller, Carolyn A.; Fredricks, Helen F.; Van Mooy, Benjamin A. S.
Although Adélie and gentoo penguins are experiencing similar climatic conditions along the west Antarctic Peninsula (WAP), Adélie populations have decreased in the northern WAP, while gentoo populations have increased. We examined the lipid component of regurgitated prey (chick diets) from each penguin species to elucidate broader population trends. Nearly 90% of chick diet samples were composed of only krill, which we confirmed contained abundant phosphatidyl choline. Chick diets rich in fish had similar total caloric content to krill-only diets; however, these “fishy” chick diets had significantly more energy derived from triacylglycerides, an important energy-rich storage molecule, and were only found in gentoo penguins. We found that whole krill eaten by adult penguins had 1.25–3.75 times more energy than chick diets, highlighting the role of digestion in the transfer of energy to chicks. Our results highlight dynamics between climate, predator–prey relationships, and trophic transfer of energy in the Antarctic food web.
</description>
<pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163239</guid>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>IsoDAR@Yemilab: Preliminary design report—volume I (cyclotron driver)</title>
<link>https://hdl.handle.net/1721.1/163238</link>
<description>IsoDAR@Yemilab: Preliminary design report—volume I (cyclotron driver)
Winklehner, Daniel; Abs, Michel; Alonso, Jose R.; Conrad, Janet M.; Engebretson, Samuel J.; Forton, Eric; Herrod, Alexander T.; Joassin, Denis; Moon, Jarrett; de Neuter, Sébastien; Van der Kraaij, Erik; Wéry, Gil; Winkler, Eleanor; Adelmann, Andreas; Axani, Spencer N.; Barletta, William A.; Barlow, Roger; Bartoszek, Larry; Bungau, Adriana; Calabretta, Luciano
This Preliminary Design Report (PDR) describes the IsoDAR electron-antineutrino source in two volumes which are mostly site-independent and describe the cyclotron driver providing a 10 mA/60 MeV proton beam (this Volume); and the medium energy beam transport line (MEBT) and target (Volume II). The IsoDAR driver and target will produce about 1.15 × 10^23 electron-antineutrinos over 5 years while operating with the anticipated 10 mA/60 MeV beam at an estimated 80% duty factor. Paired with a kton-scale liquid scintillator detector, it will enable a broad particle physics program including searches for new symmetries, new interactions and new particles. Here in Volume I, we describe the driver, which includes the ion source, low energy beam transport, and cyclotron. The latter features Radio-Frequency Quadrupole (RFQ) direct axial injection and represents the first accelerator purpose-built to make use of so-called vortex motion.
</description>
<pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163238</guid>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-Based Inverse Problem Approach for Estimating Operating Conditions in Forced Convection Systems with Uncertainty Quantification</title>
<link>https://hdl.handle.net/1721.1/163237</link>
<description>Physics-Based Inverse Problem Approach for Estimating Operating Conditions in Forced Convection Systems with Uncertainty Quantification
Kim, Haeseong; Cetiner, Sacit M; Bucci, Matteo
Accurately determining the operating conditions of thermal systems with limited measurements is a critical challenge in convection-dominated problems of interest for nuclear engineering applications. Because of the complexity of these phenomena, existing research has often relied on data-driven reconstruction of physical quantities. In this work, instead of using a data-driven approach, which usually lacks interpretability, we focus on a physics-based inverse problem to estimate unknown causes from available observations. We address the problem of estimating operating conditions (such as heat source intensity and flow rate) in a steady-state turbulent forced convection system from a limited number of temperature measurements. Based on a forward model with quantified uncertainty, we employed Newton’s method to estimate unknown parameters and incorporated uncertainty quantification. The uncertainty analysis addresses the impact of measurement uncertainty and errors in closure relationships. The identified uncertainties provide insights into their mitigation and inform experimental design. The structured approach to inverse analysis enables accurate estimation with minimal sensor data, as shown in this specific example. The analysis will contribute to the development of advanced sparse sensing techniques, with potential implications for broader industrial and environmental applications.
</description>
<pubDate>Wed, 03 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163237</guid>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>Design-Based Uncertainty for Quasi-Experiments</title>
<link>https://hdl.handle.net/1721.1/163236</link>
<description>Design-Based Uncertainty for Quasi-Experiments
Rambachan, Ashesh; Roth, Jonathan
Design-based frameworks of uncertainty are frequently used in settings where the treatment is (conditionally) randomly assigned. This article develops a design-based framework suitable for analyzing quasi-experimental settings in the social sciences, in which the treatment assignment can be viewed as the realization of some stochastic process but there is concern about unobserved selection into treatment. In our framework, treatments are stochastic, but units may differ in their probabilities of receiving treatment, thereby allowing for rich forms of selection. We provide conditions under which the estimands of popular quasi-experimental estimators correspond to interpretable finite-population causal parameters. We characterize the biases and distortions to inference that arise when these conditions are violated. These results can be used to conduct sensitivity analyses when there are concerns about selection into treatment. Taken together, our results establish a rigorous foundation for quasi-experimental analyses that more closely aligns with the way empirical researchers discuss the variation in the data. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163236</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Golden Dome and Arms Control: Impediment or Opportunity?</title>
<link>https://hdl.handle.net/1721.1/163235</link>
<description>Golden Dome and Arms Control: Impediment or Opportunity?
Vaddi, Pranay R.; Warden, John K.
The Trump administration identified arms control talks with Russia and China as an early priority. At the same time, the US President directed the Defense Department to develop a comprehensive air and missile defense system for the United States, and potentially for forward-deployed forces and allies as well. The interrelationship between strategic offensive and defensive arms will complicate, but not necessarily derail, the administration’s strategic arms control agenda.
</description>
<pubDate>Tue, 15 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163235</guid>
<dc:date>2025-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>Syndicated Lending Relationships, Information Asymmetry, and Market Making in the Secondary Loan Market</title>
<link>https://hdl.handle.net/1721.1/163234</link>
<description>Syndicated Lending Relationships, Information Asymmetry, and Market Making in the Secondary Loan Market
Phillips, Matthew A
This paper investigates why commercial lenders make markets for the loans that they sell on the secondary market. Using loan-level data, I find that origination lenders with extensive borrower relationships and more reputational capital at stake are more likely to serve as market makers. Greater participation of origination lenders as market makers is associated with lower trading costs for their borrowers’ loans. This association remains even in conditions where origination lenders could exploit their information advantage for market making profits. Lenders benefit from being market makers by maintaining strong subsequent lending relationships with their borrowers. Collectively, this evidence is consistent with origination lenders’ participation in the secondary market being motivated by reducing trading frictions rather than market making profits.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163234</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution of the South Pacific's Iron Cycle Over the Cenozoic</title>
<link>https://hdl.handle.net/1721.1/163233</link>
<description>Evolution of the South Pacific's Iron Cycle Over the Cenozoic
Tegler, Logan A.; Horner, Tristan J.; Nielsen, Sune G.; Heard, Andy W.; Squires, Katherine R.; Severmann, Silke; Peucker‐Ehrenbrink, Bernhard; Blusztajn, Jerzy; Dunlea, Ann G.
Iron (Fe) availability impacts marine primary productivity, potentially influencing the efficiency of the biological carbon pump. Stable Fe isotope analysis has emerged as a tool to understand how Fe is sourced and cycled in the water column; however, its application to sediment records is complicated by overlapping isotope signatures of different sources and uncertainties in establishing chronologies. To overcome these challenges, we integrate Fe and osmium isotope measurements with multi-element geochemical analysis and statistical modeling. We apply this approach to reconstruct the history of Fe delivery to the South Pacific from three pelagic clay sequences spanning 93 million years. Our analysis reveals five principal Fe sources—dust, distal background, two distinct hydrothermal inputs, and a magnesium-rich volcanic ash. Initially, hydrothermal inputs dominated Fe deposition, but as the sites migrated away from their respective mid-ocean ridges, other sources became prominent. Notably, from 66 to 40 million years ago (Ma), distal background Fe was the primary source before a shift to increasing dust dominance around 30 Ma. This transition implies that Fe in South Pacific seawater has been dust-dominated since ≈30 Ma, despite extremely low dust deposition rates today. We speculate that the shift to episodic and low Fe fluxes in the South Pacific and Southern Ocean over the Cenozoic helped shape an ecological niche that favored phytoplankton that adapted to these conditions, such as diatoms. Our analysis highlights how Fe delivery to the ocean is driven by large-scale tectonic and climatic shifts, while also influencing climate through its integral role in marine phytoplankton and Earth's biogeochemical cycles.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163233</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Starship as an Enabling Option for a Uranus Flagship Mission</title>
<link>https://hdl.handle.net/1721.1/163232</link>
<description>Starship as an Enabling Option for a Uranus Flagship Mission
Gochenaur, Daniel; Gentgen, Chloe; de Weck, Olivier
In 2022, the National Academy of Sciences Planetary Science Decadal Survey recommended exploration of Uranus as its highest priority Flagship mission for the 2030s. The Decadal recommendation relied on the Uranus Orbiter and Probe (UOP) concept as its baseline for the mission. UOP assumed a launch in 2031 on a Falcon Heavy Expendable rocket and an intermediate Jupiter flyby, allowing it to arrive at Uranus before 2050. At present, it is likely that the original UOP launch will be postponed, which will cause a Jupiter gravity assist to become unavailable and could delay the arrival at Uranus. However, a later launch date allows us to consider launch vehicles currently under development such as SpaceX's Starship, a two-stage heavy-lift launch vehicle that is intended to be refuelable on-orbit. Although Starship's performance capabilities have yet to be demonstrated, current development timelines suggest they will be known before selecting a launch vehicle for a Uranus mission. This study investigates the possibility of leveraging the anticipated capabilities of Starship to support a Flagship mission to Uranus. The results show that with on-orbit refueling, Starship will be capable of performing direct transfer to Uranus without the need for intermediate planetary flybys. Direct transfer with Starship orbit insertion allows nearly five metric tonnes of mass to be deployed to Uranus orbit using nine refueling launches in ten years, compared to more than thirteen years for UOP. If the spacecraft is used to perform the orbit insertion maneuver, five tonnes of mass can be deployed in less than nine years with seven refueling trips. Larger payload masses and shorter times of flight can be achieved by using Starship to perform aerocapture. As a mid- to high lift-to-drag ratio vehicle, Starship can successfully perform aerocapture while maintaining deceleration and heating values that are not more severe than those observed by aerocapture studies for other vehicles.
With seven refueling launches and a seven-year transfer time of flight, Starship can deliver nearly six tonnes of payload mass to Uranus using aerocapture. With a longer time of flight and additional refueling launches, mission masses greater than fifty tonnes can be delivered to Uranus orbit. By using Starship to deploy a spacecraft and probe of a similar design as UOP, the reduced transfer times can facilitate an arrival at Uranus well before equinox, and can enable science phases of up to ten years. Performing the insertion burn with Starship also increases the Δv available for the science tour. Using the UOP architecture would make the mission compatible with both Falcon Heavy and Starship, thereby reducing risk. Alternatively, the additional payload mass that can be deployed to Uranus with Starship can enhance the orbiter and probe architecture beyond the current design, potentially allowing for a larger instrument suite, additional probes, and even a secondary spacecraft. To this end, a Uranus Flagship mission using Starship presents a higher-risk, yet potentially greater-science-return option that could become viable if financial conditions permit.
2025 IEEE Aerospace Conference, 1-8 March, Big Sky, MT, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163232</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Forecasting Research Trends Using Knowledge Graphs and Large Language Models</title>
<link>https://hdl.handle.net/1721.1/163231</link>
<description>Forecasting Research Trends Using Knowledge Graphs and Large Language Models
Tomczak, Maciej; Park, Yang Jeong; Hsu, Chia‐Wei; Brown, Payden; Massa, Dario; Sankowski, Piotr; Li, Ju; Papanikolaou, Stefanos
Since ancient times, oracles (e.g., Delphi) have had the ability to provide useful visions of where society is headed, based on key event correlations and educated guesses. Currently, foundation models are able to distill and analyze enormous text-based data that can be used to understand where societal components are headed in the future. This work investigates the use of three large language models (LLMs) and their ability to aid the research of nuclear materials. Using a large dataset of Journal of Nuclear Materials papers spanning from 2001 to 2021, models are evaluated and compared with perplexity, similarity of output, and knowledge graph metrics such as shortest path length. Models are compared to the highest performer, OpenAI's GPT-3.5. LLM-generated knowledge graphs with more than 2 × 10^5 nodes and 3.3 × 10^5 links are analyzed per publication year, and temporal tracking leads to the identification of criteria for publication innovation, controversy, influence, and future research trends.
</description>
<pubDate>Fri, 12 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163231</guid>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing Cloud Feedbacks Over the Atlantic With Bias‐Corrected Downscaling</title>
<link>https://hdl.handle.net/1721.1/163230</link>
<description>Assessing Cloud Feedbacks Over the Atlantic With Bias‐Corrected Downscaling
Liu, Shuchang; Zeman, Christian; Schär, Christoph
Clouds exert a significant impact on global temperatures and climate change. Cloud-radiative feedback (CRF) is one of the major sources of climate change uncertainty. Understanding CRF is therefore crucial for accurate climate projections. Biases like the double-ITCZ problem in Global Climate Models (GCMs) hamper precise climate projections. Here, we explore a bias-corrected downscaling method to constrain the cloud feedback uncertainties in the tropical and sub-tropical Atlantic region. We use regional climate model (RCM) simulations with convection-permitting resolution, driven by debiased driving fields from three different global climate models (GCMs). Bias-corrected downscaling significantly reduces biases in ITCZ intensity and position, eliminating the double-ITCZ bias across all six experiments (three GCMs for historical and future periods). We explore the new methodology's potential to investigate the CRF in comparison to that of the driving GCMs. Results indicate that additional GCMs and RCMs are necessary for a more comprehensive uncertainty estimation and more conclusive results, while our simulations suggest a potentially narrower range of CRF over the tropical and subtropical Atlantic, primarily due to an improved representation of stratocumulus clouds. Our study highlights the potential of bias-corrected downscaling in constraining the uncertainty of simulations and estimates of cloud feedback and equilibrium climate sensitivity. The results advocate for further simulations with additional RCMs and domains for a more comprehensive analysis.
</description>
<pubDate>Mon, 16 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163230</guid>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptation of Aglycosylated Monoclonal Antibodies for Improved Production in Komagataella phaffii</title>
<link>https://hdl.handle.net/1721.1/163225</link>
<description>Adaptation of Aglycosylated Monoclonal Antibodies for Improved Production in Komagataella phaffii
Yang, Yuchen; Dalvie, Neil C; Brady, Joseph R; Naranjo, Christopher A; Lorgeree, Timothy; Rodriguez‐Aponte, Sergio A; Johnston, Ryan S; Tracey, Mary K; Elenberger, Carmen M; Lee, Eric; Tié, Mark; Love, Kerry R; Love, J Christopher
Monoclonal antibodies (mAbs) are a major class of biopharmaceuticals manufactured by well-established processes using Chinese Hamster Ovary (CHO) cells. Next-generation biomanufacturing using alternative hosts like Komagataella phaffii could improve the accessibility of these medicines, address broad societal goals for sustainability, and offer financial advantages for accelerated development of new products. Antibodies produced by K. phaffii, however, may manifest unique molecular quality attributes, like host-dependent, product-related variants, that could raise potential concerns for clinical use. We demonstrate here conservative modifications to the amino acid sequence of aglycosylated antibodies based on the human IgG1 isotype that minimize product-related variations when secreted by K. phaffii. A combination of 2–3 amino acid changes reduced variations across six different aglycosylated versions of commercial mAbs. Expression of a modified sequence of NIST mAb in both K. phaffii and CHO cells showed comparable biophysical properties and molecular variations. These results suggest a path toward the production of high-quality mAbs that could be expressed interchangeably by either yeast or mammalian cells. Improving molecular designs of proteins to enable a range of manufacturing strategies for well-characterized biopharmaceuticals could accelerate global accessibility and innovations.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163225</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modulation of antigen delivery and lymph node activation in nonhuman primates by saponin adjuvant saponin/monophosphoryl lipid A nanoparticle</title>
<link>https://hdl.handle.net/1721.1/163223</link>
<description>Modulation of antigen delivery and lymph node activation in nonhuman primates by saponin adjuvant saponin/monophosphoryl lipid A nanoparticle
Yousefpour, Parisa; Zhang, Yiming J; Maiorino, Laura; Melo, Mariane B; Arainga Ramirez, Mariluz A; Kumarapperuma, Sidath C; Xiao, Peng; Silva, Murillo; Li, Na; Michaels, Katarzyna K; Georgeson, Erik; Eskandarzadeh, Saman; Kubitz, Michael; Groschel, Bettina; Qureshi, Kashif; Fontenot, Jane; Hangartner, Lars; Nedellec, Rebecca; Love, J Christopher; Burton, Dennis R; Schief, William R; Villinger, Francois J; Irvine, Darrell J
Saponin-based vaccine adjuvants are potent in preclinical animal models and humans, but their mechanisms of action remain poorly understood. Here, using a stabilized HIV envelope trimer immunogen, we carried out studies in nonhuman primates (NHPs) comparing the most common clinical adjuvant aluminum hydroxide (alum) with saponin/monophosphoryl lipid A nanoparticles (SMNP), an immune-stimulating complex–like adjuvant. SMNP elicited substantially stronger humoral immune responses than alum, including 7-fold higher peak antigen-specific germinal center B-cell responses, 18-fold higher autologous neutralizing antibody titers, and higher levels of antigen-specific plasma and memory B cells. Positron emission tomography and computed tomography imaging in live NHPs showed that, unlike alum, SMNP promoted rapid antigen accumulation in both proximal and distal lymph nodes (LNs). SMNP also induced strong type I interferon transcriptional signatures, expansion of innate immune cells, and increased antigen-presenting cell activation in LNs. These findings indicate that SMNP promotes multiple facets of the early immune response relevant for enhanced immunity to vaccination.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163223</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Vaccines combining slow release and follicle targeting of antigens increase germinal center B cell diversity and clonal expansion</title>
<link>https://hdl.handle.net/1721.1/163222</link>
<description>Vaccines combining slow release and follicle targeting of antigens increase germinal center B cell diversity and clonal expansion
Rodrigues, Kristen A; Zhang, Yiming J; Lam, Jonathan; Aung, Aereas; Morgan, Duncan M; Romanov, Anna; Maiorino, Laura; Yousefpour, Parisa; Gibson, Grace; Ozorowski, Gabriel; Gregory, Justin R; Amlashi, Parastoo; Van, Richard; Buckley, Maureen; Ward, Andrew B; Schief, William R; Love, J Christopher; Irvine, Darrell J
Vaccine adjuvants play important roles in shaping the humoral response to immunization. Here, we analyzed mechanisms of action of a clinically relevant combination adjuvant strategy, where phosphoserine (pSer)–tagged immunogens bound to aluminum hydroxide (alum) adjuvant, promoting prolonged antigen release to draining lymph nodes, are combined with a saponin nanoparticle adjuvant termed SMNP, which alters lymph flow and antigen entry into lymph nodes. When used with a stabilized HIV envelope trimer antigen in mice, this combined adjuvant approach promoted substantial enhancements in germinal center and antibody responses relative to either adjuvant alone. Using single-cell RNA and B cell receptor sequencing, we found that the alum-pSer/SMNP combination augmented the clonal expansion and diversity of the germinal center B cell repertoire, coincident with an increased proportion of S-phase germinal center B cells and expression of positive selection markers. Moreover, we found that the combination adjuvant approach, but not alum-pSer delivery or SMNP alone, promoted accumulation of intact antigen on follicular dendritic cells, reflecting integrated effects of slow antigen delivery and altered lymph node uptake. Genetic ablation of Cr1/2 expression by follicular dendritic cells eliminated antigen accumulation and hampered the antigen-specific germinal center response, supporting antigen delivery to these cells as a key mechanism of the improved response elicited by this combination adjuvant. These results demonstrate how adjuvants with complementary mechanisms of action affecting vaccine biodistribution and kinetics can enhance humoral immunity.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163222</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating cell culture media development using Bayesian optimization-based iterative experimental design</title>
<link>https://hdl.handle.net/1721.1/163220</link>
<description>Accelerating cell culture media development using Bayesian optimization-based iterative experimental design
Narayanan, Harini; Hinckley, Joshua A; Barry, Rachel; Dang, Brendan; Wolffe, Lenna A; Atari, Adel; Tseng, Yuen-Yi; Love, J Christopher
Optimizing operational conditions for complex biological systems used in life sciences research and biotechnology is an arduous task. Here, we apply a Bayesian optimization-based iterative framework for experimental design to accelerate cell culture media development for two applications. First, we show that this approach yields new compositions of media with cytokine supplementation to maintain the viability and distribution of human peripheral blood mononuclear cells in culture. Second, we apply this framework to optimize the production of three recombinant proteins in cultivations of K. phaffii. We identified conditions with improved outcomes for both applications compared to the initial standard media, using 3–30 times fewer experiments than estimated for other methods such as standard Design of Experiments. Subsequently, we also demonstrated the extensibility of our approach to efficiently account for additional design factors through transfer learning. These examples demonstrate how coupling data collection, modeling, and optimization in this iterative paradigm, with an exploration–exploitation trade-off in each iteration, can reduce the time and resources needed for complex optimization tasks such as those demonstrated here.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163220</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Emerging immunomodulatory strategies for cell therapeutics</title>
<link>https://hdl.handle.net/1721.1/163219</link>
<description>Emerging immunomodulatory strategies for cell therapeutics
Chua, Corrine Ying Xuan; Jiang, Allen Yujie; Eufrásio-da-Silva, Tatiane; Dolatshahi-Pirouz, Alireza; Langer, Robert; Orive, Gorka; Grattoni, Alessandro
Cellular therapies are poised to transform the field of medicine by restoring dysfunctional tissues and treating various diseases in a dynamic manner not achievable by conventional pharmaceutics. Spanning various therapeutic areas inclusive of cancer, regenerative medicine, and immune disorders, cellular therapies comprise stem or non-stem cells derived from various sources. Despite numerous clinical approvals or trials underway, the host immune response presents a critical impediment to the widespread adoption and success of cellular therapies. Here, we review current research and clinical advances in immunomodulatory strategies to mitigate immune rejection or promote immune tolerance to cellular therapies. We discuss the potential of these immunomodulatory interventions to accelerate translation or maximize the prospects of improving therapeutic outcomes of cellular therapies for clinical success.
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163219</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Long-Time Quantum–Classical Correspondence for Open Systems in Trace Norm</title>
<link>https://hdl.handle.net/1721.1/163217</link>
<description>Long-Time Quantum–Classical Correspondence for Open Systems in Trace Norm
Li, Zhenhao
We consider a frictionless system coupled to an external Markovian environment. The quantum and classical evolution of such systems are described by the Lindblad and the Fokker–Planck equation, respectively. We show that when such a system is given by an at most quadratically growing Hamiltonian and at most linearly growing real jump functions, the quantum and classical evolutions remain close on time scales much longer than Ehrenfest time. In particular, we show that the evolution of a density matrix by the Lindblad equation is close in trace norm to the quantization of the corresponding evolution by the Fokker–Planck equation. Such agreement improves upon recent results (Galkowski and Zworski, Classical quantum correspondence in Lindblad evolution, 2024, arXiv:2403.09345; Hernández et al., Decoherence ensures classicality beyond the Ehrenfest time as ħ → 0, 2023, arXiv:2306.13717; Hernández et al., The limit of open quantum systems with general Lindbladians: vanishing noise ensures classicality beyond the Ehrenfest time, 2023, arXiv:2307.05326), which proved long-time agreement in weaker norms.
</description>
<pubDate>Thu, 21 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163217</guid>
<dc:date>2025-08-21T00:00:00Z</dc:date>
</item>
<item>
<title>Search for vector-like leptons with long-lived particle decays in the CMS muon system in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/163216</link>
<description>Search for vector-like leptons with long-lived particle decays in the CMS muon system in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.
A first search is presented for vector-like leptons (VLLs) exclusively decaying into a light long-lived pseudoscalar boson and a standard model τ lepton. The pseudoscalar boson is assumed to have a mass below the τ+τ− threshold, so that it decays exclusively into two photons. It is identified using the CMS muon system. The analysis is carried out using a data set of proton-proton collisions at a center-of-mass energy of 13 TeV collected by the CMS experiment in 2016–2018, corresponding to an integrated luminosity of 138 fb−1. Selected events contain at least one pseudoscalar boson decaying electromagnetically in the muon system and at least one hadronically decaying τ lepton. No significant excess of data events is observed compared to the background expectation. Upper limits are set at 95% confidence level on the vector-like lepton production cross section as a function of the VLL mass and the pseudoscalar boson mean proper decay length. The observed and expected exclusion ranges of the VLL mass extend up to 700 and 670 GeV, respectively, depending on the pseudoscalar boson lifetime.
</description>
<pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163216</guid>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Percolation effects on fracture in ductile-phase toughened oxide coatings</title>
<link>https://hdl.handle.net/1721.1/163215</link>
<description>Percolation effects on fracture in ductile-phase toughened oxide coatings
Gupta, Isha; Kpamegan, Aliya K.; Vaidyanathan, Annika M. L.; Cordero, Zachary C.
The toughness and damage behaviors of ductile-phase toughened oxide coatings were characterized as the reinforcement volume fraction was varied across the percolation threshold. The coatings, consisting of Ni particles in a borate glass-ceramic matrix, showed a rising resistance curve, with the extent of stable crack growth increasing with Ni content. While initiation toughness was relatively insensitive to reinforcement topology, peak toughness increased sharply once the Ni reinforcement percolated, reaching a maximum value of ~160 J/m² in an interpenetrating composite coating with 35 vol% Ni. This toughness is sufficiently high to resist failure in the target application of rocket engine turbomachinery, where coatings must withstand rapid thermal transients upon engine startup and shutdown. Characterization of the crack path confirmed that this toughening increment corresponded to a transition from crack deflection to crack bridging as the dominant toughening mechanism. The implications of these results for the design of ductile-phase toughened coatings are discussed. Graphical abstract: Double-cantilever beam specimens with the ductile-phase toughened oxide coating as an interlayer between the two beams.
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163215</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>AI Challenge for Satellite Pattern-of-Life Identification: Dataset, Design and Results</title>
<link>https://hdl.handle.net/1721.1/163214</link>
<description>AI Challenge for Satellite Pattern-of-Life Identification: Dataset, Design and Results
Siew, Peng M.; Solera, Haley E.; Lavezzi, Giovanni; Roberts, Thomas G.; Jang, Daniel; Baldsiefen, David; Tran, Binh; Yeung, Christopher; Johnson, Kurtis; Metzger, Nathan; Porcher, Francois; Haik, Isaac; Rodriguez-Fernandez, Victor; Folcik, Zachary; Price, Jeffrey
Despite the availability of extensive historical data on Earth-orbiting objects, artificial intelligence (AI) adoption in space domain awareness remains limited. To address this gap, the 2024 MIT ARCLab Prize for AI Innovation in Space challenged participants to develop AI models for characterizing satellite pattern-of-life (PoL) in Geostationary Earth Orbit. The challenge focused on developing machine learning models capable of classifying behavioral patterns and detecting key transition events in multivariate time-series data. The challenge dataset comprised 2402 satellite trajectories spanning six months at a two-hour temporal resolution. The data were generated using high-fidelity satellite propagators based on simulated trajectories, Vector Covariance Message data, and two-line elements. This dataset features diverse operational behaviors and propulsion systems, providing a robust foundation for AI analysis. The challenge attracted over 100 teams worldwide, with more than 350 submissions showcasing a diverse range of AI approaches, including deep learning architectures (CNNs, LSTMs, transformers), gradient-boosting techniques (XGBoost, CatBoost), and hybrid models. The top-performing teams demonstrated AI’s effectiveness in PoL characterization, with Hawaii2024 achieving an F2 score of 0.952 on the partial test set using a CNN-LSTM hybrid approach, followed closely by Millennial-IUP, which used XGBoost with tailored transition labeling, and QR_Is, which used a gradient-boosted decision tree with a model-stacking strategy. This paper presents an analysis of the competition’s dataset, evaluation methodology, and top-performing solutions.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163214</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of smart imaging runtime</title>
<link>https://hdl.handle.net/1721.1/163213</link>
<description>Analysis of smart imaging runtime
Athey, Thomas; Sawmya, Shashata; Meirovitch, Yaron; Schalek, Richard; Potocek, Pavel; Chandok, Ishaan; Peemen, Maurice; Lichtman, Jeff; Samuel, Aravinthan; Shavit, Nir
Smart microscopy is a new imaging approach that involves rapid imaging, prediction of important subregions, and then selective re-imaging. This approach has been validated as a means of reducing imaging beam time in electron microscopy connectomics, but the speedup depends on various imaging workflow parameters. Here we present the first runtime analysis of traditional vs. smart microscopy and show how these parameters can magnify or diminish potential time savings. We provide a GUI application that calculates the theoretical time savings of smart microscopy from user-input parameters describing an imaging workflow. Finally, we measure the end-to-end runtime of SmartEM acquisition on an electron microscope to demonstrate two strategies for faster acquisition: mixed-precision neural networks and parallelization of microscope and support-computer operations.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163213</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>The ππ scattering amplitude at large Nc</title>
<link>https://hdl.handle.net/1721.1/163212</link>
<description>The ππ scattering amplitude at large Nc
Baeza-Ballesteros, Jorge; Hernández, Pilar; Romero-López, Fernando
We study the scaling of meson-meson scattering amplitudes with the number of colors, Nc. We use lattice calculations in a theory with Nf = 4 degenerate flavors, with Nc = 3–6 and pion mass Mπ ≈ 560 MeV. We focus on three different scattering channels, two of which have the same quantum numbers as some tetraquark candidates recently found at LHCb: the T_{cs0}(2900)^0, T_{cs̄0}(2900)^{++}, T_{cs̄0}(2900)^0, and T_{cs1}(2900)^0 states. Finite-volume energies are extracted using a large set of operators, containing two-particle operators with the form of two pions or two vector mesons, and local tetraquark operators. The resulting energy spectra are used to constrain the infinite-volume scattering amplitude by means of Lüscher’s quantization condition. We consider polynomial parametrizations of the phase shift, as well as one-loop chiral perturbation theory (ChPT) predictions. We find that our lattice results follow the expected Nc scaling and are sensitive to subleading Nc corrections. In addition, we constrain the scaling of different combinations of low-energy constants by matching to large-Nc ChPT. The results for the channel corresponding to a π+Ds+ − K+D+ state show evidence of a virtual bound state with energy Evirtual = 1.63(10)Mπ for Nc = 3, while this pole disappears at Nc &gt; 3. This may be connected to the exotic states found in experiment.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163212</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Semi-automated last touch detection for out-of-bounds possession decisions in football</title>
<link>https://hdl.handle.net/1721.1/163210</link>
<description>Semi-automated last touch detection for out-of-bounds possession decisions in football
Wang, Henry; Mills, Katie; Billingham, Johsan; Robertson, Sam; Hosoi, A. E.
Football referees must make quick and accurate decisions in unforgiving environments. In parallel, advances in optical tracking have created new avenues for technology-assisted officiating. Using skeletal and ball tracking data, we present a novel diphase framework for Semi-automated Last Touch detection, designed to help referees adjudicate out-of-bounds possession decisions where player and ball occlusion may pose challenges. The proposed methodology uses a touch probability model to find the decision frame of the last touch before the ball goes out-of-bounds, and rules-based or supervised learning algorithms predict the player responsible for the touch. Leveraging principles of kinematics, human anthropometry, and machine learning, the models predict the correct possession decision with up to 82.5% accuracy on a test dataset of duels from the 2022 FIFA World Cup, including over 90% for aerial duels. Our results represent potential improvements over the human performance reported in previous literature and provide a baseline benchmark for future studies.
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163210</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Small radius inclusive jet production at the LHC through NNLO+NNLL</title>
<link>https://hdl.handle.net/1721.1/163209</link>
<description>Small radius inclusive jet production at the LHC through NNLO+NNLL
Generet, Terry; Lee, Kyle; Moult, Ian; Poncelet, Rene; Zhang, Xiaoyuan
The study of hadronic jets and their substructure at hadronic colliders is crucial for improving our understanding of QCD and for searches for new physics. As such, there has been a significant effort to improve their theoretical description. In the small-radius limit, inclusive jet production exhibits a universal factorization, enabling the resummation of logarithms that greatly stabilizes theoretical predictions. In this paper, we show how to combine a recently introduced framework for small-R resummation with the Stripper subtraction formalism for fragmentation, enabling next-to-next-to-leading order calculations of small-R inclusive jet production for a wide variety of processes at the LHC. We extract the two-loop constants for the jet functions, enabling for the first time next-to-next-to-leading logarithmic resummation matched to next-to-next-to-leading order perturbative calculations. We compare with CMS data for small-R jet production, and find that our results greatly improve the accuracy of the predictions at small R and stabilize the perturbative convergence and error estimates at larger R. Our approach is applicable to a wide class of jet substructure observables exhibiting similar factorization theorems, opening the door to an NNLO jet substructure program at the LHC.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163209</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Endosomolytic Peptides Enable the Cellular Delivery of Peptide Nucleic Acids</title>
<link>https://hdl.handle.net/1721.1/163208</link>
<description>Endosomolytic Peptides Enable the Cellular Delivery of Peptide Nucleic Acids
Giancola, JoLynn B.; Raines, Ronald T.
Precision genetic medicine enlists antisense oligonucleotides (ASOs) to bind to nucleic acid targets important for human disease. Peptide nucleic acids (PNAs) have many desirable attributes as ASOs but lack cellular permeability. Here, we use an assay based on the corrective splicing of an mRNA to assess the ability of synthetic peptides to deliver a functional PNA into a human cell. We find that the endosomolytic peptides L17E and L17ER4 are highly efficacious delivery vehicles. Co-treatment of a PNA with low micromolar L17E or L17ER4 enables robust corrective splicing in nearly all treated cells. Peptide–PNA conjugates are even more effective. These results enhance the utility of PNAs as research tools and potential therapeutic agents.
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163208</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Archive Labeling Sequences</title>
<link>https://hdl.handle.net/1721.1/163207</link>
<description>Archive Labeling Sequences
Khovanova, Tanya; Marton, Gregory
What follows is the story of a family of integer sequences, which started life as a Google interview puzzle back in the previous century when VHS video tapes were in use.
</description>
<pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163207</guid>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>Rivers Influence Reef Pass Formation in the Society Islands</title>
<link>https://hdl.handle.net/1721.1/163205</link>
<description>Rivers Influence Reef Pass Formation in the Society Islands
Gillen, Megan N; Ashton, Andrew D; Perron, J Taylor
Reef passes are deep, navigable channels dissecting coral reefs around volcanic islands. Many reef passes are located offshore of large island river basins, suggesting a potential causal relationship. To clarify the mechanisms that form and maintain reef passes, we quantify the relationships between reef pass location and drainage basin size in the Society Islands. River basins draining toward reef passes are larger than those draining toward unbroken reef flats, suggesting that rivers help create and sustain reef passes. The correlation between reef passes and large rivers weakens for older islands, suggesting that oceanographic processes increasingly maintain passes as islands age and subside. We propose two river-driven reef pass formation mechanisms: reef incision, in which rivers erode into reefs during sea-level lowstands, and reef encroachment, in which corals growing in lower-elevation submerged river valleys preferentially drown during periods of rapid sea-level rise, leaving gaps in the accreting reef.
</description>
<pubDate>Sat, 14 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163205</guid>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>“Lab‐Quakes”: Quantifying the Complete Energy Budget of High‐Pressure Laboratory Failure</title>
<link>https://hdl.handle.net/1721.1/163204</link>
<description>“Lab‐Quakes”: Quantifying the Complete Energy Budget of High‐Pressure Laboratory Failure
Ortega‐Arroyo, Daniel; O'Ghaffari, Hoagy; Peč, Matěj; Gong, Zheng; Fu, Roger R; Ohl, Markus; Cattania, Camilla; Plümper, Oliver
Understanding the interplay of various energy sinks during seismic fault slip is essential for advancing earthquake physics and improving hazard assessment. However, quantifying the energy consumed by major dissipative processes remains a challenge. In this study, we investigate energy partitioning during laboratory earthquakes (“lab-quakes”) by performing general shear stick-slip experiments on synthetic granitic cataclasites at elevated confining pressure. Using ultrasound, microstructural, and novel magnetism-based thermal analyses, we independently quantified the energy allocated to seismic radiation, new surfaces, and heat dissipation. These estimates showed good agreement with far-field measurements of mechanical work during the lab-quake. Our findings revealed that, under the experimental conditions, the majority of the released energy (68%–98%) is dissipated as heat, while seismic radiation accounts for 1%–8%, and the creation of new surfaces consumes &lt;1%–32%. Microstructural observations indicate that pre-failure deformation, which includes comminution and development of the principal slip zone, significantly influences energy partitioning. This effect is further evident in the measured shear stress drops, where events with higher stress drops proportionally emitted more energy as seismic waves. This study is the first to constrain the full energy budget of lab-quakes from an observational standpoint, providing critical insights into the dynamics of fault rupture and energy dissipation processes.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163204</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>A High-Precision Analytical Technique for Dissolved N2 Isotopes in Aquatic Systems: Biogeochemical Applications and Determination of Solubility Equilibrium Isotope Effects</title>
<link>https://hdl.handle.net/1721.1/163203</link>
<description>A High-Precision Analytical Technique for Dissolved N2 Isotopes in Aquatic Systems: Biogeochemical Applications and Determination of Solubility Equilibrium Isotope Effects
McPaul, Katelyn; Wankel, Scott D.; Seltzer, Alan M.
Rationale
The isotopic composition of dissolved dinitrogen gas (δ15N-N2) in water can offer a powerful constraint on the sources and pathways of nitrogen cycling in aquatic systems. However, because of the large amount of atmosphere-derived dissolved N2 in these systems, high-precision (on the order of 0.001‰) measurements of N2 isotopes paired with inert gas measurements are required to disentangle atmospheric and biogeochemical signals. Additionally, the solubility equilibrium isotope fractionation of N2 and its temperature and salinity dependence are underconstrained at this level of precision.
Methods
We introduce a new technique for sample collection, processing, and dynamic dual-inlet mass spectrometry allowing for high-precision measurement of δ15N-N2 and δ(N2/Ar) with simultaneous measurement of δ(40Ar/36Ar) and δ(Kr/N2) in water. We evaluate the reproducibility of this technique and employ it to redetermine the solubility equilibrium isotope effects for dissolved N2 across a range of temperatures and salinities.
Results
Our technique achieves measurement reproducibility (1σ) for δ15N-N2 (0.006‰) and δ(N2/Ar) (0.41‰) suitable for tracing biogeochemical nitrogen cycling in aquatic environments. Through a series of air–water equilibration experiments, we find a N2 solubility equilibrium isotope effect (ε(‰) = (α − 1) × 1000, where α = (29N2/28N2)dissolved/(29N2/28N2)gas) in water of ε(‰) = 0.753 − 0.004·T, where T is the temperature (°C), with uncertainties on the order of 0.001‰ over the temperature range of ~2°C–23°C and salinity range of ~0–30 psu. We find no apparent dependence of ε on salinity.
Conclusions
Our new method allows for high-precision measurements of the isotopic composition of dissolved N2 and Ar, and of dissolved N2/Ar and Kr/N2 ratios, within the same sample. Pairing measurements of N2 with inert gases facilitates the quantification of excess N2 from biogeochemical sources and its isotopic composition. This method allows for a wide range of applications in marine, coastal, and freshwater environments to characterize and quantitatively constrain potential nitrogen-cycling sources and pathways and to differentiate between physical and biological isotope signals in these systems.
</description>
<pubDate>Tue, 17 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163203</guid>
<dc:date>2025-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Linking Lattice Strain and Fractal Dimensions to Non‐monotonic Volume Changes in Irradiated Nuclear Graphite</title>
<link>https://hdl.handle.net/1721.1/163202</link>
<description>Linking Lattice Strain and Fractal Dimensions to Non‐monotonic Volume Changes in Irradiated Nuclear Graphite
Sprouster, David J; Fayfar, Sean; Rai, Durgesh K; Campbell, Anne; Ilavsky, Jan; Snead, Lance L; Khaykovich, Boris
Graphite's resilience to high temperatures and neutron damage makes it vital for nuclear reactors, yet irradiation alters its microstructure, degrading key properties. We used small- and wide-angle X-ray scattering to study neutron-irradiated fine-grain nuclear graphite (Grade G347A) across varied temperatures and fluences. Results show significant shifts in internal strain and porosity, correlating with radiation-induced volume changes. Notably, porosity volume distribution (fractal dimensions) follows non-monotonic volume changes, suggesting a link to the Weibull distribution of fracture stress.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163202</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Computing Skinning Weights via Convex Duality</title>
<link>https://hdl.handle.net/1721.1/163201</link>
<description>Computing Skinning Weights via Convex Duality
Solomon, J; Stein, O
We study the problem of optimising for skinning weights through the lens of convex duality. In particular, we show that the popular bounded biharmonic weight (BBW) model for skinning is dual to a non-negative least-squares problem, which is amenable to efficient solution via iterative algorithms; the final weights are then recoverable via a closed-form expression. Our formulation maintains convexity and is provably equivalent to the original problem. We also provide theoretical discussion giving intuition for the dual problem in the smooth case. Our final algorithm, which can be implemented in a few lines of code, achieves efficient convergence times relative to generic quadratic programming tools applied to the primal problem, without nonconvex formulations, relaxations or specialised optimisation techniques.
</description>
<pubDate>Thu, 25 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163201</guid>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Saving and Letting Live</title>
<link>https://hdl.handle.net/1721.1/163200</link>
<description>Saving and Letting Live
Byrne, Thomas
There is a metaphysical difference between person A killing person B and A merely letting B die. There is also a metaphysical difference between A saving B and A merely letting B live. This paper argues that the metaphysical difference between saving and letting live gives rise to a moral difference. It then puts that moral difference to work: for example, it accounts for the long-felt moral difference between failing to rescue a drowning child and failing to donate $4000 to Oxfam (sufficient for them, in the aggregate, to prevent a child’s death).
</description>
<pubDate>Thu, 31 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163200</guid>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancement of Superconductivity in WP via Oxide-Assisted Chemical Vapor Transport</title>
<link>https://hdl.handle.net/1721.1/163198</link>
<description>Enhancement of Superconductivity in WP via Oxide-Assisted Chemical Vapor Transport
Campbell, Daniel J.; Lin, Wen-Chen; Collini, John; Eo, Yun Suk; Anand, Yash; Saha, Shanta; Graf, David; Zavalij, Peter Y.; Paglione, Johnpierre
Tungsten monophosphide (WP) has been reported to superconduct below 0.8 K, and theoretical work has predicted an unconventional Cooper pairing mechanism. Here we present data for WP single crystals grown by means of chemical vapor transport (CVT) of WO3, P, and I2. In comparison to synthesis using WP powder as a starting material, this technique results in samples with substantially decreased low-temperature scattering and favors a more three-dimensional morphology. We also find that the resistive superconducting transitions in these samples begin above 1 K. Variation in Tc is often found in strongly correlated superconductors, and its presence in WP could be the result of influence from a competing order and/or a non-s-wave gap.
</description>
<pubDate>Sun, 28 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163198</guid>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Imbalances Between Striosome and Matrix Compartments Characterize the Pathogenesis and Pathophysiology of Huntington’s Disease Model Mouse</title>
<link>https://hdl.handle.net/1721.1/163197</link>
<description>Molecular Imbalances Between Striosome and Matrix Compartments Characterize the Pathogenesis and Pathophysiology of Huntington’s Disease Model Mouse
Morigaki, Ryoma; Yoshida, Tomoko; Fujikawa, Joji; Crittenden, Jill R.; Graybiel, Ann M.
The pathogenesis and pathophysiology of Huntington’s disease (HD) are still incompletely understood, despite the remarkable advances in identifying the molecular effects of the Htt mutation in this disease. Clinical positron emission tomography studies suggest that phosphodiesterase 10A (PDE10A) declines earlier than dopamine D1 and D2 receptors in HD, indicating that it might serve as a key molecular marker in understanding disease mechanisms. In movement disorders, mutations in the genes encoding PDE10A and G-protein α subunit (Gαolf), both critical cAMP regulators in striatal spiny projection neurons, have been linked to chorea and dystonia. These observations highlight the potential importance of striatal cyclic AMP (cAMP) signaling in these disorders, but how such dysfunction could arise is unknown. Here, we suggest that a key to understanding signaling dysfunction might be to evaluate these messenger systems in light of the circuit-level compartmental organization of the caudoputamen, in which there is particular vulnerability of the striosome compartment in HD. We developed machine learning algorithms to define with high precision and reproducibility the borders of striosomes in the brains of Q175 knock-in (Q175KI) HD mice from 3–12 months of age. We demonstrate that the expression of multiple molecules, including Gαolf, PDE10A, dopamine D1 and D2 receptors, and adenosine A2A receptors, is significantly reduced in the striosomes of Q175KI mice as compared to wildtype controls, across 3, 6, and 12 months of age. By contrast, mu-opioid receptor (MOR1) expression is uniquely upregulated, suggesting a compartment-specific and age-dependent shift in molecular profiles in the Q175KI HD mouse model caudoputamen. These differential changes may serve as a useful platform to determine factors underlying the greater vulnerability of striatal projection neurons in the striosomes than in the matrix in HD.
</description>
<pubDate>Tue, 02 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163197</guid>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Field-Scale Rice Area and Yield Mapping in Sri Lanka with Optical Remote Sensing and Limited Training Data</title>
<link>https://hdl.handle.net/1721.1/163196</link>
<description>Field-Scale Rice Area and Yield Mapping in Sri Lanka with Optical Remote Sensing and Limited Training Data
Özdoğan, Mutlu; Wang, Sherrie; Ghose, Devaki; Fraga, Eduardo; Fernandes, Ana; Varela, Gonzalo
Rice is a staple crop for over half the world’s population, and accurate, timely information on its planted area and production is crucial for food security and agricultural policy, particularly in developing nations like Sri Lanka. However, reliable rice monitoring in regions like Sri Lanka faces significant challenges due to frequent cloud cover and the fragmented nature of smallholder farms. This research introduces a novel, cost-effective method for mapping rice-planted area and yield at field scales in Sri Lanka using optical satellite data. The rice-planted fields were identified and mapped using a phenologically tuned image classification algorithm that highlights rice presence by observing water occurrence during transplanting and vegetation activity during subsequent crop growth. To estimate yields, a random forest regression model was trained at the district level by incorporating a satellite-derived chlorophyll index and environmental variables and subsequently applied at the field level. The approach has enabled the creation of two decades (2000–2022) of reliable, field-scale rice area and yield estimates, achieving map accuracies between 70% and over 90% and yield estimates with less than 20% error. These highly granular results, which are not available through traditional surveys, show a strong correlation with government statistics. They also demonstrate the advantages of a rule-based, phenology-driven classification over purely statistical machine learning models for long-term consistency in dynamic agricultural environments. This work highlights the significant potential of remote sensing to provide accurate and detailed insights into rice cultivation, supporting policy decisions and enhancing food security in Sri Lanka and other cloud-prone regions.
</description>
<pubDate>Tue, 02 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163196</guid>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Generalized Pitman–Stanley Polytope: Vertices and Faces</title>
<link>https://hdl.handle.net/1721.1/163195</link>
<description>Generalized Pitman–Stanley Polytope: Vertices and Faces
Dugan, William T.; Hegarty, Maura; Morales, Alejandro H.; Raymond, Annie
In 1999, Pitman and Stanley introduced the polytope bearing their name along with a study of its faces, lattice points, and volume. The Pitman–Stanley polytope is well-studied due to its connections to probability, parking functions, the generalized permutahedra, and flow polytopes. Its lattice points correspond to plane partitions of skew shape with entries 0 and 1. Pitman and Stanley remarked that their polytope can be generalized so that lattice points correspond to plane partitions of skew shape with entries 0, 1, …, m. Since then, this generalization has remained untouched. We study this generalization and show that it can also be realized as a flow polytope of a grid graph. We give multiple characterizations of its vertices in terms of plane partitions of skew shape and integer flows. For a fixed skew shape, we show that the number of vertices of this polytope is a polynomial in m whose leading term, in certain cases, counts standard Young tableaux of a skew shifted shape. Moreover, we give formulas for the number of faces, as well as generating functions for the number of vertices.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163195</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>From stellar light to astrophysical insight: automating variable star research with machine learning</title>
<link>https://hdl.handle.net/1721.1/163194</link>
<description>From stellar light to astrophysical insight: automating variable star research with machine learning
Audenaert, Jeroen
Large-scale photometric surveys are revolutionizing astronomy by delivering unprecedented amounts of data. The rich data sets from missions such as the NASA Kepler and TESS satellites, and the upcoming ESA PLATO mission, are a treasure trove for stellar variability, asteroseismology and exoplanet studies. In order to unlock the full scientific potential of these massive data sets, automated data-driven methods are needed. In this review, I illustrate how machine learning is bringing asteroseismology toward an era of automated scientific discovery, covering the full cycle from data cleaning to variability classification and parameter inference, while highlighting the recent advances in representation learning, multimodal datasets and foundation models. This invited review offers a guide to the challenges and opportunities machine learning brings for stellar variability research and how it could help unlock new frontiers in time-domain astronomy.
</description>
<pubDate>Thu, 24 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163194</guid>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Hidden causality in Modern Greek</title>
<link>https://hdl.handle.net/1721.1/163193</link>
<description>Hidden causality in Modern Greek
Tsilia, Anastasia
This paper explores the syntax and semantics of an attitudinal construction in Modern Greek (MG), where an attitude verb takes an accusative object followed by a complement clause. Building on existing syntactic literature (e.g., Hadjivassiliou et al. in 13th international symposium on theoretical and applied linguistics, Aristotle University of Thessaloniki, Thessaloniki, pp. 70–80, 2000; Kotzoglou in Reading Working Papers in Linguistics 6:39–56, 2002; Kotzoglou in Selected papers on theoretical and applied linguistics from 22nd ISTAL, Aristotle University of Thessaloniki, Thessaloniki, pp. 299–315, 2017; Kotzoglou and Papangeli in New horizons in the analysis of raising and control, Springer, Dordrecht, pp. 111–131, 2007), I show that the accusative object is base-generated higher than the lower clause. Yet, I show that it semantically behaves as if it is part of the intensionalized argument of the attitude verb, giving rise to de dicto readings (Tsilia in Proceedings of Sinn und Bedeutung 27, pp. 655–673, 2023). Building on this and on a causal semantic requirement associated with the accusative object, I suggest a clausal analysis of the phenomenon. More specifically, under this analysis the accusative object is the subject of a small intermediate vP clause headed by a silent proleptic cause, which then takes the complement clause as its object. This contributes to the literature suggesting that hidden clauses are cross-linguistically attested and can solve intensionality paradoxes (den Dikken et al. in Non-propositional intentionality, Oxford Academic, Oxford, pp. 46–94, 2018), as well as to the literature on prolepsis (Davies in Language 81:645–665, 2005; Salzmann in The Wiley-Blackwell companion to syntax, Blackwell, Malden, vol. 5, pp. 3203–3245, 2017a; Deal in Semantics and Linguistic Theory 28:622–648, 2018; Dawson and Deal in Proceedings of Sinn und Bedeutung 23, pp. 329–346, 2019) showing that proleptic constructions may have varying interpretations and syntactic analyses cross-linguistically.
</description>
<pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163193</guid>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Type II RR string fields and exotic diffeomorphisms</title>
<link>https://hdl.handle.net/1721.1/163192</link>
<description>Type II RR string fields and exotic diffeomorphisms
Mamade, Raji A.; Zwiebach, Barton
We study the theory of massless fields of type II strings arising from the string field theory that uses two string fields, a physical one and an extra one that allows the writing of an action, but whose degrees of freedom ultimately decouple. The mechanism allowing the description of the self-dual five-form of type IIB, anticipated by Sen, is used by the SFT to describe all Ramond-Ramond forms in type IIB and IIA in a manifestly duality-invariant way. We find explicit expressions for the leading terms in the gauge transformation of the RR fields and focus on diffeomorphisms, which are exotic for both the physical and the extra fields, perhaps as needed to describe propagating degrees of freedom that do not gravitate. The algebra of diffeomorphisms includes field-dependent structure constants and only closes on-shell, as predicted by the type II SFT gauge algebra.
</description>
<pubDate>Fri, 05 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163192</guid>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Maintenance of core temperature in SCUBA divers in cold water: contributions of anthropometrics, suit type, and sex</title>
<link>https://hdl.handle.net/1721.1/163191</link>
<description>Maintenance of core temperature in SCUBA divers in cold water: contributions of anthropometrics, suit type, and sex
Orman, Tucker; Bradbury, Karleigh E.; Grosshennig, Tim; Perez, Makayla; Möller, Fabian N.; Dujić, Željko; Lovering, Andrew T.
Maintenance of core temperature (Tc) is vital for health and physiological function while SCUBA diving in cold water, but there is little research investigating the influence of anthropometrics, suit type, and sex on the rate of change in Tc during real-world diving conditions. We measured the rate of change in Tc (telemetric pill) and thermal sensation (Ts; Young questionnaire) in 62 participants (32 female) before and after non-decompression SCUBA dives using open circuit apparatus breathing air at varied depths and durations in cold water (~ 10 °C). Twenty-three participants wore drysuits (11F), and 39 participants wore wetsuits (21F). There was a significant effect of suit type on the rate of change in Tc, with those in wetsuits having a greater decrease in Tc than those in drysuits. However, there was no effect of suit type on the rate of change in Ts. In wetsuit and drysuit groups, there were significant associations between Tc/min and BSA/BM, BMI, and BM. Estimated body fat % (BF%) was significantly associated with the rate of change in Tc in the wetsuit group only. When separated by sex, there were significant associations with all the anthropometric variables and the rate of change in Tc in the female participants, but only with BM in the wetsuit males. These results suggest that drysuits offer greater thermal protection compared to wetsuits in 10 °C water, and anthropometrics should be considered when selecting the degree of thermal protection, especially for female divers.
</description>
<pubDate>Thu, 04 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163191</guid>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Major-element, trace-element and sulfur-isotope evidence for arc-like magmatism in the 4.0–2.9 Ga Acasta Gneiss Complex</title>
<link>https://hdl.handle.net/1721.1/163190</link>
<description>Major-element, trace-element and sulfur-isotope evidence for arc-like magmatism in the 4.0–2.9 Ga Acasta Gneiss Complex
Beaudry, Patrick; Jagoutz, Oliver; Bauer, Ann M.; Rezeau, Hervé; Reimink, Jesse R.; Grove, Timothy L.; Izon, Gareth; Ono, Shuhei
The Acasta Gneiss Complex (AGC) in northwestern Canada comprises Earth’s oldest known evolved crust, with zircon U–Pb ages up to 4.03 Ga. Several pulses of crustal generation and metamorphism are preserved in tonalitic and granitic gneisses spanning over one billion years, along with mafic and ultramafic rocks of unknown age. Major elements, trace elements and radiogenic isotope signatures have been invoked to suggest that these rocks preserve the local onset of horizontal tectonic processes. However, the behavior and influence of volatiles, which have a defining role in modern arc magmatism, remain unconstrained. Here we combine new whole-rock major- and trace-element data with multiple sulfur isotope analyses in 4.0–2.9 Ga Acasta gneisses and spatially associated mafic and ultramafic rocks to investigate the petrogenesis of the AGC. We use a recently published major element-based melt hygrometer to estimate dissolved water contents for all published plagioclase-saturated Acasta meta-igneous rocks, and find modes at &lt; 0.5 wt.% and 5 wt.% H2O, similar to modern arc magmas. Tholeiitic and calc-alkaline trends are both present, with the former being more prominent in the oldest (ca. 4.0 Ga) samples and in mafic rocks. Zircon trace element oxybarometry reveals a shift towards more oxidized magmatic conditions by 3.75 Ga. Sulfur isotopes record a limited range in δ34S values, suggesting a common igneous end-member at ~+1‰, and positively correlate with calculated H2O contents, with more positive values (up to +5‰) appearing in the Paleoarchean (&lt; 3.6 Ga). The Eoarchean (4.0–3.6 Ga) δ34S values are consistent with a precursor Hadean crust having an enriched sulfur isotope signature, possibly resulting from hydrous alteration or from isotopic fractionation during its formation. The temporal progression to more positive δ34S values is consistent with a shift towards more hydrous and oxidized magmatic differentiation. Most samples have near-zero Δ33S values that fall along a mass-dependent fractionation (MDF) array, but one 3.5 Ga metasedimentary sample has a negative MIF Δ33S signature of −0.60 ± 0.01‰. Additionally, two granitic gneisses dated at 3.3 and 2.9 Ga preserve small positive MIF Δ33S values of +0.08 ± 0.02‰, which could reflect recycling of sedimentary material via subduction by 3.3 Ga. Overall, our data indicate that the Acasta Gneiss Complex preserves several modes of crustal generation evolving over time, with an increasing importance of deep hydrous magmatism by 3.75 Ga and of sedimentary inputs by 3.3 Ga.
</description>
<pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163190</guid>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>The Future of Drug Delivery</title>
<link>https://hdl.handle.net/1721.1/163189</link>
<description>The Future of Drug Delivery
Gao, Jingjing; Karp, Jeffrey M; Langer, Robert; Joshi, Nitin
Drug delivery technologies have been proven to improve treatment outcomes in many ways, including enhancing therapeutic efficacy, reducing toxicity, increasing patient compliance, and enabling entirely new medical treatments. As the therapeutic landscape has evolved from small-molecule drugs to a new generation of therapeutics including proteins, peptides, monoclonal antibodies, nucleic acids, and even live cells, drug delivery technologies have also evolved to meet their unique delivery needs.
</description>
<pubDate>Tue, 24 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163189</guid>
<dc:date>2023-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Altered DNA repair pathway engagement by engineered CRISPR-Cas9 nucleases</title>
<link>https://hdl.handle.net/1721.1/163188</link>
<description>Altered DNA repair pathway engagement by engineered CRISPR-Cas9 nucleases
Chauhan, Vikash P; Sharp, Phillip A; Langer, Robert
CRISPR-Cas9 introduces targeted DNA breaks that engage competing DNA repair pathways, producing a spectrum of imprecise insertion/deletion mutations (indels) and precise templated mutations (precise edits). The relative frequencies of these pathways are thought to primarily depend on genomic sequence and cell state contexts, limiting control over mutational outcomes. Here, we report that engineered Cas9 nucleases that create different DNA break structures engage competing repair pathways at dramatically altered frequencies. We accordingly designed a Cas9 variant (vCas9) that produces breaks which suppress otherwise dominant nonhomologous end-joining (NHEJ) repair. Instead, breaks created by vCas9 are predominantly repaired by pathways utilizing homologous sequences, specifically microhomology-mediated end-joining (MMEJ) and homology-directed repair (HDR). Consequently, vCas9 enables efficient precise editing through HDR or MMEJ while suppressing indels caused by NHEJ in dividing and nondividing cells. These findings establish a paradigm of targeted nucleases custom-designed for specific mutational applications.
</description>
<pubDate>Tue, 07 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163188</guid>
<dc:date>2023-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogels for RNA delivery</title>
<link>https://hdl.handle.net/1721.1/163187</link>
<description>Hydrogels for RNA delivery
Zhong, Ruibo; Talebian, Sepehr; Mendes, Bárbara B; Wallace, Gordon; Langer, Robert; Conde, João; Shi, Jinjun
RNA-based therapeutics have shown tremendous promise in disease intervention at the genetic level, and some have been approved for clinical use, including the recent COVID-19 messenger RNA vaccines. The clinical success of RNA therapy is largely dependent on the use of chemical modification, ligand conjugation or non-viral nanoparticles to improve RNA stability and facilitate intracellular delivery. Unlike molecular-level or nanoscale approaches, macroscopic hydrogels are soft, water-swollen three-dimensional structures that possess remarkable features such as biodegradability, tunable physiochemical properties and injectability, and recently they have attracted enormous attention for use in RNA therapy. Specifically, hydrogels can be engineered to exert precise spatiotemporal control over the release of RNA therapeutics, potentially minimizing systemic toxicity and enhancing in vivo efficacy. This Review provides a comprehensive overview of hydrogel loading of RNAs and hydrogel design for controlled release, highlights their biomedical applications and offers our perspectives on the opportunities and challenges in this exciting field of RNA delivery.
</description>
<pubDate>Mon, 20 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163187</guid>
<dc:date>2023-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>The Thermal Reactivity and Molecular Diversity of Particulate Organic Carbon in the Amazon River Mainstem</title>
<link>https://hdl.handle.net/1721.1/163186</link>
<description>The Thermal Reactivity and Molecular Diversity of Particulate Organic Carbon in the Amazon River Mainstem
Rosengard, Sarah Z.; Mauro S. Moura, Jose; Spencer, Robert G. M.; Johnson, Carl; McNichol, Ann; Boehman, Brenna; Galy, Valier
The Amazon River mobilizes one of the largest fluxes of particulate organic carbon (POC) from land to coastal ocean sediments, playing an important role in the long‐term sequestration of biospheric organic carbon in the ocean. Ramped oxidation (RPO) analyses of suspended sediments collected from the Amazon River mainstem, Solimões River, Madeira River, and Tapajós River presented an opportunity to parse riverine POC by thermal reactivity, extract the activation energy distributions of specific biomolecular pools in these samples, and characterize the molecular diversity of POC across the floodplain. The thermal reactivity data imply that POC from the Amazon River basin spans a wide but relatively homogeneous activation energy range across samples, suggesting that the degradation history of the organic carbon comprising riverine suspended particles is relatively constant across depths within the mainstem and different tributary locations. Coupling activation energy distributions to stable and radiocarbon isotopic analyses shows that ca. 85% of mainstem POC derives from a range of partially degraded terrestrial sources, likely organic matter from mineral soil horizons, and that a similar range of soil sources influences the biomolecular diversity in tributary samples. In agreement with earlier assessments, ca. 10% of the riverine POC flux is fresh vegetation and up to 5% of it is petrogenic organic matter. Expanded RPO analyses of samples across the Amazon river‐to‐ocean continuum would provide an opportunity to track the fate of these different organic matter pools downstream that is uniquely different from, but complementary to, past compound‐specific and bulk analyses of riverine POC.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163186</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Overcoming barriers to patient adherence: the case for developing innovative drug delivery systems</title>
<link>https://hdl.handle.net/1721.1/163185</link>
<description>Overcoming barriers to patient adherence: the case for developing innovative drug delivery systems
Baryakova, Tsvetelina H; Pogostin, Brett H; Langer, Robert; McHugh, Kevin J
Poor medication adherence is a pervasive issue with considerable health and socioeconomic consequences. Although the underlying reasons are generally understood, traditional intervention strategies rooted in patient-centric education and empowerment have proved to be prohibitively complex and/or ineffective. Formulating a pharmaceutical in a drug delivery system (DDS) is a promising alternative that can directly mitigate many common impediments to adherence, including frequent dosing, adverse effects and a delayed onset of action. Existing DDSs have already positively influenced patient acceptability and improved rates of adherence across various disease and intervention types. The next generation of systems have the potential to instate an even more radical paradigm shift by, for example, permitting oral delivery of biomacromolecules, allowing for autonomous dose regulation and enabling several doses to be mimicked with a single administration. Their success, however, is contingent on their ability to address the problems that have made DDSs unsuccessful in the past.
</description>
<pubDate>Mon, 27 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163185</guid>
<dc:date>2023-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Rousseau's Freedom as Recognition</title>
<link>https://hdl.handle.net/1721.1/163184</link>
<description>Rousseau's Freedom as Recognition
Perilla, Julian
To yearn for freedom is to want to be seen by others as someone. Rousseau, I believe, held such a conception of freedom, alongside his intricate theory of human passions. This essay examines how freedom relates to such passions, and in particular, to the Rousseauian notion of amour-propre. Importantly, the aim here is both interpretive and positive. The essay seeks to locate Rousseau within the old republican tradition in a manner that parts ways with most contemporary readings of Rousseau. But, in doing so, it argues that republican freedom essentially involves a particular status and the recognition of such status by others. On this Rousseauian view, one is free to the extent that others see one as a limit to their arbitrary interference and as entitled to interfere with them non-arbitrarily. Finally, republican freedom, so understood, is shown to be essential to meeting the demands of healthy amour-propre, thereby bringing Rousseau's political and psychological theories closer together.
</description>
<pubDate>Thu, 19 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163184</guid>
<dc:date>2025-06-19T00:00:00Z</dc:date>
</item>
<item>
<title>Electric Field Inhomogeneity in Colloidal QD‐LEDs</title>
<link>https://hdl.handle.net/1721.1/163183</link>
<description>Electric Field Inhomogeneity in Colloidal QD‐LEDs
Srinivasan, Shreyas; Zhang, Ruiqi; Dillender, Mike; Nguyen, Thienan; Laitz, Madeleine; Kim, Taehyung; Kim, Kwang‐Hee; Kim, Tae‐Gon; Bawendi, Moungi; Bulović, Vladimir
It is demonstrated that the electroluminescent layer in a colloidal quantum dot light emitting diode (QD-LED), formed by stochastic methods such as spin-coating, incorporates morphological thickness inhomogeneities, resulting in local electric field variations. These inhomogeneities can be directly visualized and quantified using confocal micro-photoluminescence (PL) and micro-electroluminescence (EL), as shown in QD-LEDs with stochastically processed InP/ZnSe/ZnS colloidal quantum dots (QDs). Around 5% of the device shows EL darkspots under forward bias and PL hotspots under photoexcitation, with a strong spatial correlation between these features. The PL hotspots (EL darkspots) correspond to thicker regions in the stochastically-processed QD film. This thickness variation leads to two distinct QD sub-populations responding differently to optical excitation. Time- and energy-resolved spectral diffusion measurements reveal that most excitons belong to a “more-mobile” sub-population with fast energy transfer and short, electric field-dependent lifetimes, while a smaller fraction belongs to a “less-mobile” sub-population with slower energy transfer and longer, electric field-independent lifetimes. The “less-mobile” excitons correlate with thicker QD regions. These findings shed light on the local electric field inhomogeneity in QD-LEDs, offering insights into device operation, possible degradation mechanisms, and strategies for developing stochastically-processed micro-QD-LEDs.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163183</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Favorable and Challenging Flexible Plastic Packaging Waste Flows: A Material Flow Analysis</title>
<link>https://hdl.handle.net/1721.1/163182</link>
<description>Addressing Favorable and Challenging Flexible Plastic Packaging Waste Flows: A Material Flow Analysis
Makarova, Oksana A; Ravi, Basuhi; Sobkowicz, Margaret J; Masato, Davide; Olivetti, Elsa A
The majority of post-consumer flexible plastic packaging (FPP) in the United States ends up in landfills and incinerators. This represents a significant material loss because FPP, also referred to as plastic films or foils, comprises up to half of all plastic packaging. Since FPP encompasses a diverse range of products with varying recycling potentials, improving material recovery rates requires a detailed understanding of the composition and quantities of used films. This study quantifies post-consumer FPP flows in the US for 2021 and estimates the fraction most suitable for mechanical recycling. We conducted a material flow analysis (MFA) by reconciling publicly available data on packaging film generation and recycling from the US and comparable economies. We then categorized post-consumer FPP into three broad categories based on factors affecting the quality of the resulting mechanically recycled material. Our analysis reveals that only 3%–8% of the estimated 5–15 million metric tonnes of post-consumer film were recycled in 2021. Furthermore, at most 40% of the FPP could be readily mechanically recyclable, while up to half would be deemed non-recoverable due to techno-economic constraints. The actual proportions of challenging-to-recycle and non-recoverable FPP might be even higher, underscoring the need for updated studies on film generation and waste composition to assess the feasibility of scaling up nationwide film recycling.
</description>
<pubDate>Thu, 05 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163182</guid>
<dc:date>2025-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Initiation of Sediment Resuspension by Combined Wave‐Current Conditions in an Artificial Seagrass Meadow</title>
<link>https://hdl.handle.net/1721.1/163181</link>
<description>Initiation of Sediment Resuspension by Combined Wave‐Current Conditions in an Artificial Seagrass Meadow
Zhao, Chuyan; Nepf, Heidi
Laboratory experiments examined the impact of current on ripple formation and the onset of wave‐driven resuspension within an artificial seagrass meadow modeled after Zostera marina. Within the meadow, the current was less than or equal to the wave velocity. Meadows were constructed with three shoot densities: 247, 455 and 962 stems/m2, and each shoot had six flexible blades. The sediment bed, consisting of 65 μm spherical grains, was initially 1.4 cm thick, allowing ripple and scour hole formation. The formation of wave‐orbital ripples was dependent on meadow density and current magnitude. Over bare beds and sparse meadows, ripples were present and not impacted by the addition of current, such that the wave velocity resuspension threshold with current was the same as that in pure wave conditions. In medium‐density meadows, the addition of current reduced ripple height due to plant‐generated turbulence. As current increased, ripple size and ripple‐generated turbulence decreased, requiring a higher wave velocity to resuspend sediment. That is, for medium‐density meadows, the critical wave velocity increased as the current velocity increased. Finally, in dense meadows, no ripples formed and resuspension was driven by a critical value of plant‐induced turbulence, which was proportional to the total velocity (current plus wave velocity), such that as the current velocity increased, the critical wave velocity decreased. A model predicting the critical wave velocity for the dense meadow was derived based on the assumption that resuspension was driven by a critical level of stem‐generated turbulence.
</description>
<pubDate>Sun, 08 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163181</guid>
<dc:date>2025-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal and Dimensional Stability of Photocatalytic Material ZnPS3 Under Extreme Environmental Conditions</title>
<link>https://hdl.handle.net/1721.1/163180</link>
<description>Thermal and Dimensional Stability of Photocatalytic Material ZnPS3 Under Extreme Environmental Conditions
Mukherjee, Abhishek; Santamaría‐García, Vivian J; Wlodarczyk, Damian; Somakumar, Ajeesh K; Sybilski, Piotr; Siebenaller, Ryan; Rowe, Emmanuel; Narayanan, Saranya; Susner, Michael A; Lozano‐Sanchez, L Marcelo; Suchocki, Andrzej; Palma, Julio L; Boriskina, Svetlana V
Zinc phosphorus trisulfide (ZnPS3), a promising material for photocatalysis and energy storage, is shown in this study to exhibit remarkable stability under extreme conditions. Its optical and structural properties are explored under high pressure and cryogenic temperatures using photoluminescence (PL) spectroscopy, Raman scattering, and density functional theory (DFT). The experimental results identify a pressure-induced phase transition starting at 6.75 GPa and stabilizing by 12.5 GPa, after which ZnPS3 demonstrates robust stability across a broad pressure range up to 24.5 GPa. DFT calculations support these observations and further predict a semiconductor-to-semimetal transition at 100 GPa, while PL measurements reveal defect-assisted emission that quenches under pressure due to enhanced non-radiative recombination. At cryogenic temperatures, PL quenching intensifies as non-radiative processes dominate, driven by a rising Grüneisen parameter and reduced phonon population. Cryogenic X-ray diffraction (XRD) also reveals a high mean thermal expansion coefficient (TEC) of (4.369 ± 0.393) × 10−5 K−1, among the highest reported for 2D materials. This unique combination of tunable electronic properties under low pressure and high thermal sensitivity makes ZnPS3 a strong candidate for sensing applications in extreme environments.
</description>
<pubDate>Fri, 27 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163180</guid>
<dc:date>2025-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Formulation and Calibration of CATKE, a One‐Equation Parameterization for Microscale Ocean Mixing</title>
<link>https://hdl.handle.net/1721.1/163179</link>
<description>Formulation and Calibration of CATKE, a One‐Equation Parameterization for Microscale Ocean Mixing
Wagner, Gregory LeClaire; Hillier, Adeline; Constantinou, Navid C; Silvestri, Simone; Souza, Andre; Burns, Keaton J; Hill, Chris; Campin, Jean‐Michel; Marshall, John; Ferrari, Raffaele
We describe CATKE, a parameterization for fluxes associated with small‐scale or “microscale” ocean turbulent mixing on scales between 1 and 100 m. CATKE uses a downgradient formulation that depends on a prognostic turbulent kinetic energy (TKE) variable and a diagnostic mixing length scale that includes a dynamic convective adjustment (CA) component. With its dynamic convective mixing length, CATKE predicts not just the depth spanned by convective plumes but also the characteristic convective mixing timescale, an important aspect of turbulent convection not captured by simpler static CA schemes. As a result, CATKE can describe the competition between convection and other processes such as shear‐driven mixing and baroclinic restratification. To calibrate CATKE, we use Ensemble Kalman Inversion to minimize the error between 21 large eddy simulations (LESs) and predictions of the LES data by CATKE‐parameterized single column simulations at three different vertical resolutions. We find that CATKE makes accurate predictions of both idealized and realistic LES compared to microscale turbulence parameterizations commonly used in climate models.
</description>
<pubDate>Mon, 21 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163179</guid>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>A GPU‐Based Ocean Dynamical Core for Routine Mesoscale‐Resolving Climate Simulations</title>
<link>https://hdl.handle.net/1721.1/163174</link>
<description>A GPU‐Based Ocean Dynamical Core for Routine Mesoscale‐Resolving Climate Simulations
Silvestri, Simone; Wagner, Gregory L; Constantinou, Navid C; Hill, Christopher N; Campin, Jean‐Michel; Souza, Andre N; Bishnu, Siddhartha; Churavy, Valentin; Marshall, John; Ferrari, Raffaele
We describe an ocean hydrostatic dynamical core implemented in Oceananigans optimized for Graphical Processing Unit (GPU) architectures. On 64 A100 GPUs, equivalent to 16 computational nodes in current state‐of‐the‐art supercomputers, our dynamical core can simulate a decade of near‐global ocean dynamics per wall‐clock day at an 8‐km horizontal resolution; a resolution adequate to resolve the ocean's mesoscale eddy field. Such efficiency, achieved with relatively modest hardware resources, suggests that climate simulations on GPUs can incorporate fully eddy‐resolving ocean models. This removes a major source of systematic bias in current IPCC coupled model projections, the parameterization of ocean eddies, and represents a major advance in climate modeling. We discuss the computational strategies, focusing on GPU‐specific optimization and numerical implementation details that enable such high performance.
</description>
<pubDate>Mon, 21 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163174</guid>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>A new upper bound for the growth factor in Gaussian elimination with complete pivoting</title>
<link>https://hdl.handle.net/1721.1/163173</link>
<description>A new upper bound for the growth factor in Gaussian elimination with complete pivoting
Bisain, Ankit; Edelman, Alan; Urschel, John
The growth factor in Gaussian elimination measures how large the entries of an LU factorization can be relative to the entries of the original matrix. It is a key parameter in error estimates, and one of the most fundamental topics in numerical analysis. We produce an upper bound of n^(0.2079 ln n + 0.91) for the growth factor in Gaussian elimination with complete pivoting — the first improvement upon Wilkinson’s original 1961 bound of 2n^(0.25 ln n + 0.5).
</description>
<pubDate>Wed, 26 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163173</guid>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Does U.S. Immigration Policy Facilitate Financial Misconduct?</title>
<link>https://hdl.handle.net/1721.1/163172</link>
<description>Does U.S. Immigration Policy Facilitate Financial Misconduct?
Dai, Ruiting; Dong, Xuanjun; Shroff, Nemit; Tan, Qin
We examine whether U.S. immigration policy, specifically the H-1B visa program, affects the likelihood of financial misconduct. We argue that employers have leverage over employees on H-1B visas because such employees must maintain H-1B–eligible employment to legally reside in the United States. We posit that companies relying on H-1B visas to hire workers in accounting roles have an increased ability to misreport their financial statements due to the greater costs H-1B employees face if they are unexpectedly fired for not following the demands of their bosses or for blowing the whistle on misconduct. Using the sharp reduction in the H-1B visa cap in 2004 as a shock to such employment, we find that companies that relied on this visa program for accounting roles pre-shock experience a 2.3 percentage point decline in accounting irregularities post-shock. Cross-sectional tests show that the reduction in irregularities is greater in companies where H-1B employees have (1) a greater influence on financial reporting or (2) fewer job opportunities. In addition, the relation between H-1B visa use and irregularities is stronger in companies whose investors are more focused on near-term earnings targets. We corroborate our findings using the outcome of H-1B visa lotteries as shocks to such employment.
</description>
<pubDate>Sun, 29 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163172</guid>
<dc:date>2025-06-29T00:00:00Z</dc:date>
</item>
<item>
<title>Data‐Driven Modeling of 4D Ocean and Coastal Acidification in the Massachusetts and Cape Cod Bays From Surface Measurements</title>
<link>https://hdl.handle.net/1721.1/163171</link>
<description>Data‐Driven Modeling of 4D Ocean and Coastal Acidification in the Massachusetts and Cape Cod Bays From Surface Measurements
Champenois, B; Bastidas, C; LaBash, B; Sapsis, TP
A significant portion of atmospheric CO2 emissions is absorbed by the ocean, resulting in acidified seawater and altered carbonate composition that is harmful to marine life. Despite detrimental effects, assessing ocean and coastal acidification (OCA) is difficult due to the scarcity of in situ measurements and the high costs of computational modeling. We develop a parsimonious data‐driven framework to model indicators of OCA and test it in the Massachusetts Bay and Stellwagen Bank, a region with fishing and tourism industries affected by OCA. First, we trained a neural network to predict in‐depth fields for temperature and salinity (x, y, z) using surface quantities from satellites and in situ measurements (x, y). The relationship between 2D surface and 3D properties is captured through the in‐depth modes and coefficients obtained from principal component analysis applied to a high‐resolution historical reanalysis data set. Next, we used Bayesian regression methods to estimate region‐specific relationships for in‐depth total alkalinity (TA), dissolved inorganic carbon (DIC), and aragonite saturation state (ΩAr) as functions of temperature, salinity, and chlorophyll. Lastly, 4D daily field predictions are generated from surface measurements with a spatial resolution of 4 km horizontally and 45 sigma levels vertically. The model's performance is evaluated using withheld measurements across depths, locations, and seasons with RMSEs of 1.59°C, 0.31 PSU, 37.54 μmol⋅kg⁻¹, and 0.42 for temperature, salinity, TA, DIC, and ΩAr, respectively, at one withheld location. The framework is useful for understanding OCA and includes uncertainty quantification for future planning and optimal sensor placement.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163171</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Arsenic Accumulation in Microbial Biomass and the Interpretation of Signals of Early Arsenic‐Based Metabolisms</title>
<link>https://hdl.handle.net/1721.1/163170</link>
<description>Arsenic Accumulation in Microbial Biomass and the Interpretation of Signals of Early Arsenic‐Based Metabolisms
Madrigal‐Trejo, David; Baldes, Matthew J; Tamura, Nobumichi; Klepac‐Ceraj, Vanja; Bosak, Tanja
Carbonaceous particles that concentrate arsenic in microbialites as old as ~3.5 Ga are similar to As-rich organic globules in modern microbialites. The former particles have been interpreted as tracers of As cycling by early microbial metabolisms. However, it is unclear if arsenic accumulation is a consequence of biological activity or passive postmortem binding of arsenic by organic matter during diagenesis in volcanically influenced, As-rich environments. Here, we address this uncertainty by evaluating the concentrations, speciation, and detectability of As in active or heat-killed biofilms formed by cyanobacteria or anoxygenic photosynthetic microbes exposed to environmentally relevant concentrations of As(III) or As(V) (50 μM to 3 mM). The genomes or metagenomes of these biofilms contain genes involved in detoxifying or energy-yielding As metabolisms. Biomass accumulates As from the solution in a concentration-dependent manner and with a preference for oxidized As(V) over As(III). Autoclaved biomass accumulates As even more strongly than active biomass, likely because living biofilms actively detoxify As. Active biofilms oxidize and reduce As and accumulate both As(III) and As(V), whereas a small fraction of As(V) can be reduced in inactive biofilms that bind As during diagenesis. Arsenic enrichments in the biomass are detectable by X-ray based spectroscopy techniques (XRF, EPMA-WDS) that are commonly used to analyze geological materials. These findings enable the reconstruction of past active and passive interactions of microbial biomass with arsenic in fossilized microbial biofilms and microbialites from the early Earth.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163170</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Sensor-Agnostic, LSTM-Based Human Motion Prediction Using sEMG Data</title>
<link>https://hdl.handle.net/1721.1/163169</link>
<description>Sensor-Agnostic, LSTM-Based Human Motion Prediction Using sEMG Data
Koo, Bon Ho; Siu, Ho Chit; Petersen, Lonnie G.
The use of surface electromyography (sEMG) for conventional motion classification and prediction has had limitations due to sensor hardware differences. With the popularization of deep learning-based approaches to the application of motion prediction, this study explores the effects that different hardware sensor platforms have on the performance of a deep learning neural network trained to predict the one-degree-of-freedom (DoF) angular trajectory of a human. Two different sEMG sensor platforms were used to collect raw data from subjects conducting exercises, which was used to train a neural network designed to predict the future angular trajectory of the arm. The results show that the raw data originating from different sensor hardware with different configurations (including the communication method, data acquisition unit (DAQ) usage, electrode configuration, buffering method, preprocessing method, and experimental variables like the sampling frequency) produced bi-LSTM networks that performed similarly. This points to the hardware-agnostic nature of such deep learning networks.
</description>
<pubDate>Tue, 02 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163169</guid>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering kinetics of TLR7/8 agonist release from bottlebrush prodrugs enables tumor-focused immune stimulation</title>
<link>https://hdl.handle.net/1721.1/163168</link>
<description>Engineering kinetics of TLR7/8 agonist release from bottlebrush prodrugs enables tumor-focused immune stimulation
Bhagchandani, Sachin H; Vohidov, Farrukh; Milling, Lauren E; Tong, Evelyn Yuzhou; Brown, Christopher M; Ramseier, Michelle L; Liu, Bin; Fessenden, Timothy B; Nguyen, Hung V-T; Kiel, Gavin R; Won, Lori; Langer, Robert S; Spranger, Stefani; Shalek, Alex K; Irvine, Darrell J; Johnson, Jeremiah A
Imidazoquinolines (IMDs), such as resiquimod (R848), are of great interest as potential cancer immunotherapies because of their ability to activate Toll-like receptor 7 (TLR7) and/or TLR8 on innate immune cells. Nevertheless, intravenous administration of IMDs causes severe immune-related toxicities, and attempts to improve their tissue-selective exposure while minimizing acute systemic inflammation have proven difficult. Here, using a library of R848 “bottlebrush prodrugs” (BPDs) that differ only by their R848 release kinetics, we explore how the timing of R848 exposure affects immune stimulation in vitro and in vivo. These studies led to the discovery of R848-BPDs that exhibit optimal activation kinetics to achieve potent stimulation of myeloid cells in tumors and substantial reductions in tumor growth following systemic administration in mouse syngeneic tumor models without any observable systemic toxicity. These results suggest that release kinetics can be tuned at the molecular level to provide safe yet effective systemically administered immunostimulant prodrugs for next-generation cancer immunotherapies.
</description>
<pubDate>Wed, 19 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163168</guid>
<dc:date>2023-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>A microneedle vaccine printer for thermostable COVID-19 mRNA vaccines</title>
<link>https://hdl.handle.net/1721.1/163167</link>
<description>A microneedle vaccine printer for thermostable COVID-19 mRNA vaccines
vander Straeten, Aurélien; Sarmadi, Morteza; Daristotle, John L; Kanelli, Maria; Tostanoski, Lisa H; Collins, Joe; Pardeshi, Apurva; Han, Jooli; Varshney, Dhruv; Eshaghi, Behnaz; Garcia, Johnny; Forster, Timothy A; Li, Gary; Menon, Nandita; Pyon, Sydney L; Zhang, Linzixuan; Jacob-Dolan, Catherine; Powers, Olivia C; Hall, Kevin; Alsaiari, Shahad K; Wolf, Morris; Tibbitt, Mark W; Farra, Robert; Barouch, Dan H; Langer, Robert; Jaklenec, Ana
Decentralized manufacture of thermostable mRNA vaccines in a microneedle patch (MNP) format could enhance vaccine access in low-resource communities by eliminating the need for a cold chain and trained healthcare personnel. Here we describe an automated process for printing MNP Coronavirus Disease 2019 (COVID-19) mRNA vaccines in a standalone device. The vaccine ink is composed of lipid nanoparticles loaded with mRNA and a dissolvable polymer blend that was optimized for high bioactivity by screening formulations in vitro. We demonstrate that the resulting MNPs are shelf stable for at least 6 months at room temperature when assessed using a model mRNA construct. Vaccine loading efficiency and microneedle dissolution suggest that efficacious, microgram-scale doses of mRNA encapsulated in lipid nanoparticles could be delivered with a single patch. Immunizations in mice using manually produced MNPs with mRNA encoding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike protein receptor-binding domain stimulate long-term immune responses similar to those of intramuscular administration.
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163167</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Topical application of Lactobacilli successfully eradicates Pseudomonas aeruginosa biofilms and promotes wound healing in chronic wounds</title>
<link>https://hdl.handle.net/1721.1/163166</link>
<description>Topical application of Lactobacilli successfully eradicates Pseudomonas aeruginosa biofilms and promotes wound healing in chronic wounds
Li, Zhihao; Zhang, Sixuan; Zuber, Flavia; Altenried, Stefanie; Jaklenec, Ana; Langer, Robert; Ren, Qun
Chronic wounds are difficult to treat due to the presence of biofilm which prevents wound healing. Pseudomonas aeruginosa is one of the most common pathogens found in chronic wounds and conventional treatment strategies have been ineffective in the eradication of its biofilm, without harming the surrounding healthy tissue at the same time. Here, we introduced an innovative approach applying the probiotic product Bio-K+ (containing three lactobacilli) topically as an antimicrobial and antibiofilm agent. We identified lactic acid as the main active component. While antibiotics and antiseptics such as silver-ions only demonstrated limited efficacy, Bio-K+ was able to completely eradicate mature P. aeruginosa biofilms established in an in-vitro and ex-vivo human skin model. Furthermore, it demonstrated biocompatibility in the co-culture with human dermal fibroblasts and accelerated the migration of fibroblasts in a cell migration assay promoting wound healing. To enhance clinical practicability, we introduced Bio-K+ into the hydrocolloid dressing Aquacel, achieving sustained release of lactic acid and biofilm eradication. This new treatment approach applying probiotics could represent a major improvement in the management of chronic wounds and can be extended in treating other biofilm-associated infections.
</description>
<pubDate>Wed, 01 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163166</guid>
<dc:date>2023-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial development of nebulized mRNA delivery formulations for the lungs</title>
<link>https://hdl.handle.net/1721.1/163165</link>
<description>Combinatorial development of nebulized mRNA delivery formulations for the lungs
Jiang, Allen Y; Witten, Jacob; Raji, Idris O; Eweje, Feyisayo; MacIsaac, Corina; Meng, Sabrina; Oladimeji, Favour A; Hu, Yizong; Manan, Rajith S; Langer, Robert; Anderson, Daniel G
Inhaled delivery of mRNA has the potential to treat a wide variety of diseases. However, nebulized mRNA lipid nanoparticles (LNPs) face several unique challenges including stability during nebulization and penetration through both cellular and extracellular barriers. Here we develop a combinatorial approach addressing these barriers. First, we observe that LNP formulations can be stabilized to resist nebulization-induced aggregation by altering the nebulization buffer to increase the LNP charge during nebulization, and by the addition of a branched polymeric excipient. Next, we synthesize a combinatorial library of ionizable, degradable lipids using reductive amination, and evaluate their delivery potential using fully differentiated air–liquid interface cultured primary lung epithelial cells. The final combination of ionizable lipid, charge-stabilized formulation and stability-enhancing excipient yields a significant improvement in lung mRNA delivery over current state-of-the-art LNPs and polymeric nanoparticles.
</description>
<pubDate>Mon, 20 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163165</guid>
<dc:date>2023-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Nanoparticle‐Mediated Delivery of Anti‐PU.1 siRNA via Localized Intracisternal Administration Reduces Neuroinflammation</title>
<link>https://hdl.handle.net/1721.1/163164</link>
<description>Nanoparticle‐Mediated Delivery of Anti‐PU.1 siRNA via Localized Intracisternal Administration Reduces Neuroinflammation
Ralvenius, William T; Andresen, Jason L; Huston, Margaret M; Penney, Jay; Bonner, Julia Maeve; Fenton, Owen S; Langer, Robert; Tsai, Li‐Huei
Neuroinflammation is a hallmark of neurodegenerative disorders including Alzheimer's disease (AD). Microglia, the brain's immune cells, express many of the AD‐risk loci identified in genome wide association studies and present a promising target for anti‐inflammatory RNA therapeutics but are difficult to transfect with current methods. Here, several lipid nanoparticle (LNP) formulations are examined, and a lead candidate that supports efficient RNA delivery in cultures of human stem cell‐derived microglia‐like cells (iMGLs) and animal models of neuroinflammation is identified. The lead microglia LNP (MG‐LNP) formulation shows minimal toxicity and improves delivery efficiency to inflammatory iMGLs, suggesting a preference for delivery into activated microglia. Intraperitoneal injection of the MG‐LNP formulation generates widespread expression of the delivered reporter construct in all organs, whereas local intracisternal injection directly into the cerebrospinal fluid leads to preferential expression in the brain. It is shown that LNP‐mediated delivery of siRNA targeting the PU.1 transcription factor, a known AD‐risk locus, successfully reduces PU.1 levels in iMGLs and reduces neuroinflammation in mice injected with LPS and in CK‐p25 mice that mimic the chronic neuroinflammation seen in AD patients. The LNP formulation represents an effective RNA delivery vehicle when applied intrathecally and can be broadly utilized to test potential neuroinflammation‐directed gene therapies.
</description>
<pubDate>Thu, 22 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163164</guid>
<dc:date>2024-02-22T00:00:00Z</dc:date>
</item>
<item>
<title>CRISPR–Cas9 delivery strategies for the modulation of immune and non-immune cells</title>
<link>https://hdl.handle.net/1721.1/163163</link>
<description>CRISPR–Cas9 delivery strategies for the modulation of immune and non-immune cells
Alsaiari, Shahad K; Eshaghi, Behnaz; Du, Bujie; Kanelli, Maria; Li, Gary; Wu, Xunhui; Zhang, Linzixuan; Chaddah, Mehr; Lau, Alicia; Yang, Xin; Langer, Robert; Jaklenec, Ana
CRISPR–Cas9 genome editing technology is a promising tool for genetically engineering immune cells and modulating immune systems. Although ex vivo genome editing of immune cells has reached clinical trials, in vivo application is still restricted by the instability and inefficient delivery of CRISPR–Cas9 components to immune cells through circulation. In this Review, we summarize ex vivo and in vivo strategies to deliver CRISPR–Cas9 components to both non-immune and immune cells. We review the progress made in non-immune cells because it offers insights that can be applied to advancing research in immune cells. We also discuss principles and challenges of immune system modulation using CRISPR–Cas9 genome editing technology.
</description>
<pubDate>Wed, 16 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163163</guid>
<dc:date>2024-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Degradable poly(β-amino ester) microparticles for cleansing products and food fortification</title>
<link>https://hdl.handle.net/1721.1/163162</link>
<description>Degradable poly(β-amino ester) microparticles for cleansing products and food fortification
Zhang, Linzixuan; Xiao, Ruiqing; Jin, Tianyi; Pan, Xinyan; Fransen, Katharina A; Alsaiari, Shahad K; Lau, Alicia; He, Ruizhe; Han, Jooli; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; Olsen, Bradley D; Alexander-Katz, Alfredo; Smith, Zachary P; Langer, Robert; Jaklenec, Ana
Microplastic pollution is a pressing global crisis caused by the extensive use of nondegradable microplastic materials in daily activities. One effective approach to mitigate this issue is to replace nondegradable plastics with degradable materials that have properties amendable for targeted applications. Here we present the development of a degradable microparticle (MP) platform based on a poly(β-amino ester) (PAE) that degrades into sugar and amino acid derivatives. This PAE MP platform showed functional replacement of nondegradable microplastics used in cleansing products and food fortification. In cleansing products, PAE MPs effectively enhanced the cleansing efficiency of a representative rinse-off product and showed effective removal of potentially toxic elements, as an alternative of traditional nondegradable microbeads. In food fortification, PAE MPs provided robust protection for multiple essential vitamins and minerals against extensive cooking and storage conditions with rapid nutrient release in a simulated human digestion system. Collectively, these PAE MPs present a potential platform to replace microplastic usage on a global scale in many applications.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163162</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging next-generation materials for cancer neuroscience therapies in the central nervous system</title>
<link>https://hdl.handle.net/1721.1/163161</link>
<description>Leveraging next-generation materials for cancer neuroscience therapies in the central nervous system
Bernstock, Joshua D; Johnston, Benjamin R; Friedman, Gregory K; Chiocca, EA; Langer, Robert; Srinivasan, Shriya S
Interdisciplinary strategies bridging oncology, neuroscience, bioelectronics and materials science will facilitate the development of next-generation therapies and devices for cancers of the central nervous system.
</description>
<pubDate>Mon, 22 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163161</guid>
<dc:date>2024-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>On‐Patient Temporary Medical Record for Accurate, Time‐Sensitive Information at the Point of Care</title>
<link>https://hdl.handle.net/1721.1/163160</link>
<description>On‐Patient Temporary Medical Record for Accurate, Time‐Sensitive Information at the Point of Care
Collins, Joe; Han, Jooli; Sarmadi, Morteza; Allison‐Logan, Stephanie; Straeten, Aurelien vander; Perkinson, Collin F; Acolaste, Sarah; Kanelli, Maria; Daristotle, John; Karchin, Ari; Henderson, Mitchell; Cruz, Mache; Artzi, Dolev; Alsaiari, Shahad K; Zhang, Linzixuan; Levy, Lauren; Wood, Lowell; Jing, Lihong; McHugh, Kevin J; Bawendi, Moungi G; Langer, Robert; Jaklenec, Ana
Accurate medical recordkeeping is important for personal and public health. Conventional forms of on‐patient medical information, such as medical alert bracelets or finger‐markings, may compromise patient privacy because they are readily visible to other people. Here, the development of an invisible, temporary, and easily deployable on‐patient medical recordkeeping system is reported. Information is stored in unique patterns of spatially distributed near‐infrared (NIR) fluorescent quantum dots (QDs), which are delivered to the skin using dissolvable microneedle arrays. The patterns are invisible to the naked eye but detectable with an infrared camera, which can extract information with &gt;98% accuracy using automated pattern recognition software. By encapsulating NIR QDs in an FDA‐approved biodegradable polymer, biodegradation rates can be tuned so that the encoded medical information can be conveyed in both a spatial and temporal manner, with some components fading within 100 days and others persisting for 6 months. This may be particularly useful for administering a series of vaccinations or treatments by indicating if enough time has passed for the patient to receive the next dose. Importantly, this system contains no personal information, does not require connection to a centralized database, and is not visible to the naked eye, ensuring patient privacy.
</description>
<pubDate>Thu, 18 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163160</guid>
<dc:date>2024-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation of optimally windowed chirp signals in industrial rheological measurements: method development and data processing</title>
<link>https://hdl.handle.net/1721.1/163157</link>
<description>Evaluation of optimally windowed chirp signals in industrial rheological measurements: method development and data processing
Perego, Alessandro; Vadillo, Damien C.; Mills, Matthew J. L.; Das, Mohua; McKinley FRS, Gareth H.
The optimally windowed chirp (OWCh) methodology offers an alternative to traditional discrete frequency sweeps, acquiring complete rheological spectra in seconds while preserving data density and accuracy. For thermorheologically simple materials, OWCh accelerates data collection, enabling rapid creation of time–temperature superposition (tTS) master curves, potentially saving hours of instrument time. For mutating materials, such as those undergoing curing, OWCh facilitates detailed rheological characterization of viscoelastic properties throughout these transition events. We implemented OWCh within an industrial analytical research framework using commercially available rheometers. This integration is enhanced by two custom Python packages, piblin and hermes-rheo, which streamline and automate analysis of rheological datasets. For thermorheologically simple materials, this framework reduces tTS master curve data collection time by 40% while increasing data density by an order of magnitude. For mutating materials, we leverage the mutation number to design OWCh waveforms, effectively probing the characteristic timescale of fast thermomechanical transitions during curing experiments.
</description>
<pubDate>Fri, 15 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163157</guid>
<dc:date>2025-08-15T00:00:00Z</dc:date>
</item>
<item>
<title>Exponential Speedups for Quantum Walks in Random Hierarchical Graphs</title>
<link>https://hdl.handle.net/1721.1/163156</link>
<description>Exponential Speedups for Quantum Walks in Random Hierarchical Graphs
Balasubramanian, Shankar; Li, Tongyang; Harrow, Aram W.
There are few known exponential speedups for quantum algorithms and these tend to fall into even fewer families. One speedup that has mostly resisted generalization is the use of quantum walks to traverse the welded-tree graph, due to Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman. We show how to generalize this to a large class of hierarchical graphs in which the vertices are grouped into “supervertices” which are arranged according to a d-dimensional lattice. Supervertices can have different sizes, and edges between supervertices correspond to random connections between their constituent vertices. The hitting times of quantum walks on these graphs are related to the localization properties of zero modes in certain disordered tight binding Hamiltonians. The speedups range from superpolynomial to exponential, depending on the underlying dimension and the random graph model. We also provide concrete realizations of these hierarchical graphs, and introduce a general method for constructing graphs with efficient quantum traversal times using graph sparsification.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163156</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mean robust optimization</title>
<link>https://hdl.handle.net/1721.1/163155</link>
<description>Mean robust optimization
Wang, Irina; Becker, Cole; Van Parys, Bart; Stellato, Bartolomeo
Robust optimization is a tractable and expressive technique for decision-making under uncertainty, but it can lead to overly conservative decisions when pessimistic assumptions are made on the uncertain parameters. Wasserstein distributionally robust optimization can reduce conservatism by being data-driven, but it often leads to very large problems with prohibitive solution times. We introduce mean robust optimization, a general framework that combines the best of both worlds by providing a trade-off between computational effort and conservatism. We propose uncertainty sets constructed based on clustered data rather than on observed data points directly thereby significantly reducing problem size. By varying the number of clusters, our method bridges between robust and Wasserstein distributionally robust optimization. We show finite-sample performance guarantees and explicitly control the potential additional pessimism introduced by any clustering procedure. In addition, we prove conditions for which, when the uncertainty enters linearly in the constraints, clustering does not affect the optimal solution. We illustrate the efficiency and performance preservation of our method on several numerical examples, obtaining multiple orders of magnitude speedups in solution time with little-to-no effect on the solution quality.
</description>
<pubDate>Thu, 28 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163155</guid>
<dc:date>2024-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Forced Gas Convection for Uniform Freezing of Lyophilization Vials</title>
<link>https://hdl.handle.net/1721.1/163154</link>
<description>Forced Gas Convection for Uniform Freezing of Lyophilization Vials
Burcat, Steven J.; Kadambi, Rohan P.; Stratta, Lorenzo; Braatz, Richard D.; Pisano, Roberto; Slocum, Alexander H.; Trout, Bernhardt L.
Purpose Conventional shelf-freezing in pharmaceutical lyophilization suffers from batch variation and is potentially incompatible with emerging continuous lyophilization systems. This work presents a forced gas convective freezing chamber for suspended vials in cross-flow to improve the quality of the freezing process and meet continuous lyophilization needs. Methods First, computational fluid dynamics simulations were performed to determine key process parameters. Then, physical chambers were built to meet these requirements. Sets of twenty 10R vials containing 3 mL of aqueous solution were frozen to characterize the per-vial heat transfer. Additionally, a novel nucleation technique was investigated in which conditioned vials were exposed to an impulse of &lt;−30 °C gas. Finally, frozen vials were completely dried in 12 h in an attached vacuum chamber. Results The chambers conditioned vials from 25 °C to −1 °C in under 20 min, with final vial temperatures varying by less than 0.5 °C. The impulse technique induced nucleation in all vials within 30 s without significantly cooling them. After nucleation, the system accessed slow (0.05 g/min) and rapid (1.0 g/min) solidification rates, as well as post-solidification procedures including typical ramp and hold protocols. Dried vials had residual moisture below 2.5 wt% and showed no signs of collapse. Conclusions This freezing chamber was demonstrated to track gas temperature setpoints as low as −50 °C within ±1 °C and induce nucleation in all vials virtually simultaneously, enabling excellent control of the freezing process. The chamber’s cooling via forced convection and its available front and back faces make it compatible with integration into a continuous lyophilization system.
</description>
<pubDate>Tue, 29 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163154</guid>
<dc:date>2025-07-29T00:00:00Z</dc:date>
</item>
<item>
<title>EMT-ciliary signaling in quasi-mesenchymal-stem-like cells drives therapeutic resistance and is a druggable vulnerability in triple-negative breast cancer</title>
<link>https://hdl.handle.net/1721.1/163153</link>
<description>EMT-ciliary signaling in quasi-mesenchymal-stem-like cells drives therapeutic resistance and is a druggable vulnerability in triple-negative breast cancer
Tessier, Camille E.; Derrien, Jennifer; Dupuy, Aurore M. M.; Pelé, Thomas; Moquet, Martin; Roul, Julie; Douillard, Elise; El Harrif, Camille; Pinson, Xavier; Le Gallo, Matthieu; Godey, Florence; Tas, Patrick; Viel, Roselyne; Grasset, Eloïse
Cancer therapeutic resistance is mediated, in part, by phenotypic heterogeneity and the plasticity of tumor cells, the latter being enabled by epithelial–mesenchymal transition (EMT). However, EMT in human cancer therapeutic response remains poorly understood. We developed patient-derived organoids (PDOs) from human triple-negative breast cancer (TNBC) and investigated their response to chemotherapy. We found that chemotherapy treatment kills the bulk of tumor cells in PDOs, but there is selective survival of malignant cells that had activated an EMT program, entered a quasi-mesenchymal, stem cell-like state and displayed primary cilia. We developed a family of small-molecule inhibitors of ciliogenesis and show that treatment with these inhibitors, or genetic ablation of primary cilia, is sufficient to suppress this chemoresistance via NFκB-induced cell death. We conclude that an EMT–ciliary signaling axis induces chemoresistance in quasi-mesenchymal ciliated stem-like cells to help tumors evade chemotherapy and represents a druggable vulnerability in human TNBC.
</description>
<pubDate>Tue, 26 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163153</guid>
<dc:date>2025-08-26T00:00:00Z</dc:date>
</item>
<item>
<title>Pre-Clinical Models of Heart Failure with Preserved Ejection Fraction: Advancing Knowledge for Device Based Therapies</title>
<link>https://hdl.handle.net/1721.1/163152</link>
<description>Pre-Clinical Models of Heart Failure with Preserved Ejection Fraction: Advancing Knowledge for Device Based Therapies
Langer, Nina; Escher, Andreas; Ozturk, Caglar; Stephens, Andrew F.; Roche, Ellen T.; Granegger, Marcus; Kaye, David M.; Gregory, Shaun D.
Heart failure with preserved ejection fraction (HFpEF) is a growing health problem worldwide, accounting for half of all heart failure cases. HFpEF patients present with diverse underlying causes and symptoms, making diagnosis and treatment challenging. Current pharmacological therapies are inadequate, while approved device-based therapies have shown limited success due to patient heterogeneity. This underscores the need for improved pre-clinical models, which are critical for guiding the design and development of effective therapeutic devices. This paper presents an overview of current pre-clinical HFpEF models, including in-silico, in-vitro, ex-vivo, and in-vivo approaches, aimed at advancing the understanding of HFpEF physiology and the development of device-based therapies. We examine each model's ability to replicate key HFpEF characteristics, discuss the models' respective strengths and limitations, and highlight their role in supporting the creation of clinically relevant solutions. Additionally, the potential of emerging advancements is explored.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163152</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>A new approach to plurals-of-politeness and their number agreement</title>
<link>https://hdl.handle.net/1721.1/163151</link>
<description>A new approach to plurals-of-politeness and their number agreement
Kaur, Gurmeet; Sinha, Yash
Plural DPs, which indicate politeness or honorification towards a singular referent, have received significant attention in the literature. Unlike regular plurals that always trigger plural agreement, these DPs, which we call plurals-of-politeness/PoPs, can trigger singular agreement on some probes in some languages. Moreover, the distribution of singular agreement is subject to certain constraints. Expanding the class of PoPs to include not only pronominals but also nominals, which are crosslinguistically rarer and have received relatively less attention, this paper offers a new analysis of agreement with PoPs. We propose a structure of PoPs, in which the pl feature in a PoP is embedded further inside the DP than the pl feature in a regular plural. The core idea is that a probe that can access the pl feature in a regular plural can sometimes fail to do so in a PoP, resulting in singular agreement. This analysis can derive all the constraints on singular agreement with PoPs, which existing accounts of agreement with PoPs are unable to do. Additionally, by examining nominal and pronominal PoPs together, we provide the first unified account of DP-internal and external agreement with PoPs.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163151</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Gallium and Indium Selective Sulfidation and Vapor Phase Transport from e-Waste Feedstocks</title>
<link>https://hdl.handle.net/1721.1/163150</link>
<description>Gallium and Indium Selective Sulfidation and Vapor Phase Transport from e-Waste Feedstocks
Benderly-Kremen, Ethan; Daehn, Katrin; Allanore, Antoine
Gallium (Ga) and indium (In) share similarities in their chemical behavior, their dilute presence in waste electronics (e-waste), and recycling rates close to 0% from such streams. Designing processes to extract gallium from LED chips and indium from LCD screens simultaneously reveals the potential and necessary distinctions for a flexible process based on elemental sulfur reactivity, which can be applied to both feedstocks. Whereas Ga- and In-compounds found in e-waste (gallium nitride, GaN; indium tin oxide, ‘ITO’) are recalcitrant to dissolution in aqueous feedstocks, the reaction with sulfur gas to form volatile sulfides may support their selective extraction from prepared e-waste. Process conditions for selective sulfidation are herein informed from thermodynamics and demonstrated experimentally. Vapor phase transport of the volatile sulfides is a powerful means to collect and enrich gallium and indium. Practical implementation likely calls for physical separation approaches to disassemble e-waste, remove excess material (epoxy, glass, metallic leads, and housing) from LED chips, and expose the ITO layer within LCD screens.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163150</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear conjugate gradient methods: worst-case convergence rates via computer-assisted analyses</title>
<link>https://hdl.handle.net/1721.1/163122</link>
<description>Nonlinear conjugate gradient methods: worst-case convergence rates via computer-assisted analyses
Das Gupta, Shuvomoy; Freund, Robert M.; Sun, Xu A.; Taylor, Adrien
We propose a computer-assisted approach to the analysis of the worst-case convergence of nonlinear conjugate gradient methods (NCGMs). These methods are known for their generally good empirical performance for large-scale optimization, while having relatively incomplete analyses. Using our computer-assisted approach, we establish novel complexity bounds for the Polak-Ribière-Polyak (PRP) and the Fletcher-Reeves (FR) NCGMs for smooth strongly convex minimization. In particular, we construct mathematical proofs that establish the first non-asymptotic convergence bound for FR (which is historically the first developed NCGM), and a much improved non-asymptotic convergence bound for PRP. Additionally, we provide simple adversarial examples on which these methods do not perform better than gradient descent with exact line search, leaving very little room for improvements on the same class of problems.
</description>
<pubDate>Thu, 22 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163122</guid>
<dc:date>2024-08-22T00:00:00Z</dc:date>
</item>
<item>
<title>A multi-modal network equilibrium model considering captive travelers and mode correlation</title>
<link>https://hdl.handle.net/1721.1/163121</link>
<description>A multi-modal network equilibrium model considering captive travelers and mode correlation
Wang, Guangchao; Song, Defeng; Qi, Hang; Zhou, Juanhua; He, Zhengbing
In making daily commuting trips, some travelers, called captive travelers, rely on one transport mode due to a lack of access to, or affordability of, other transport modes. To account for the effect of such captive travelers on network equilibrium performance, this paper proposes a multi-modal network equilibrium (MMNE) model that accounts for the captive travelers and the correlations between modes and between routes. First, a hybrid mode choice model is developed by integrating the dogit and nested logit (NL) models. The hybrid dogit–NL (DNL) model has smaller direct and cross elasticities than the NL model, alleviates the independence-from-irrelevant-alternatives property, and takes the dogit and NL modal splits as bounds. Second, the path-size logit (PSL) model is adopted for predicting travelers’ route choices with overlapping routes. The DNL–PSL MMNE model is formulated as a mathematical programming problem that admits an equivalent and unique solution. Then, a partial linearization algorithm with the Barzilai–Borwein (BB) step sizes is developed. The numerical results reveal that captive travelers lead to lower sensitivity toward transport policies and may cause higher network total travel time, while the perception of mode similarity may impair the overall attractiveness of modes with a high degree of similarity. These observations indicate that, to promote green transportation, policy efforts should be made to use or adjust the captivity structure and to produce diversified perceptions of and preferences for different green transport modes. The BB step sizes are suggested for low-travel-demand cases when solving the combined travel choice problems. Further, extensions of the DNL model with bundle captivities are discussed. The results of the paper help improve network equilibrium prediction and support transport policymaking.
</description>
<pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163121</guid>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Addressing Grid Convergence and Log-Layer Mismatch in Wall Modeled Large Eddy Simulations of Geophysical Flows Over Rough Surfaces and Canopies</title>
<link>https://hdl.handle.net/1721.1/163120</link>
<description>Addressing Grid Convergence and Log-Layer Mismatch in Wall Modeled Large Eddy Simulations of Geophysical Flows Over Rough Surfaces and Canopies
Shin, E. Y.; Yang, X. I. A.; Howland, M. F.
Wall modeled large eddy simulations are the primary scale-resolving method used to investigate boundary layer meteorology. Wall models are used to parameterize momentum, heat, and other exchanges at the surface to achieve computationally efficient predictions given the very high Reynolds numbers of planetary boundary layers and the importance of small-scales near the surface. However, wall modeled large eddy simulations can be contaminated by log-layer mismatch, where the prediction of wall shear stress (friction velocity) deviates from the intended value. It is not clear how this log-layer mismatch in boundary layers depends on parameters that represent unresolved roughness elements and on the computational setup. This study elucidates how log-layer mismatch depends on the roughness length, displacement distance, matching velocity filtering strength, and vertical grid resolution using 135 channel flow, 24 conventionally neutral boundary layer, and 12 truly neutral boundary layer wall modeled large eddy simulations. The results demonstrate two sources of log-layer mismatch. First, a spurious correlation between the friction velocity and the fluctuation of the matching velocity causes log-layer mismatch that increases with roughness length, displacement distance, and increasing grid resolution. This log-layer mismatch can be eliminated by filtering the matching velocity, but the filter timescale necessary to eliminate the error depends on the roughness parameters and grid resolution. Second, an additional source of log-layer mismatch is identified, depending on the displacement distance. This mechanism of log-layer mismatch is not alleviated by filtering the matching velocity. An analytical model of this log-layer mismatch mechanism is derived and validated against the large eddy simulations. 
The results demonstrate that the analytical model is able to predict the magnitude of this log-layer mismatch based on a priori information about the simulation to within the uncertainty of the von Kármán constant.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163120</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Effective last-mile delivery using reinforcement learning and social media-based traffic prediction in underdeveloped megacities</title>
<link>https://hdl.handle.net/1721.1/163119</link>
<description>Effective last-mile delivery using reinforcement learning and social media-based traffic prediction in underdeveloped megacities
Rabelo, Luis; Rincón-Guio, Cristian; Laynes, Valeria; Gutierrez-Franco, Edgar; Bhat, Vasanth; Zamora-Aguas, Juan; Elkamel, Marwen
This paper presents a framework for effective last-mile delivery in underdeveloped megacities by combining social media, machine learning, and reinforcement learning. Leveraging a Graph Convolutional Network and a Long Short-Term Memory model for traffic prediction, the framework incorporates multimodal data sources, such as social media sentiment analysis, to provide real-time insights into traffic dynamics. By framing the delivery problem as a Markov Decision Process, reinforcement learning dynamically adapts routing decisions to improve delivery efficiency, reduce delays, and minimize fuel consumption. A case study in Bogotá demonstrates the framework’s effectiveness in mitigating urban traffic challenges. This work highlights the transformative potential of integrating adaptive learning technologies to address urban logistics’ environmental, economic, and operational complexities. Future research will explore advanced methodologies, including multi-agent systems and transformer-based architectures, to further enhance scalability and adaptability in dynamic urban environments.
</description>
<pubDate>Sun, 17 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163119</guid>
<dc:date>2025-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>Spatiotemporally constrained 3D reconstruction from biplanar digital subtraction angiography</title>
<link>https://hdl.handle.net/1721.1/163118</link>
<description>Spatiotemporally constrained 3D reconstruction from biplanar digital subtraction angiography
Frisken, Sarah; Gopalakrishnan, Vivek; Chlorogiannis, David D.; Haouchine, Nazim; Cafaro, Alexandre; Golby, Alexandra J.; Wells III, William M.; Du, Rose
Purpose Our goal is to reconstruct 3D cerebral vessels from two 2D digital subtraction angiography (DSA) images acquired using a biplane scanner. This could provide intraoperative 3D imaging with 2–5× the spatial and 20× the temporal resolution of 3D magnetic resonance angiography, computed tomography angiography (CTA), or rotational DSA. Because many interventional radiology suites have biplane scanners, our method could be easily integrated into clinical workflows. Methods We present a constrained 3D reconstruction method that utilizes vessel centerlines, radii, and the flow of contrast agent through vessels from DSA. The reconstructed volume samples ‘vesselness’ at each voxel, i.e., its probability of containing a vessel. We present evaluation metrics which we used to optimize reconstruction parameters and evaluate our method on synthetic data. We provide preliminary results on clinical data. To handle clinical data, we developed a software tool for extracting vessel centerlines, radii, and contrast arrival times from clinical DSA. We provide an automated method for registering DSA to CTA, which allows us to compare reconstructed vessels with vessels extracted from CTA. Results Our method reduced reconstruction artifacts in vesselness volumes for both synthetic and clinical data. In synthetic DSA, where 3D ground-truth vessel centerlines are available, our constrained reconstruction method improved accuracy, selectivity, and Dice scores with two views compared to existing sparse reconstruction methods with up to 16 views. Conclusion Incorporating additional constraints into 3D reconstruction can successfully reduce artifacts introduced when a complex 3D structure like the brain vasculature is reconstructed from a small number of 2D views.
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163118</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing the Functionality of Immunoisolated Human SC‐βeta Cell Clusters through Prior Resizing</title>
<link>https://hdl.handle.net/1721.1/163116</link>
<description>Enhancing the Functionality of Immunoisolated Human SC‐βeta Cell Clusters through Prior Resizing
Bochenek, Matthew A; Walters, Ben; Zhang, Jingping; Fenton, Owen S; Facklam, Amanda; Kroneková, Zuzana; Pelach, Michal; Engquist, Elise N; Leite, Nayara C; Morgart, Alex; Lacík, Igor; Langer, Robert; Anderson, Daniel G
The transplantation of immunoisolated stem cell derived beta cell clusters (SC‐β) has the potential to restore physiological glycemic control in patients with type I diabetes. This strategy is attractive as it uses a renewable β‐cell source without the need for systemic immune suppression. SC‐β cells have been shown to reverse diabetes in immune compromised mice when transplanted as ≈300 µm diameter clusters into sites where they can become revascularized. However, immunoisolated SC‐β clusters are not directly revascularized and rely on slower diffusion of nutrients through a membrane. It is hypothesized that smaller SC‐β cell clusters (≈150 µm diameter), more similar to islets, will perform better within immunoisolation devices due to enhanced mass transport. To test this, SC‐β cells are resized into small clusters, encapsulated in alginate spheres, and coated with a biocompatible A10 polycation coating that resists fibrosis. After transplantation into diabetic immune competent C57BL/6 mice, the “resized” SC‐β cells plus the A10 biocompatible polycation coating induced long‐term euglycemia in the mice (6 months). After retrieval, the resized A10 SC‐β cells exhibited the least amount of fibrosis and enhanced markers of β‐cell maturation. The utilization of small SC‐β cell clusters within immunoprotection devices may improve clinical translation in the future.
</description>
<pubDate>Thu, 11 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163116</guid>
<dc:date>2024-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Drinkable in situ-forming tough hydrogels for gastrointestinal therapeutics</title>
<link>https://hdl.handle.net/1721.1/163115</link>
<description>Drinkable in situ-forming tough hydrogels for gastrointestinal therapeutics
Liu, Gary W; Pickett, Matthew J; Kuosmanen, Johannes LP; Ishida, Keiko; Madani, Wiam AM; White, Georgia N; Jenkins, Joshua; Park, Sanghyun; Feig, Vivian R; Jimenez, Miguel; Karavasili, Christina; Lal, Nikhil B; Murphy, Matt; Lopes, Aaron; Morimoto, Joshua; Fitzgerald, Nina; Cheah, Jaime H; Soule, Christian K; Fabian, Niora; Hayward, Alison; Langer, Robert; Traverso, Giovanni
Pills are a cornerstone of medicine but can be challenging to swallow. While liquid formulations are easier to ingest, they can neither localize therapeutics with excipients nor act as controlled-release devices. Here we describe drug formulations based on liquid in situ-forming tough (LIFT) hydrogels that bridge the advantages of solid and liquid dosage forms. LIFT hydrogels form directly in the stomach through sequential ingestion of a crosslinker solution of calcium and dithiol crosslinkers, followed by a drug-containing polymer solution of alginate and four-arm poly(ethylene glycol)-maleimide. We show that LIFT hydrogels robustly form in the stomachs of live rats and pigs, and are mechanically tough, biocompatible and safely cleared after 24 h. LIFT hydrogels deliver a total drug dose comparable to unencapsulated drug in a controlled manner, and protect encapsulated therapeutic enzymes and bacteria from gastric acid-mediated deactivation. Overall, LIFT hydrogels may expand access to advanced therapeutics for patients with difficulty swallowing.
</description>
<pubDate>Tue, 27 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163115</guid>
<dc:date>2024-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>AI‐Driven Defect Engineering for Advanced Thermoelectric Materials</title>
<link>https://hdl.handle.net/1721.1/163114</link>
<description>AI‐Driven Defect Engineering for Advanced Thermoelectric Materials
Fu, Chu‐Liang; Cheng, Mouyang; Hung, Nguyen Tuan; Rha, Eunbi; Chen, Zhantao; Okabe, Ryotaro; Carrizales, Denisse Córdova; Mandal, Manasi; Cheng, Yongqiang; Li, Mingda
Thermoelectric materials offer a promising pathway to directly convert waste heat to electricity. However, achieving high performance remains challenging due to intrinsic trade-offs between electrical conductivity, the Seebeck coefficient, and thermal conductivity, which are further complicated by the presence of defects. This review explores how artificial intelligence (AI) and machine learning (ML) are transforming thermoelectric materials design. Advanced ML approaches including deep neural networks, graph-based models, and transformer architectures, integrated with high-throughput simulations and growing databases, effectively capture structure-property relationships in a complex multiscale defect space and overcome the “curse of dimensionality”. This review discusses AI-enhanced defect engineering strategies such as composition optimization, entropy and dislocation engineering, and grain boundary design, along with emerging inverse design techniques for generating materials with targeted properties. Finally, it outlines future opportunities in novel physics mechanisms and sustainability, highlighting the critical role of AI in accelerating the discovery of thermoelectric materials.
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163114</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Intracellular proteomics and extracellular vesiculomics as a metric of disease recapitulation in 3D-bioprinted aortic valve arrays</title>
<link>https://hdl.handle.net/1721.1/163113</link>
<description>Intracellular proteomics and extracellular vesiculomics as a metric of disease recapitulation in 3D-bioprinted aortic valve arrays
Clift, Cassandra L; Blaser, Mark C; Gerrits, Willem; Turner, Mandy E; Sonawane, Abhijeet; Pham, Tan; Andresen, Jason L; Fenton, Owen S; Grolman, Joshua M; Campedelli, Alesandra; Buffolo, Fabrizio; Schoen, Frederick J; Hjortnaes, Jesper; Muehlschlegel, Jochen D; Mooney, David J; Aikawa, Masanori; Singh, Sasha A; Langer, Robert; Aikawa, Elena
In calcific aortic valve disease (CAVD), mechanosensitive valvular cells respond to fibrosis- and calcification-induced tissue stiffening, further driving pathophysiology. No pharmacotherapeutics are available to treat CAVD because of the paucity of (i) appropriate experimental models that recapitulate this complex environment and (ii) benchmarking novel engineered aortic valve (AV)–model performance. We established a biomaterial-based CAVD model mimicking the biomechanics of the human AV disease-prone fibrosa layer, three-dimensional (3D)–bioprinted into 96-well arrays. Liquid chromatography–tandem mass spectrometry analyses probed the cellular proteome and vesiculome to compare the 3D-bioprinted model versus traditional 2D monoculture, against human CAVD tissue. The 3D-bioprinted model highly recapitulated the CAVD cellular proteome (94% versus 70% of 2D proteins). Integration of cellular and vesicular datasets identified known and unknown proteins ubiquitous to AV calcification. This study explores how 2D versus 3D-bioengineered systems recapitulate unique aspects of human disease, positions multiomics as a technique for the evaluation of high throughput–based bioengineered model systems, and potentiates future drug discovery.
</description>
<pubDate>Wed, 28 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163113</guid>
<dc:date>2024-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Dual‐Wavelength Vat Photopolymerization With Dissolvable, Recyclable Support Structures</title>
<link>https://hdl.handle.net/1721.1/163112</link>
<description>Dual‐Wavelength Vat Photopolymerization With Dissolvable, Recyclable Support Structures
Diaco, Nicholas S; Thrasher, Carl J; Hughes, Max M; Zhou, Kevin A; Durso, Michael N; Yap, Saechow; Macfarlane, Robert J; Hart, A John
Vat photopolymerization (VP) additive manufacturing (AM) is valued for its speed, precision, and material versatility. However, its requirement for support structures limits printable geometries, complicates post-processing, and generates non-recyclable waste when typical thermoset resins are used. Here, a wavelength-selective resin system for VP that enables single-vat, multi-material printing with dissolvable supports is introduced. Exposure to visible light produces a rigid, dissolvable thermoplastic, while UV light forms a crosslinked thermoset resistant to dissolution. This process, termed selective solubility vat photopolymerization (SSVP), eliminates the geometric constraints imposed by conventional VP methods, facilitating the creation of complex objects with supports that are removable using green and food-safe solvents such as D-limonene and ethyl acetate, as well as mineral oil. Post-print heat treatment tunes crosslink density and solubility. Dissolved supports can be recycled into fresh resin and reprinted without mechanical property loss, offering a practical, scalable route to reducing waste. Additionally, SSVP provides spatial control of dissolution kinetics, enabling programmable 3D dissolution profiles. By enabling the integration of dissolvable and insoluble regions in a single print, SSVP sets the stage for fully automated and more sustainable AM workflows.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163112</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>Recent advances in nanoparticulate RNA delivery systems</title>
<link>https://hdl.handle.net/1721.1/163111</link>
<description>Recent advances in nanoparticulate RNA delivery systems
Witten, Jacob; Hu, Yizong; Langer, Robert; Anderson, Daniel G
Nanoparticle-based RNA delivery has shown great progress in recent years with the approval of two mRNA vaccines for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and a liver-targeted siRNA therapy. Here, we discuss the preclinical and clinical advancement of new generations of RNA delivery therapies along multiple axes. Improvements in cargo design such as RNA circularization and data-driven untranslated region optimization can drive better mRNA expression. New materials discovery research has driven improved delivery to extrahepatic targets such as the lung and splenic immune cells, which could lead to pulmonary gene therapy and better cancer vaccines, respectively. Other organs and even specific cell types can be targeted for delivery via conjugation of small molecule ligands, antibodies, or peptides to RNA delivery nanoparticles. Moreover, the immune response to any RNA delivery nanoparticle plays a crucial role in determining efficacy. Targeting increased immunogenicity without induction of reactogenic side effects is crucial for vaccines, while minimization of immune response is important for gene therapies. New developments have addressed each of these priorities. Last, we discuss the range of RNA delivery clinical trials targeting diverse organs, cell types, and diseases and suggest some key advances that may play a role in the next wave of therapies.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163111</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>On-patient medical record and mRNA therapeutics using intradermal microneedles</title>
<link>https://hdl.handle.net/1721.1/163110</link>
<description>On-patient medical record and mRNA therapeutics using intradermal microneedles
Han, Jooli; Kanelli, Maria; Liu, Yang; Daristotle, John L; Pardeshi, Apurva; Forster, Timothy A; Karchin, Ari; Folk, Brandon; Murmann, Lukas; Tostanoski, Lisa H; Carrasco, Sebastian E; Alsaiari, Shahad K; Wang, Erika Yan; Tran, Khanh; Zhang, Linzixuan; Eshaghi, Behnaz; Levy, Lauren; Pyon, Sydney; Sloane, Charles; Lin, Stacey Qiaohui; Lau, Alicia; Perkinson, Collin F; Bawendi, Moungi G; Barouch, Dan H; Durand, Frédo; Langer, Robert; Jaklenec, Ana
Medical interventions often require timed series of doses, thus necessitating accurate medical record-keeping. In many global settings, these records are unreliable or unavailable at the point of care, leading to less effective treatments or disease prevention. Here we present an invisible-to-the-naked-eye on-patient medical record-keeping technology that accurately stores medical information in the patient skin as part of microneedles that are used for intradermal therapeutics. We optimize the microneedle design for both a reliable delivery of messenger RNA (mRNA) therapeutics and the near-infrared fluorescent microparticles that encode the on-patient medical record-keeping. Deep learning-based image processing enables encoding and decoding of the information with excellent temporal and spatial robustness. Long-term studies in a swine model demonstrate the safety, efficacy and reliability of this approach for the co-delivery of on-patient medical record-keeping and the mRNA vaccine encoding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This technology could help healthcare workers make informed decisions in circumstances where reliable record-keeping is unavailable, thus contributing to global healthcare equity.
</description>
<pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163110</guid>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Insights Into Summertime Surface Ozone Formation From Diurnal Variations in Formaldehyde and Nitrogen Dioxide Along a Transect Through New York City</title>
<link>https://hdl.handle.net/1721.1/163109</link>
<description>Insights Into Summertime Surface Ozone Formation From Diurnal Variations in Formaldehyde and Nitrogen Dioxide Along a Transect Through New York City
Tao, Madankui; Fiore, Arlene M; Karambelas, Alexandra; Miller, Paul J; Valin, Lukas C; Judd, Laura M; Tzortziou, Maria; Whitehill, Andrew; Teora, Amanda; Tian, Yuhong; Civerolo, Kevin L; Tong, Daniel; Ma, Siqi; Adamo, Susana B; Holloway, Tracey
Estimating tropospheric ozone (O3) production from observations is challenging but possible given the close coupling of O3 with formaldehyde (HCHO) and nitrogen dioxide (NO2), two remotely sensed air pollutants. The previous reliance on once-daily satellite overpasses highlights the need to study diurnal changes and surface-column relationships. Using surface observations, Pandora spectrometer retrievals, and a high-resolution (1.33 km) air quality model (WRF-CMAQ), we characterize diurnal patterns of HCHO and NO2 at seven locations along an upwind-downwind pathway through New York City during June–August 2018. Diurnal patterns of limited surface HCHO measurements suggest biogenic emission influence, while a bimodal surface NO2 pattern indicates the impact of local anthropogenic nitrogen oxides emissions. Details of these patterns vary by site: an afternoon NO2 spike at New Haven (CT) indicates traffic emissions, while a delayed daily HCHO peak at Westport (CT) relative to other sites likely reflects sea breeze dynamics. Peak column concentrations generally lag surface peaks by about four hours, occurring at 9–10 a.m. for morning NO2 (from Pandora and WRF-CMAQ) and around 4 p.m. for midday HCHO (from WRF-CMAQ). TROPOMI overpass time at 1:30 p.m. misses peak column HCHO and NO2 concentrations. A box model (F0AM) constrained with site-level observations and WRF-CMAQ fields indicates 1–9 ppb hr−1 higher noontime local O3 production rates on three sets of paired high- versus mid-to-low-O3 days. F0AM sensitivity analyses on these six days suggest a predominantly transitional O3 formation regime at urban and downwind sites, differing at some sites from the NOx-saturated regime diagnosed for summertime average conditions via the weekday-weekend effect.
</description>
<pubDate>Mon, 12 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163109</guid>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Seeing the Fictional</title>
<link>https://hdl.handle.net/1721.1/163108</link>
<description>Seeing the Fictional
Khoo, Justin
When we see a movie or a play, do we see the fictional entities and events depicted? On the one hand, it seems incredibly natural to think we do. For instance, it seems obvious that one thing that differentiates Smith, who watches Star Wars, from Bob, who merely reads the novelization of Star Wars, is that Smith, but not Bob, has seen Darth Vader kill Obi-Wan Kenobi. Yet, no philosophers working on fiction think this is literally true. And they have good reasons to be skeptical. For, if you have seen Darth Vader kill Obi-Wan Kenobi, then it seems to follow that Darth Vader must have killed Obi-Wan Kenobi, in which case, it follows that both were at one point living, flesh-and-blood entities. But if Darth Vader is a flesh-and-blood being, then he must be spatiotemporally located, in which case, where is he? In this paper, I argue that we do in fact literally see (and hear) fictional entities when we see films. I do so in three stages. First, I argue against various error theories that attempt to account for the intuitions that we do see fictional entities in film. Then, I sketch a metaphysics of fictional entities, which vindicates our genuinely seeing them. Finally, I explore some of the interesting controversies and objections raised to this ontology of the fictional.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163108</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Why Schonland Failed in His Search for Runaway Electrons From Thunderstorms</title>
<link>https://hdl.handle.net/1721.1/163107</link>
<description>Why Schonland Failed in His Search for Runaway Electrons From Thunderstorms
Chilingarian, A; Williams, E; Hovsepyan, G; Mkrtchyan, H
B.F.J. Schonland, advised and encouraged by C.T.R. Wilson, made two unsuccessful searches for runaway electrons from thunderstorms in the 1930s. These findings stand in marked contrast with research results over the last decade and ironically set this field of research back many decades. Schonland's lack of success is traced to gamma ray attenuation in the atmosphere above Johannesburg (1,780 m MSL) and to his restriction to nine thunderstorms.
</description>
<pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163107</guid>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Oral Delivery Systems for Nutraceuticals</title>
<link>https://hdl.handle.net/1721.1/163106</link>
<description>Advanced Oral Delivery Systems for Nutraceuticals
Yang, Xin; Zhang, Linzixuan; Zheng, Zhiling; Langer, Robert; Jaklenec, Ana
Oral delivery is the most preferred route for nutraceuticals due to its convenience and high patient compliance. However, bioavailability is often compromised by poor solubility, instability, and first‐pass metabolism in the gastrointestinal tract. This review examines current and emerging oral delivery platforms designed to overcome these barriers and enhance nutraceutical efficacy. Traditional carriers (proteins, lipids, and carbohydrates) are explored first, with their delivery mechanisms and limitations highlighted. Advancements in material science have led to novel platforms such as biodegradable polymers, metal–organic frameworks (MOFs), metal–polyphenol networks (MPNs), and 3D printing technologies. Biodegradable polymers improve stability and enable controlled release of bioactives. MOFs offer high surface area and tunable porosity for encapsulating and protecting sensitive compounds. MPNs provide biocompatible, stimuli‐responsive systems for targeted nutrient delivery. Meanwhile, 3D printing facilitates the fabrication of personalized delivery systems with precise control over composition and release kinetics, especially when integrated with artificial intelligence (AI) for precision nutrition. By comparing traditional and next‐generation strategies, this review outlines key design principles for optimizing oral delivery systems. The transformative potential of these innovations is underscored to improve the bioavailability and therapeutic outcomes of nutraceuticals, ultimately advancing personalized and targeted nutrition solutions.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163106</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Voices of Nanomedicine: Blueprint Guidelines for Collaboration in Addressing Global Unmet Medical Needs</title>
<link>https://hdl.handle.net/1721.1/163095</link>
<description>Voices of Nanomedicine: Blueprint Guidelines for Collaboration in Addressing Global Unmet Medical Needs
Prasad, Rajendra; Ghosh, Arnab; Patel, Vinay; Peng, Berney; Mendes, Bárbara B; Win, Eaint Honey Aung; Delogu, Lucia Gemma; Wong, Joyce Y; Pischel, Kristin J; Bellare, Jayesh R; Bar-Shir, Amnon; Thakor, Avnesh S; Parak, Wolfgang J; Bhujwalla, Zaver M; Zhang, Yu Shrike; Kommineni, Nagavendra; Rotello, Vince M; Cai, Weibo; Lammers, Twan; Odom, Teri W; Padmanaban, Govindarajan; Peer, Dan; Lovell, Jonathan F; Srivastava, Rohit; Langer, Robert; Conde, João
The “Voices” under this Perspective underline the importance of interdisciplinary collaboration and partnerships across several disciplines, such as medical science and technology, medicine, bioengineering, and computational approaches, in bridging the gap between research, manufacturing, and clinical applications. Effective communication is key to bridging team gaps, enhancing trust, and resolving conflicts, thereby fostering teamwork and individual growth toward shared goals. Drawing from the success of the COVID-19 vaccine development, we advocate the application of similar collaborative models in other complex health areas such as nanomedicine and biomedical engineering. The role of digital technology and big data in healthcare innovation is highlighted along with the necessity for specialized education in collaborative practices. This approach is decisive in advancing healthcare solutions, leading to improved treatment and patient outcomes.
</description>
<pubDate>Fri, 10 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163095</guid>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Biological Cohesion of Sediment Bed Diminishes Net Deposition of Fine Non‐Cohesive Particles Over Bare Bed and Within Model Emergent Canopies</title>
<link>https://hdl.handle.net/1721.1/163094</link>
<description>Biological Cohesion of Sediment Bed Diminishes Net Deposition of Fine Non‐Cohesive Particles Over Bare Bed and Within Model Emergent Canopies
Park, Hyoungchul; Nepf, Heidi
This study investigated how Extracellular Polymeric Substances (EPS) produced by microorganisms influenced particle deposition to a sediment bed. Particle deposition decreased with increasing EPS, because the EPS filled the pore spaces between individual sediment grains, reducing the porosity of the sediment bed. With decreased porosity, newly deposited particles could not settle in between the grains of the bed, so that particles were more exposed to the flow, making resuspension easier and leading to decreased deposition. For the same level of bio‐cohesion, increasing the near‐bed turbulence diminished deposition. For the vegetated channel, as bio‐cohesion increased, particles were easily resuspended around individual stems due to the enhanced exposure effect, expanding the regions where deposition was excluded and leading to a more heterogeneous spatial distribution of deposition. The effect of EPS was negligible for the smallest velocity magnitude, for which all particles deposited, and for the largest velocity magnitude, for which most particles were resuspended.
</description>
<pubDate>Mon, 12 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163094</guid>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Record‐High Ozone in the Austral Mid‐Latitude Tropopause Region Driven by Dynamical and Chemical Effects of the 2019 Sudden Stratospheric Warming</title>
<link>https://hdl.handle.net/1721.1/163093</link>
<description>Record‐High Ozone in the Austral Mid‐Latitude Tropopause Region Driven by Dynamical and Chemical Effects of the 2019 Sudden Stratospheric Warming
Zhang, Selena; Solomon, Susan; Zhang, Jun; Kinnison, Douglas
In January 2020, tropopause‐level ozone in the austral mid‐latitudes was the highest ever observed in the available Microwave Limb Sounder data record since 2004. Two extreme events preceded this anomaly: the Australian Black Summer fires and the 2019 sudden stratospheric warming (SSW), raising the question of how these disruptions influenced Southern Hemisphere ozone. Here, we investigate the dynamical and chemical contributions to the ozone anomaly using a chemistry‐climate model and satellite observations. We find that polar ozone‐enriched air transported downward by the SSW later spread equatorward. Such transport, together with photochemical ozone production from emissions of wildfires (fueled by dry and hot conditions previously attributed to the SSW), increased tropopause‐level ozone by up to 30 ppb, with transport as the dominant factor (around 80%). While chemical ozone production from wildfires is well‐recognized, our results highlight that SSWs can greatly influence mid‐latitude ozone through dynamical effects.
</description>
<pubDate>Sat, 10 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163093</guid>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>Pure Event Semantics</title>
<link>https://hdl.handle.net/1721.1/163092</link>
<description>Pure Event Semantics
Schwarzschild, Roger
In a pure event semantics for natural language, the domain of quantification and predication is limited to events and states. I offer pure event semantic analyses of several phenomena, some of which have not been treated before in formal semantics. In the pure event semantics sketched in the second section, nouns are state predicates, and this provides the starting point for the analyses. The phenomena involve grammatical number, the mass-count distinction, adjectival modification, count adjectives, diminutives, lexical plurals, duals, and mass gender. In the conclusion, there is a brief discussion of potential metaphysical or psychological ramifications of doing semantics this way.
</description>
<pubDate>Wed, 28 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163092</guid>
<dc:date>2025-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>The Grice Is Right: Grice's Non‐Cooperation Problem and the Structure of Conversation</title>
<link>https://hdl.handle.net/1721.1/163091</link>
<description>The Grice Is Right: Grice's Non‐Cooperation Problem and the Structure of Conversation
Berstler, Sam
H. P. Grice seemed to rest his theory of conversational implicature on the assumption that speakers aim to cooperatively exchange information with each other. In the real world, speakers often don’t. Does one of the most influential theories in 20th-century philosophy of language rest on a mistake? Yes, but not in the way that philosophers have thought. I argue that Grice should have rested his theory on a different assumption: that speakers aim to appear to aim to cooperatively exchange information with each other. This proposal dissolves Grice’s Non-Cooperation Problem but preserves Grice’s central insights about the nature of conversational implicatures. More generally, it enables the Gricean to illuminate the structure of many non-cooperative or otherwise “non-ideal” conversations.
</description>
<pubDate>Mon, 26 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163091</guid>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>My Struggles and Dreams as a Chemical Engineer</title>
<link>https://hdl.handle.net/1721.1/163090</link>
<description>My Struggles and Dreams as a Chemical Engineer
Langer, Robert
My career has not been straightforward. Although I am a chemical engineer, and I'm proud of that, I took a path from chemistry and engineering to one that also involved experimental biology and medicine. This was very unusual many decades ago. In so doing, I met with rejection and ridicule early in my career. However, by going down that path, I was able to make discoveries and inventions that I hope have saved and improved lives, and I've been able to train a great number of people who are going down the road I began traveling over many years ago.
</description>
<pubDate>Mon, 03 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163090</guid>
<dc:date>2025-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>An In Situ Curing, Shear‐Responsive Biomaterial Designed for Durable Embolization of Microvasculature</title>
<link>https://hdl.handle.net/1721.1/163089</link>
<description>An In Situ Curing, Shear‐Responsive Biomaterial Designed for Durable Embolization of Microvasculature
Pham, Quynh P; Groom, Jeffrey V; Sadasivan, Chander; Fiorella, David J; Madoff, David C; Guo, Lee‐Jae; Fornaciari, Michael; Guertin, Courtney; Wiltsey, Craig; Core, Lee; Merlo, Jonathan; Wustenberg, William; Virmani, Renu; Arthur, Adam S; Langer, Robert S; Whitesides, George M; Sharma, Upma
Endovascular embolization is a minimally‐invasive technique whereby blood vessels supplying pathological structures are selectively occluded with various embolic agents. In many scenarios, it is desirable for the embolic to distally penetrate to the level of the microvasculature, which maximizes devascularization. Existing agents exhibit inconsistent distal penetration and have other limitations including tendency for proximal reflux, patient pain during infusion, lack of fluoroscopic radiopacity, potential for catheter adhesion, susceptibility to recanalization, and other usability challenges. NeoCast is an in situ curing, solvent‐free, non‐adhesive biomaterial composed of polydimethylsiloxane, bismuth trioxide, and fumed silica that possesses shear‐responsive properties enabling manual injectability through commercially‐available microcatheters with large and small diameter lumens. Here, embolization performance with and without flow arrest, in both arterial and venous preclinical anatomies is reported. NeoCast reproducibly achieves a rate of distal penetration with microvascular occlusion that is superior to existing agents, exhibits excellent fluoroscopic visibility, and provides durable occlusion. There is mild inflammation when NeoCast is infused into blood vessels and absence of neurotoxicity when implanted directly into brain tissue. The engineered NeoCast material is poised to become a next‐generation, liquid embolic agent for applications in which distal microvascular occlusion is desired.
</description>
<pubDate>Tue, 11 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163089</guid>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>A constitutive neural network for incompressible hyperelastic materials</title>
<link>https://hdl.handle.net/1721.1/163088</link>
<description>A constitutive neural network for incompressible hyperelastic materials
Lee, Sanghee; Bathe, Klaus-Jürgen
We propose a B-spline-based constitutive neural network to model the mechanical behavior of incompressible isotropic materials. The theoretical foundation of this network is the Sussman-Bathe model which interpolates tension–compression test data points and recovers the strain energy function. Our neural network uses regression to self-optimize the knot configurations of the B-splines and to determine a twice differentiable curve of the material response that is closely aligned with the given data points. We address datasets displaying physically complicated behaviors. Through the patch test validation of the constitutive model and illustrative example solutions, we highlight the flexibility inherent in spline-based models and the automated approximation capabilities enabled by neural networks.
</description>
<pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163088</guid>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Bounds on the Ground State Energy of Quantum p-Spin Hamiltonians</title>
<link>https://hdl.handle.net/1721.1/163087</link>
<description>Bounds on the Ground State Energy of Quantum p-Spin Hamiltonians
Anschuetz, Eric R.; Gamarnik, David; Kiani, Bobak T.
We consider the problem of estimating the ground state energy of quantum p-local spin glass random Hamiltonians, the quantum analogues of widely studied classical spin glass models. Our main result shows that the maximum energy achievable by product states has a well-defined limit E*_product (for even p) as n → ∞, equal to 2 log p in the limit of large p. This value is interpreted as the maximal energy of a much simpler so-called Random Energy Model, widely studied in the setting of classical spin glasses. The proof that the limit exists follows from an extension of Fekete’s Lemma after we demonstrate near super-additivity of the (normalized) quenched free energy. The proof of the value follows from a second moment method on the number of states achieving a given energy when restricting to an ϵ-net of product states. Furthermore, we relate the maximal energy achieved over all states to a p-dependent constant γ_p, which is defined by the degree of violation of a certain asymptotic dependence ansatz over graph matchings. We show that the maximal energy E*_p achieved over all states in the limit of large n is at most γ_p · E*_product. We also prove, using Lindeberg’s interpolation method, that the limiting E*_p is robust with respect to the choice of the randomness and, for instance, also applies to the case of sparse random Hamiltonians. This robustness in the randomness extends to a wide range of random Hamiltonian models, including SYK and random quantum max-cut.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163087</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Riemannian Adaptive Regularized Newton Methods with Hölder Continuous Hessians</title>
<link>https://hdl.handle.net/1721.1/163086</link>
<description>Riemannian Adaptive Regularized Newton Methods with Hölder Continuous Hessians
Zhang, Chenyu; Jiang, Rujun
This paper presents strong worst-case iteration and operation complexity guarantees for Riemannian adaptive regularized Newton methods, a unified framework encompassing both Riemannian adaptive regularization (RAR) methods and Riemannian trust region (RTR) methods. We comprehensively characterize the sources of approximation in second-order manifold optimization methods: the objective function’s smoothness, the retraction’s smoothness, and the subproblem solver’s inexactness. Specifically, for a function with a μ-Hölder continuous Hessian, when equipped with a retraction featuring a ν-Hölder continuous differential and a θ-inexact subproblem solver, both RTR and RAR with (2+α)-regularization (where α = min{μ, ν, θ}) locate an (ϵ, ϵ^{α/(1+α)})-approximate second-order stationary point within at most O(ϵ^{-(2+α)/(1+α)}) iterations and at most Õ(ϵ^{-(4+3α)/(2(1+α))}) Hessian-vector products with high probability. These complexity results are novel and sharp, and reduce to an iteration complexity of O(ϵ^{-3/2}) and an operation complexity of Õ(ϵ^{-7/4}) when α = 1.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163086</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of Die Bearing Geometry on Extrudability of High-Strength AA6082 Alloy with Cu</title>
<link>https://hdl.handle.net/1721.1/163085</link>
<description>Effect of Die Bearing Geometry on Extrudability of High-Strength AA6082 Alloy with Cu
Wang, Xiaoying; Khan, Muhammad S.; Wells, Mary A.; Poole, Warren J.; Parson, Nick
This study investigated the impact of die bearing geometry on the surface cracking behavior of a high-strength AA6xxx alloy. Experimental and numerical methods were employed, along with differential scanning calorimetry tests to determine the material’s solidus temperature. Four different die geometries were used in both the extrusion trial and the simulation. Extrusion trials were conducted for each die geometry over a range of extrusion speeds, and the resulting surface defects were examined using SEM. The findings indicate that die bearing geometry significantly affects surface morphology and crack occurrence. Choked dies enabled crack-free extrusion at higher speeds, particularly a 12 mm choked bearing with a 1° angle, outperforming a 25 mm flat bearing and zero-bearing die. The 35 mm choked bearing achieved crack-free extrusion even at maximum extrusion speed, yielding smoother surfaces than the other dies. Numerical simulations demonstrated the differences in stress states using different die bearing geometries, showing that the choked bearings alter the stress state at the die corner to cause a transition from high tensile stress to lower tensile or compressive stress. The extrusion limit diagrams for different die bearings were also constructed based on the extrusion trial data to provide guidance for choosing appropriate extrusion parameters for future studies. This study adds a valuable contribution to the existing literature by shedding light on the role of die bearing geometry in controlling surface morphology and surface crack formation, providing important insights that can be used to optimize the extrusion process.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163085</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Single cells are compactly and accurately described as fractional Kelvin-Voigt materials</title>
<link>https://hdl.handle.net/1721.1/163084</link>
<description>Single cells are compactly and accurately described as fractional Kelvin-Voigt materials
Das, Mohua; Waeterloos, Jarno L.; Clasen, Christian; McKinley, Gareth H.
The mechanobiology of single cells plays a crucial role in various biological processes, including embryonic development, cancer treatment, and wound healing. This study highlights the use of the fractional Kelvin-Voigt model (FKVM)—a viscoelastic model consisting of two Scott Blair elements in parallel—to compactly and accurately characterize single-cell rheology. Unlike traditional power law models, which primarily capture the key features of the mechanical response at long timescales, the FKVM effectively captures both short- and long-timescale mechanical responses with a minimal number of constitutive parameters. Experimental small-amplitude oscillatory shear (SAOS) data for dividing canine kidney cells, creep data of human K562 erythroleukemic cells, and creep recovery data of blastomere cytoplasm are all analyzed to showcase the accuracy and versatility of the FKVM. Additionally, for the first time, the continuous relaxation and retardation spectra corresponding to the fractional differential formulation of the FKVM are derived. These results establish a comprehensive framework for predictive analysis of single-cell rheology in both the time and frequency domains.
</description>
<pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163084</guid>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>Gray matter abnormalities in sight deprivation and sight restoration</title>
<link>https://hdl.handle.net/1721.1/163083</link>
<description>Gray matter abnormalities in sight deprivation and sight restoration
Pedersini, Caterina A.; Fracasso, Alessio; Dogar, Amna; Rokers, Bas; Sinha, Pawan
Blindness provides a unique model for investigating brain plasticity in response to sensory deprivation. While structural changes in both gray and white matter have been widely documented, particularly in cases of early or congenital visual deprivation, gray matter studies have traditionally focused on cortical thickness, often finding cortical thickening in posterior regions. However, other aspects of gray matter integrity, such as cortical myelin content, remain underexplored. In this study, we examined the effects of visual deprivation on cortical structure in a cohort of early blind individuals who received eye surgery during adolescence, expanding beyond conventional measures to include cortical thickness, curvature, and T1-weighted signal intensity. This multi-faceted approach offers a more comprehensive view of cortical adaptations to early sensory deprivation. While blindness offers valuable insights into sensory-driven brain plasticity, an intriguing and unresolved question is whether structural plasticity reverses after sight restoration, enabling typical visual processing circuits to develop despite the initial period of deprivation. To address this, we assessed the effect of sight-recovering eye surgery on gray matter changes. Critically, individuals in this cohort received surgery after the closure of the sensitive period for visual development. We did not find evidence of gray matter changes after surgery. However, in a previous study conducted on the same cohort, we reported that notable plasticity in white matter emerged in this same population. These results suggest that white matter may potentially serve as a biomarker of structural plasticity following sight restoration, even beyond the sensitive developmental window.
</description>
<pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163083</guid>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Efficacy and Safety of Toludesvenlafaxine Hydrochloride Sustained‐Release Tablets in Depression With Anhedonia: A Single‐Arm, Multicenter Clinical Study</title>
<link>https://hdl.handle.net/1721.1/163082</link>
<description>Efficacy and Safety of Toludesvenlafaxine Hydrochloride Sustained‐Release Tablets in Depression With Anhedonia: A Single‐Arm, Multicenter Clinical Study
Wang, San-wang; Mi, Wei-feng; Hao, Xiao-nan; Liu, Xiao-xing; Wen, Xin; Zhao, Min; Jiang, Hai-feng; Wang, Wen-zheng; Li, Tao; Tan, Zhong-Lin; Chen, Song; Lv, Wen; Ning, Yu-ping; Zhou, Yan-ling; Chen, Ying-mei; Tang, Xiang-dong; Li, Bin; Liu, Yang; Ma, Xian-cang; Dong, Ying–ying; Chen, Yun-chun; Wang, Hui-ling; Huang, Yong-lan; Zhang, Hua; Lu, Lin
Toludesvenlafaxine hydrochloride sustained-release tablets, China’s first independently developed Class 1 innovative chemical drug for the treatment of depression and a new molecular entity with independent intellectual property rights, are a novel triple reuptake inhibitor (TRI) with specific target selectivity for serotonin (5-HT), norepinephrine (NE), and dopamine (DA). This single-arm, multicenter clinical study aimed to evaluate the efficacy and safety of toludesvenlafaxine in alleviating anhedonia symptoms in patients with major depressive disorder (MDD). A total of 123 patients aged 18–65 years were enrolled between April 2023 and April 2024 and received an 8-week treatment with toludesvenlafaxine sustained-release tablets (80–160 mg/day). The primary efficacy endpoint was the change in the total score of the Dimensional Anhedonia Rating Scale (DARS) at weeks 2, 4, and 8. Significant improvements in DARS scores were observed, with mean changes from baseline of 8.4 (95% CI [6.4, 10.4], p &lt; 0.0001), 14.1 (95% CI [12.0, 16.2], p &lt; 0.0001), and 20.4 (95% CI [18.0, 22.9], p &lt; 0.0001), respectively. Additionally, after 8 weeks of treatment, plasma levels of neurotrophic factors, including mature brain-derived neurotrophic factor (mBDNF) (t = 28.78, p &lt; 0.0001), pro-BDNF (t = 27.71, p &lt; 0.0001), and vascular endothelial growth factor (VEGF) (t = 31.07, p &lt; 0.0001), were significantly increased, and the plasma level of IGF-1 was not significantly changed (t = 0.35, p = 0.7269). No association was found between the percentage of changes in neurotrophic factors and the percentage of symptom improvements. Toludesvenlafaxine was generally well-tolerated, with treatment-emergent adverse events (AEs) (TEAEs) reported in 83.7% of participants and treatment-related AEs (TRAEs) in 76.4%.
These findings indicate that toludesvenlafaxine hydrochloride sustained-release tablets are safe, well-tolerated, and effective in alleviating anhedonia symptoms in patients with depression.
</description>
<pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163082</guid>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Is Deuterium Sequestering by Reactive Carbon Atoms an Important Mechanism to Reduce Deuterium Content in Biological Water?</title>
<link>https://hdl.handle.net/1721.1/163081</link>
<description>Is Deuterium Sequestering by Reactive Carbon Atoms an Important Mechanism to Reduce Deuterium Content in Biological Water?
Seneff, Stephanie; Nigh, Greg; Kyriakopoulos, Anthony M.
Deuterium is a natural heavy isotope of hydrogen, having a neutron as well as a proton. Deuterium disrupts ATP synthesis in mitochondria, causing increased production of reactive oxygen species and reduced synthesis of ATP. Gut microbes likely play a significant role in providing deuterium-depleted short-chain fatty acids (SCFAs) to human colonocytes through hydrogen gas recycling. The production of deuterium-depleted (deupleted) nutrients necessarily leaves behind deuterium-enriched water, unless there is a process that can sequester deuterium in small molecules that are excreted through the feces. Here, we provide evidence that a small number of classes of uniquely structured carbon-nitrogen rings and bis-allylic carbon atoms in certain biologically active small molecules may play a crucial role in sequestering deuterium for export into feces or urine. Specifically, we have identified the imidazole ring present in histidine, histamine, and microbial derivatives of histidine, the tetraterpenoid lutein, bilirubin and the derivatives urobilinogen and stercobilinogen produced by gut microbes, and the bis-allylic carbons in polyunsaturated fatty acids as likely candidates for sequestering deuterium and thereby reducing the deuterium levels in the water-based medium. Normally, carbon atoms never exchange their bound protons with deuterons from the medium, but all the above classes of molecules are important exceptions to this rule, as has been shown experimentally.
</description>
<pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163081</guid>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Surrogate-Assisted Adaptive Experimentation for Fused Filament Fabrication Process Optimization</title>
<link>https://hdl.handle.net/1721.1/163080</link>
<description>Surrogate-Assisted Adaptive Experimentation for Fused Filament Fabrication Process Optimization
Mojumder, Satyajit; Liao, Shuheng; Liu, Wing K.
Fused Filament Fabrication (FFF) is an advanced manufacturing process that requires precise control of multiple parameters, including nozzle temperature, print speed, and layer height. Due to the complexity of this high-dimensional process design space, experimental evaluations are often constrained. A key challenge in FFF is understanding how these parameters influence print quality and identifying optimal process conditions efficiently. This study addresses this challenge by developing a physics-based thermal model for FFF, implemented using a graphics processing unit-accelerated finite element method. The model is calibrated and validated against experimental thermal data for printing polylactic acid (PLA). It is then used to investigate the effects of nozzle temperature, print speed, bed temperature, and layer thickness on print quality by developing a cooling rate metric. A series of simulations is conducted within the process window using the physics-based model, and the resulting data are analyzed with SHapley Additive exPlanations to understand the influence of process parameters on print quality. The results indicate that layer height is the most critical factor affecting the quality of tensile samples. To enhance process optimization, a surrogate model is trained and optimized using data generated from the physics-based model, enabling the identification of an optimal processing window for PLA. By combining physics-based and data-driven modeling, this approach accelerates thermal prediction in the FFF process, facilitating the study of high-dimensional design spaces and the optimization of material-specific printing parameters. The proposed methodology provides a scalable framework for improving the efficiency and quality of extrusion-based additive manufacturing processes, demonstrating its potential for broader applications in process optimization.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163080</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Origins and Alteration of Ediacaran Carbonates Recording the Shuram Excursion in Oman</title>
<link>https://hdl.handle.net/1721.1/163079</link>
<description>Origins and Alteration of Ediacaran Carbonates Recording the Shuram Excursion in Oman
Bergmann, Kristin D; Osburn, Magdalena R; Anderson, Noah T; Hayhow, Claire; Wilcots, Julia; Cantine, Marjorie D; Fischer, Woodward W; Bonifacie, Magali
The Shuram excursion is the largest known negative carbon isotope excursion in Earth's history. Recognized globally, it follows the Ediacaran Gaskiers glaciation and precedes a marked increase in the diversity and complexity of the earliest macroscopic multicellular organisms in the fossil record. A key question is whether this excursion reflects a primary perturbation to the carbon cycle, which would provide crucial insights into the environmental conditions shaping the earliest animals, or whether it is largely an artifact of later diagenetic alteration. To evaluate the extent of diagenesis in these rocks and constrain how much of the excursion reflects a primary signal, we investigate the sedimentology and geochemistry of carbonate strata in Oman using a variety of techniques spanning multiple spatial and temporal scales. Our multi-faceted analysis identifies and characterizes four modes of diagenetic alteration, with sediment-buffered conditions and authigenic carbonate precipitation as the dominant processes. However, the degree of alteration is insufficient to account for the range of marine sedimentologic and geochemical trends across the carbon isotope excursion. This suggests that, even with evidence of diagenesis, the rocks preserve a measurable record of changing conditions in both terrestrial and marine environments, offering unique insights into Earth's systems during a pivotal time in early animal evolution.
</description>
<pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163079</guid>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Preeclampsia is Associated with Altered Expression of Ferroptosis Biomarkers in Placental but not Maternal Vasculature</title>
<link>https://hdl.handle.net/1721.1/163078</link>
<description>Preeclampsia is Associated with Altered Expression of Ferroptosis Biomarkers in Placental but not Maternal Vasculature
Ng, Shu-Wing; Ng, Allen C.; Ng, Michelle C.; Ng, Shu-Kay; Arcuri, Felice; Genega, Elizabeth M.; Watkins, Jaclyn C.; Roberts, Drucilla J.; House, Michael D.; O’Tierney-Ginn, Perrie F.; Jacobsen, Daniel P.; Staff, Anne C.; Norwitz, Errol R.
Ferroptosis, an iron-dependent mechanism of programmed cell death, has been implicated in the pathogenesis of preeclampsia (PE). Here, we investigate the expression of key ferroptosis biomarkers in placental and decidua basalis tissues. Immunohistochemical (IHC) staining showed high expression of the ferroptosis suppressor, ferroptosis-suppressor protein 1 (FSP1), and the end product malondialdehyde (MDA), in healthy CD31-positive placental endothelium. The staining of all three markers was significantly reduced in PE placentas (P = 0.028). In vitro studies showed that an immortalized endometrial endothelial cell line, and its fetal counterpart, human umbilical vein endothelial cells, are intrinsically highly resistant to erastin-induced ferroptotic cell death compared with trophoblast, endometrial epithelial, and stromal fibroblast cell types. FSP1 was specifically expressed in the endometrial endothelial cells. Both FSP1 and another ferroptosis suppressor protein, GPX4, were degraded when the cells underwent ferroptotic cell death. Interestingly, staining of these same markers in maternal decidua basalis tissues did not show endothelium-specific staining, and no significant difference in staining was noted between healthy and PE tissues. Since previous studies have shown that endometrial cells can activate ferroptosis to produce pro-angiogenic cytokines, we posit that healthy placental endothelial cells activate ferroptosis, as evidenced by high MDA, to promote vasculature development without undergoing cell death, whereas PE placentas show reduced ferroptosis and vasculature underdevelopment. In contrast, both healthy and PE decidua basalis tissues were considered to be in a resting stage with regard to ferroptosis. Further studies are warranted to investigate how ferroptosis is regulated in both healthy and PE pregnancies.
</description>
<pubDate>Wed, 06 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163078</guid>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Design, Modeling, and Control of a Soft Robotic Diaphragm‐Assist Device in a Respiratory Simulator</title>
<link>https://hdl.handle.net/1721.1/163077</link>
<description>Design, Modeling, and Control of a Soft Robotic Diaphragm‐Assist Device in a Respiratory Simulator
Quevedo‐Moreno, Diego; Lee, Sang‐Yoep; Tagoe, Jonathan; Emani, Vishnu; Bonnemain, Jean; Roche, Ellen T
The diaphragm is a critical muscle for respiration, responsible for up to 70% of the inspiratory effort. The standard treatment for patients with severe diaphragm dysfunction is permanently tethering the airway to a mechanical ventilator, which greatly impacts patient autonomy and quality of life. Soft robots are ideal for assisting complex biological functions, such as diaphragm contraction. This article introduces a soft robotic diaphragm-assist device designed as a therapeutic treatment for diaphragm dysfunction; in addition, a clinically relevant respiratory simulator is designed and proposed as a validation and testing tool for this treatment. The device uses fabric-based pneumatic actuators to provide targeted mechanical assistance during inhalation. A two-step control system is implemented to optimize synchronization and support: 1) detecting breath intention from the pleural pressure signal to trigger the device and 2) regulating the device’s input pressure to assist in inhalation. Using the respiratory simulator, the device demonstrated the ability to restore pleural and abdominal pressures and significantly increased transdiaphragmatic pressure during simulated conditions of diaphragm dysfunction. This research advances the field of soft robotics in respiratory care, providing a foundational platform for the development of next-generation therapeutic devices aimed at improving the quality of life for patients with diaphragm dysfunction.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163077</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>MeshModule: A Playful Modular Mesh System for Creative Construction</title>
<link>https://hdl.handle.net/1721.1/163076</link>
<description>MeshModule: A Playful Modular Mesh System for Creative Construction
Youn, Hye Jun; Sara, Serena; Ishii, Hiroshi
MeshModule is a modular construction platform composed of soft, 3D-printed mesh units designed for rapid prototyping of interactive, reconfigurable structures. Each module integrates a flexible mesh body with interlocking connectors, enabling assemblies that are both structurally robust and mechanically compliant. By varying infill patterns, material properties (PLA, TPU, and conductive filament), and geometries, MeshModule supports a range of mechanical behaviors, including bending and folding. The system also accommodates embedded electronics for responsive functionality, making it suitable for applications in wearable computing, education, and interactive art installations. Inspired by tactile learning toolkits, MeshModule fosters hands-on creativity, inclusivity, and scalable interaction design. This work demonstrates how soft digital fabrication can expand the boundaries of modular systems, enabling expressive, accessible, and programmable physical interfaces.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163076</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>POET: Supporting Prompting Creativity and Personalization with Automated Expansion of Text-to-Image Generation</title>
<link>https://hdl.handle.net/1721.1/163075</link>
<description>POET: Supporting Prompting Creativity and Personalization with Automated Expansion of Text-to-Image Generation
Han, Evans Xu; Zhang, Alice; Zhu, Haiyi; Shen, Hong; Liang, Paul Pu; Hsieh, Jane
State-of-the-art visual generative AI tools hold immense potential to assist users in the early ideation stages of creative tasks — offering the ability to generate (rather than search for) novel and unprecedented (instead of existing) images of considerable quality that also adhere to boundless combinations of user specifications. However, many large-scale text-to-image systems are designed for broad applicability, yielding conventional output that may limit creative exploration. They also employ interaction methods that may be difficult for beginners. Given that creative end-users often operate in diverse, context-specific ways that are often unpredictable, more variation and personalization are necessary. We introduce POET, a real-time interactive tool that (1) automatically discovers dimensions of homogeneity in text-to-image generative models, (2) expands these dimensions to diversify the output space of generated images, and (3) learns from user feedback to personalize expansions. An evaluation with 28 users spanning four creative task domains demonstrated POET’s ability to generate results with higher perceived diversity and help users reach satisfaction in fewer prompts during creative tasks, thereby prompting them to deliberate and reflect more on a wider range of possible produced results during the co-creative process. Focusing on visual creativity, POET offers a first glimpse of how interaction techniques of future text-to-image generation tools may support and align with more pluralistic values and the needs of end-users during the ideation stages of their work.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163075</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts</title>
<link>https://hdl.handle.net/1721.1/163074</link>
<description>Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts
Yin, Joshua; Faruqi, Faraz; Nisser, Martin
To support users’ understanding of physical properties in 2D images, we propose Text2Texture, a webtool that converts 2D color images into textured 3D objects ready for 3D printing. This is achieved by extracting depth information using a monocular estimator, extracting local texture information using a fine-tuned stable diffusion model, and superimposing these macro- and micro-scale geometries to produce a composite 3D model with color, depth, and texture. Images can be uploaded directly or generated via text prompt, and we print a variety of objects generated using each approach to suggest applications in physicalizing virtual worlds, adding haptic cues to photographs, and conveying information about scale in images.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163074</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Motion Sensing into 3D-Printed Bending Structures</title>
<link>https://hdl.handle.net/1721.1/163073</link>
<description>Integrating Motion Sensing into 3D-Printed Bending Structures
Li, Mingming; Li, Jiaji; Chen, Haotian; Cao, Dingning; Sahin, Karla; Mueller, Stefanie
We present a design and fabrication method for converting static 3D models into motion-capable, self-sensing structures using multi-material FDM 3D printing. Our method allows users to configure deformation behaviors, automatically generate printable circuits, and fabricate interactive objects using 3D printing in a single step without post-assembly or manual sensor integration. The 3D-printed circuits enable real-time detection of bending motions through a time-division multiplexing (TDM) circuit scheme. We demonstrate the effectiveness of our approach through sensing performance evaluation and several application examples.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sun, 28 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163073</guid>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>EI-Lite: Electrical Impedance Sensing for Micro-gesture Recognition and Pinch Force Estimation</title>
<link>https://hdl.handle.net/1721.1/163072</link>
<description>EI-Lite: Electrical Impedance Sensing for Micro-gesture Recognition and Pinch Force Estimation
Zhu, Junyi; Xu, Tianyu; Wang, Jiayu; Guan, Emily; Moon, JaeYoung; Morvan, Stiven; Shin, D; Colaço, Andrea; Mueller, Stefanie; Ahuja, Karan; Luo, Yiyue; Chatterjee, Ishan
Micro-gesture recognition and fine-grained pinch force estimation enable intuitive and discreet control of devices, offering significant potential for enhancing human-computer interaction (HCI). In this paper, we present EI-Lite, a lightweight wrist-worn electrical impedance sensing device for micro-gesture recognition and continuous pinch force estimation. We elicit an optimal and simplified device architecture through an ablation study on electrode placement with 13 users, and implement the elicited designs through 3D printing. We capture data from 15 participants on (1) six common micro-gestures (plus idle state) and (2) index finger pinch forces, then develop machine learning models that interpret the impedance signals generated by these micro-gestures and pinch forces. Our system is capable of accurate recognition of micro-gesture events (96.33% accuracy), as well as continuous estimation of the pinch force of the index finger in physical units (Newtons), with a mean squared error (MSE) of 0.3071 (or mean force variance of 0.55 Newtons) over 15 participants. Finally, we demonstrate EI-Lite’s applicability via three applications in AR/VR, gaming, and assistive technologies.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sun, 28 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163072</guid>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Learners with a Low-Barrier Mobile Data Science Toolkit</title>
<link>https://hdl.handle.net/1721.1/163071</link>
<description>Empowering Learners with a Low-Barrier Mobile Data Science Toolkit
Elhashemy, Hanya; Parks, Robert; Kim, David YJ; Patton, Evan; Abelson, Harold
This paper introduces a novel data science toolkit designed specifically for children, enabling them to create mobile apps integrated with data science capabilities. The toolkit showcases new features that simplify the data science process for young users. Additionally, the paper presents a collection of example apps created using the toolkit, highlighting the versatility and potential of this innovative platform. By empowering children to explore data science through app development, this toolkit opens exciting opportunities for hands-on learning and creative expression in the field of citizen science.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163071</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Top-Down SBP: Turning Graph Clustering Upside Down</title>
<link>https://hdl.handle.net/1721.1/163070</link>
<description>Top-Down SBP: Turning Graph Clustering Upside Down
Wanye, Frank; Gleyzer, Vitaliy; Kao, Edward; Feng, Wu-chun
Stochastic block partitioning (SBP) is a statistical inference-based algorithm for clustering vertices within a graph. It has been shown to be statistically robust and highly accurate even on graphs with a complex structure, but its poor scalability limits its usability to smaller-sized graphs. In this manuscript we argue that one reason for its poor scalability is the agglomerative, or bottom-up, nature of SBP’s algorithmic design; the agglomerative computations cause high memory usage and create a large search space that slows down statistical inference, particularly in the algorithm’s initial iterations. To address this bottleneck, we propose Top-Down SBP, a novel algorithm that replaces the agglomerative (bottom-up) block merges in SBP with a block-splitting operation. This enables the algorithm to start with all vertices in one cluster and subdivide them over time into smaller clusters. We show that Top-Down SBP is up to 7.7× faster than Bottom-Up SBP without sacrificing accuracy and can process larger graphs than Bottom-Up SBP on the same hardware due to an up to 4.1× decrease in memory usage. Additionally, we adapt existing methods for accelerating Bottom-Up SBP to the Top-Down approach, leading to up to 13.2× speedup over accelerated Bottom-Up SBP and up to 403× speedup over sequential Bottom-Up SBP on 64 compute nodes. Thus, Top-Down SBP represents substantial improvements to the scalability of SBP, enabling the analysis of larger datasets on the same hardware.
HPDC ’25, Notre Dame, IN, USA
</description>
<pubDate>Sun, 20 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163070</guid>
<dc:date>2025-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Prompt Engineering for Generative AI-Based App Generation</title>
<link>https://hdl.handle.net/1721.1/163069</link>
<description>Exploring Prompt Engineering for Generative AI-Based App Generation
Shone, Jasmin L.; Liu, Robin; Patton, Evan; Kim, David YJ
We introduce a cutting-edge learning platform powered by large language models that enables students to effortlessly generate mobile applications for smartphones and tablets from natural language descriptions. We further demonstrate that these user-generated apps can be further optimized with minor adjustments to the generative model's input, or "prompt." To maximize the efficacy of the prompt in producing a desired application, we explore three different methods of modification: 1) altering the selection mechanism of example pairs, 2) varying the number of example pairs, and 3) changing the order of pairs within the prompt. The prompts are constructed from a collection of example pairs, which comprise a textual description of an example app and its corresponding code, in addition to a description of the desired app. We test the model's performance by evaluating it with 18 different mobile application task descriptions, ranging from basic to complex, and then leveraging the BLEU score to compare the model's outputs to manually created apps. Our findings indicate that the method of selecting example pairs and the number of examples included can significantly influence the quality of the generated apps. However, reordering the example pairs within the prompt does not affect the outcome. Finally, we conclude with a discussion of the potential implications for computer science education. The platform we present in this paper aims to further the democratization of app creation by enabling users to create apps with ease, regardless of their technical background.
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163069</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Boosting hydrogel conductivity via water-dispersible conducting polymers for injectable bioelectronics</title>
<link>https://hdl.handle.net/1721.1/163068</link>
<description>Boosting hydrogel conductivity via water-dispersible conducting polymers for injectable bioelectronics
Montazerian, Hossein; Davoodi, Elham; Wang, Canran; Lorestani, Farnaz; Li, Jiahong; Haghniaz, Reihaneh; Sampath, Rohan R; Mohaghegh, Neda; Khosravi, Safoora; Zehtabi, Fatemeh; Zhao, Yichao; Hosseinzadeh, Negar; Liu, Tianhan; Hsiai, Tzung K; Najafabadi, Alireza Hassani; Langer, Robert; Anderson, Daniel G; Weiss, Paul S; Khademhosseini, Ali; Gao, Wei
Bioelectronic devices hold transformative potential for healthcare diagnostics and therapeutics. Yet, traditional electronic implants often require invasive surgeries and are mechanically incompatible with biological tissues. Injectable hydrogel bioelectronics offer a minimally invasive alternative that interfaces with soft tissue seamlessly. A major challenge is the low conductivity of bioelectronic systems, stemming from poor dispersibility of conductive additives in hydrogel mixtures. We address this issue by engineering doping conditions with hydrophilic biomacromolecules, enhancing the dispersibility of conductive polymers in aqueous systems. This approach achieves a 5-fold increase in dispersibility and a 20-fold boost in conductivity compared to conventional methods. The resulting conductive polymers are molecularly and in vivo degradable, making them suitable for transient bioelectronics applications. These additives are compatible with various hydrogel systems, such as alginate, forming ionically cross-linkable conductive inks for 3D-printed wearable electronics toward high-performance physiological monitoring. Furthermore, integrating conductive fillers with gelatin-based bioadhesive hydrogels substantially enhances conductivity for injectable sealants, achieving 250% greater sensitivity in pH sensing for chronic wound monitoring. Our findings indicate that hydrophilic dopants effectively tailor conducting polymers for hydrogel fillers, enhancing their biodegradability and expanding applications in transient implantable biomonitoring.
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163068</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Nanomedicine for targeting brain Neurodegeneration: Critical barriers and circadian rhythm Considerations</title>
<link>https://hdl.handle.net/1721.1/163067</link>
<description>Nanomedicine for targeting brain Neurodegeneration: Critical barriers and circadian rhythm Considerations
Pineiro-Alonso, Laura; Rubio-Prego, Inés; Lobyntseva, Alexandra; González-Freire, Eva; Langer, Robert; Alonso, María José
The development of novel therapies for central nervous system (CNS) diseases, particularly neurodegenerative disorders like Alzheimer's disease (AD), is a critical global health priority. Biotherapeutics, such as monoclonal antibodies (mAbs) and RNA-based therapies, have shown potential for treating brain disorders. However, their clinical progress is limited by their difficult access to their brain targets. At the preclinical level, nanotechnology has been shown to help these molecules overcome the biological barriers that impede their adequate brain delivery. This review highlights advances in this area and the challenges for translation to the clinic. Key nanotechnology-based strategies, such as surface modification utilizing the endogenous protein corona, functionalization with targeting ligands, and therapeutic ultrasound-mediated microbubble oscillation, are analyzed in particular. Additionally, in line with the focus of the Special Issue, this review integrates the concept of chronotherapy, with a focus on AD treatment, highlighting the idea that, by aligning nanoparticle (NP)-based drug delivery with circadian rhythms, it may be possible to improve therapeutic outcomes. Finally, the article analyzes current strategies in CNS drug delivery in clinical trials and provides future directions within this frame, notably in the area of AD.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163067</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Study on molecular orientation and stratification in RNA-lipid nanoparticles by cryogenic orbitrap secondary ion mass spectrometry</title>
<link>https://hdl.handle.net/1721.1/163066</link>
<description>Study on molecular orientation and stratification in RNA-lipid nanoparticles by cryogenic orbitrap secondary ion mass spectrometry
Kotowska, Anna M; Fay, Michael; Watts, Julie A; Gilmore, Ian S; Scurr, David J; Howe, Alaina; Capka, Vladimir; Perez, Corey E; Doud, Devin; Patel, Siddharth; Umbarger, Mark; Langer, Robert; Alexander, Morgan R
Lipid nanoparticle RNA (LNP-RNA) formulations are used for the delivery of vaccines and other therapies. RNA molecules are encapsulated within their interior through electrostatic interactions with positively charged lipids. The identity of the lipids presented at their surface plays a role in how they interact with and are perceived by the body, and in their resultant potency. Here, we use a model formulation to develop cryogenic sample preparation for molecular depth profiling Orbitrap secondary ion mass spectrometry (Cryo-OrbiSIMS) preceded by morphological characterisation using cryogenic transmission electron microscopy (Cryo-TEM). The depth distribution of individual lipid components is revealed relative to the surface and to the RNA cargo defining the core. A preferential lipid orientation can be determined for the 1,2-dimyristoyl-glycero-3-methoxypolyethylene glycol 2000 (DMG-PEG2k) molecule by comparing the profiles of PEG and DMG fragments. PEG fragments are found immediately during analysis of the LNP surface, while the DMG fragments are deeper, coincident with RNA ions located in the core, in agreement with established models of LNPs. This laboratory-based de novo analysis technique requires no labelling, providing advantages over large-facility neutron scattering characterisation.
</description>
<pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163066</guid>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Non-Line-of-Sight 3D Object Reconstruction via mmWave Surface Normal Estimation</title>
<link>https://hdl.handle.net/1721.1/163065</link>
<description>Non-Line-of-Sight 3D Object Reconstruction via mmWave Surface Normal Estimation
Dodds, Laura; Boroushaki, Tara; Zhou, Kaichen; Adib, Fadel
This paper presents the design, implementation, and evaluation of mmNorm, a new and highly accurate method for non-line-of-sight 3D object reconstruction using millimeter wave (mmWave) signals. In contrast to past approaches for millimeter-wave-based imaging that perform backprojection for 3D object reconstruction, mmNorm reconstructs the surface by estimating the object’s surface normals. To do this, it introduces a novel algorithm that directly estimates the surface normal vector field from mmWave reflections. By then inverting the normal field, it can reconstruct structural isosurfaces, then solve for the exact surface through a novel mmWave optimization framework. We built an end-to-end prototype of mmNorm using a TI IWR1443 Boost mmWave radar and a UR5e robotic arm, and evaluated it in over 110 real-world experiments across more than 60 different everyday objects. In a head-to-head comparison with state-of-the-art baselines, mmNorm achieves 96% reconstruction accuracy (3D F-score) compared to 78% for the best-performing baseline. These results show that mmNorm is capable of high-accuracy mmWave object reconstruction. The codebase and a video demonstration are available here: https://github.com/signalkinetics/mmNorm
MobiSys ’25, Anaheim, CA, USA
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163065</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>BioLIG: Functionalizing Biocomposites with Laser-induced Graphene for Bio-Rapid Prototyping of Electronics</title>
<link>https://hdl.handle.net/1721.1/163064</link>
<description>BioLIG: Functionalizing Biocomposites with Laser-induced Graphene for Bio-Rapid Prototyping of Electronics
Li, Yuqing Lucy; Kubušová, Vlasta; Babatain, Wedyan; Labrune, Jean-Baptiste; Widder, Sage; Sun, Bernice; Forman, Jack; Ishii, Hiroshi
In HCI, there is a rapidly growing interest in prototyping with conductive bio-based materials. However, methods for making bio-based materials conductive to suit the diverse needs of makers remain underexplored. We introduce BioLIG, a fabrication framework that functionalizes affordable and optimized bio-based substrates with a conventional CO2 laser to create highly conductive traces for sensors and circuits. To illustrate the framework, we first contribute five bio-based materials: three sheets (paper-like, fabric-like, plastic-like) and two paints (lignin-ink, chitosan-stain). A formal electrical characterization of our conductors highlights that they surpass activated charcoal, are on par with carbon black, and one ink is even comparable with the most common synthetic material used for laser-induced graphene. Then, we present three biodegradable coatings that ensure functionality and durability and balance protection with controlled degradation. Next, we build upon our sheets, paints, and coatings to form multifunctional biodegradable biocomposites and implement five end-to-end applications. Lastly, we define three strategies for how the framework supports a circular making culture. BioLIG enables accessible, fast, and bio-rapid prototyping, adding new directions for designing sustainable electronics with environmental integration.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163064</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>SustainaPrint: Making the Most of Eco-Friendly Filaments</title>
<link>https://hdl.handle.net/1721.1/163063</link>
<description>SustainaPrint: Making the Most of Eco-Friendly Filaments
Perroni-Scharf, Maxine; Xiao, Jennifer; Paulin, Cole; Wang, Zhi Ray; Sethapakdi, Ticha; Abdullah, Muhammad; Baudisch, Patrick; Mueller, Stefanie
We present SustainaPrint, a system for integrating eco-friendly filaments into 3D printing without compromising structural integrity. While biodegradable and recycled 3D printing filaments offer environmental benefits, there is a trade-off in using them as they may suffer from degraded or unpredictable mechanical properties, which can limit their use in load-bearing applications. SustainaPrint addresses this by strategically assigning eco-friendly and standard filaments to different regions of a multi-material print—reinforcing the areas that are most likely to break with stronger material while maximizing the use of sustainable filament elsewhere. As eco-friendly filaments often do not come with technical datasheets, we also introduce a low-cost, at-home mechanical testing toolkit that enables users to evaluate filament strength before deciding if they want to use that filament in our pipeline. We validate SustainaPrint through real-world fabrication and mechanical testing, demonstrating its effectiveness across a range of functional 3D printing tasks.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163063</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Strategies for Developing Next-Generation Vaccines to Combat Infectious Viral Diseases</title>
<link>https://hdl.handle.net/1721.1/163062</link>
<description>Novel Strategies for Developing Next-Generation Vaccines to Combat Infectious Viral Diseases
Yuan, Fangfeng; Bluth, Martin H.
The development of viral vaccines faces persistent scientific and logistical challenges, particularly in the wake of the COVID-19 pandemic. This review critically examines emerging strategies to overcome key barriers in viral vaccine design and deployment. We focus on four major areas: (1) structure-guided antigen engineering to stabilize conformations; (2) the mRNA platform and its delivery system; (3) advanced adjuvant systems that enhance cellular and humoral immunity; and (4) approaches to mitigate immune imprinting and antigenic variability, such as chimeric antigens and glycan shielding. We also explore anti-idiotypic vaccination strategies and the limitations of current animal models in predicting human immune responses. In addition, to address vaccine hesitancy and inequitable access, we advocate for global collaboration in manufacturing, distribution, and public education to ensure inclusive immunization strategies. By integrating molecular insights with platform technologies, we aim to inform the rational design of future vaccines with improved efficacy and public acceptance.
</description>
<pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163062</guid>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Biphasic Adaptations of Gastric Epithelial Cells in Chronic H. pylori Infection from Stress to Tolerance</title>
<link>https://hdl.handle.net/1721.1/163061</link>
<description>Biphasic Adaptations of Gastric Epithelial Cells in Chronic H. pylori Infection from Stress to Tolerance
Zhang, Xiulin; He, Yang; Zhang, Xiaolu; Liang, Ziyi; Wang, Wendong; Da, Zhenyu; Lv, Jianyi; Guo, Meng; Huo, Xueyun; Liu, Xin; Lu, Jing; Cao, Lixue; Du, Xiaoyan; Ge, Zhongming; Chen, Zhenwen; Lu, Xuancheng; Zhang, Jianzhong; Li, Changlong
Helicobacter pylori (H. pylori) is a well-known pathogen associated with chronic gastric infection, progressing from gastritis to gastric adenocarcinoma, but the dynamic phenotypic and molecular characteristics of gastric epithelial cells during sustained infection remain unclear. We established a chronic infection model using the human gastric epithelial cell line GES-1, exposed to H. pylori or its lysate across 30 generations, dynamically assessing cell proliferation, migration, invasion, apoptosis, autophagy, and epithelial–mesenchymal transition (EMT) markers, with RNA sequencing for transcriptomic changes and a Mongolian gerbil model to validate chronic pathological progression. Acute H. pylori exposure induced pronounced morphological changes; suppressed proliferation, migration, and invasion; triggered apoptosis; and blocked autophagic flux, while long-term stimulation reversed these effects. EMT markers showed progressive loss of epithelial characteristics with chronic infection. RNA sequencing revealed a dynamic shift from inflammation-driven apoptosis to adaptive survival mechanisms. In vivo, prolonged infection induced dynamic TLR expression alongside progressive gastric pathology, including atrophy and dysplasia. Our study provides new molecular evidence for dynamic cellular and immunological adaptations of gastric epithelial cells under chronic H. pylori infection, highlighting critical intervention windows for preventing gastric carcinogenesis.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163061</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Refashion: Reconfigurable Garments via Modular Design</title>
<link>https://hdl.handle.net/1721.1/163060</link>
<description>Refashion: Reconfigurable Garments via Modular Design
Lin, Rebecca; Leake, Mackenzie; Lukáč, Michal
While bodies change over time and trends vary, most store-bought clothing comes in fixed sizes and styles and fails to adapt to these changes. Alterations can enable small changes to otherwise static garments, but these changes often require sewing and are non-reversible. We propose a modular approach to garment design that considers resizing, restyling, and reuse earlier in the design process. Our contributions include a compact set of modules and connectors that form the building blocks of modular garments, a method to decompose a garment into modules via integer linear programming, and a digital design tool that supports modular garment design and simulation. Our user evaluation suggests that our approach to modular design can support the creation of a wide range of garments and can help users transform them across sizes and styles while reusing the same building blocks.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163060</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic Deliverability Assessment of Distributed Energy Resources via Scenario-Based AC Optimal Power Flow</title>
<link>https://hdl.handle.net/1721.1/163059</link>
<description>Probabilistic Deliverability Assessment of Distributed Energy Resources via Scenario-Based AC Optimal Power Flow
Anton, Laurenţiu L.; Ilić, Marija D.
As electric grids decarbonize and distributed energy resources (DERs) become increasingly prevalent, interconnection assessments must evolve to reflect operational variability and control flexibility. This paper highlights key modeling limitations observed in practice and reviews approaches for modeling uncertainty. It then introduces a Probabilistic Deliverability Assessment (PDA) framework designed to complement and extend existing procedures. The framework integrates scenario-based AC optimal power flow (AC OPF), corrective dispatch, and optional multi-temporal constraints. Together, these form a structured methodology for quantifying DER utilization, deliverability, and reliability under uncertainty in load, generation, and topology. Outputs include interpretable metrics with confidence intervals that inform siting decisions and evaluate compliance with reliability thresholds across sampled operating conditions. A case study on Puerto Rico’s publicly available bulk power system model demonstrates the framework’s application using minimal input data, consistent with current interconnection practice. Across staged fossil generation retirements, the PDA identifies high-value DER sites and regions requiring additional reactive power support. Results are presented through mean dispatch signals, reliability metrics, and geospatial visualizations, demonstrating how the framework provides transparent, data-driven siting recommendations. The framework’s modular design supports incremental adoption within existing workflows, encouraging broader use of AC OPF in interconnection and planning contexts.
</description>
<pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163059</guid>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Stable Natural Iron Complex Micronutrient Powder for Enhanced Cellular Uptake</title>
<link>https://hdl.handle.net/1721.1/163058</link>
<description>Stable Natural Iron Complex Micronutrient Powder for Enhanced Cellular Uptake
Alsaiari, Shahad K; Zhang, Linzixuan; Yang, Xin; Duan, Aranda R; Daristotle, John L; Straeten, Aurelien vander; Weinstock, Shelley B; Langer, Robert; Jaklenec, Ana
Iron deficiency anemia (IDA) is a persistent global health challenge, particularly in low- and middle-income countries, necessitating effective iron fortification strategies. In this study, we developed FeC-4-1, a novel iron complex composed of ferrous sulfate, vitamin C (VC), and histidine, to enhance iron stability, cellular iron uptake, and compatibility with food matrices. FeC-4-1 exhibited high stability across a broad pH range (3–12). Under simulated gastric conditions, FeC-4-1 released nearly 100% of its iron and VC within 10 min, ensuring efficient cellular iron uptake. FeC-4-1 also demonstrated superior oxidation resistance compared to FeSO4, exhibiting 2.5-fold lower color change in polyphenol-rich banana milk after 2-h treatment. Long-term storage studies revealed that FeC-4-1 maintained 60% of its initial total iron content with the ferrous iron fraction remaining at ∼80% after 12 months, indicating minimal oxidation over time. Bioaccessibility studies following an established INFOGEST protocol showed that FeC-4-1 provided about 2-fold higher bioaccessible iron compared to FeSO4 under room temperature conditions. In addition, FeC-4-1 resulted in approximately a 3.2-fold increase in total intracellular iron compared to FeSO4 in Caco-2 cells. Sensory evaluation results demonstrated that FeC-4-1 fortification at 16 mg per serving (50% RDA of iron) in bouillon soup did not alter flavor or mouthfeel. These findings suggest that FeC-4-1 is a technically feasible and effective iron fortificant, offering enhanced stability, bioaccessibility, and consumer acceptability for in-home iron fortification.
</description>
<pubDate>Mon, 21 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163058</guid>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Polyanhydride-Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single-Injection Self-Boosting Vaccines</title>
<link>https://hdl.handle.net/1721.1/163057</link>
<description>Polyanhydride-Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single-Injection Self-Boosting Vaccines
Zhang, Linzixuan; Xiao, Ruiqing; Gao, Wenhao; Garcia, Johnny; Pan, Xinyan; Daristotle, John L; Forster, Timothy; Han, Jooli; Chaddah, Mehr; Varshney, Dhruv; Menon, Nandita; McHugh, Kevin J; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; MacDonald, Sydney; Langer, Robert; Jaklenec, Ana
Single‐Injection Self‐Boosting Vaccines: A single‐injection platform for self‐boosting vaccines is developed using a polyanhydride‐based delivery system. The platform enables pulsatile antigen release, protects pH‐sensitive cargo, and elicits immune responses comparable to traditional multi‐dose regimens. Machine learning enhances design by accurately predicting release profiles, offering a promising solution to improve global vaccine coverage and reduce under‐immunization. More details can be found in article number 2501168 by Robert Langer, Ana Jaklenec, and co‐workers.
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163057</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Gastrointestinal neuroprosthesis for motility and metabolic neuromodulation</title>
<link>https://hdl.handle.net/1721.1/163056</link>
<description>Gastrointestinal neuroprosthesis for motility and metabolic neuromodulation
Srinivasan, Shriya; Antonini, Marc-Joseph; Alshareef, Amro; Sahasrabudhe, Atharva; Jenkins, Josh; Ishida, Keiko; Kuosmanen, Johannes; Hayward, Alison; Min, Seokkee; Langer, Robert; Anikeeva, Polina; Traverso, Giovanni
Gastrointestinal (GI) dysmotility and associated conditions affect over 20% of the population, yet pharmacological, behavioural, and surgical interventions offer limited therapeutic efficacy. Targeted electrical stimulation addressing underlying neuromuscular pathology stands to transform our ability to treat dysmotility. Here, we developed a closed-loop GI neuroprosthesis which activates or relaxes GI tract musculature through electrochemical stimulation in response to sensed food stimuli. We additionally describe a tool supporting minimally invasive endoscopically guided implantation that can penetrate the mucosa, accurately localize the submucosa, and safely deploy this device to directly interface with the enteric nervous system. The neuroprosthesis enables generation of coordinated peristaltic waves, significantly increasing the motility rate in a swine model of oesophageal and stomach dysmotility (p &lt; 0.05, Student’s t-test). Further, by directly modulating the myenteric plexus and thus mimicking meal ingestion, we induce peristalsis in a fasted state and achieve a metabolic response commensurate with a fed or satiated state. This neuroprosthesis and implantation platform expand opportunities in fundamental studies and treatments of metabolic and neuromuscular pathologies affecting the GI tract.
</description>
<pubDate>Sun, 10 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/163056</guid>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Mining the CD4 antigen repertoire for next-generation tuberculosis vaccines</title>
<link>https://hdl.handle.net/1721.1/162903</link>
<description>Mining the CD4 antigen repertoire for next-generation tuberculosis vaccines
Vidal, Samuel J; Lasrado, Ninaad; Tostanoski, Lisa H; Chaudhari, Jayeshbhai; Mbiwan, Esther R; Neka, Ganad D; Strutton, Ellis A; Espinosa Perez, Alejandro A; Sellers, Daniel; Barrett, Julia; Lifton, Michelle; Wakabayashi, Shoko; Eshaghi, Behnaz; Borducchi, Erica N; Aid, Malika; Li, Wenjun; Scriba, Thomas J; Jaklenec, Ana; Langer, Robert; Barouch, Dan H
Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, and Bacillus Calmette-Guérin (BCG) remains the only clinically approved vaccine. An enduring challenge in TB vaccine development is systematic antigen selection from a large repertoire of potential candidates. We performed an efficacy screen in mice of antigens that are targets of CD4 T cells in humans. We found striking heterogeneity in protective efficacy, and most of the top protective antigens are not currently in clinical development. We observed immunologic cross-reactivity among phylogenetically clustered antigens, reflecting common CD4 epitopes. We developed a trivalent mRNA vaccine consisting of PPE20 (Rv1387), EsxG (Rv0287), and PE18 (Rv1788), which augmented and exceeded BCG protection in multiple mouse models. Finally, we observed cellular immune responses to these antigens in 84% of humans exposed to M. tuberculosis. These data advance our understanding of TB vaccine immunology and define a vaccine concept for clinical development.
</description>
<pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162903</guid>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</item>
<item>
<title>Biotechnology in materials science: A storied past and a bold future</title>
<link>https://hdl.handle.net/1721.1/162902</link>
<description>Biotechnology in materials science: A storied past and a bold future
Sharma, Shonit Nair; Witten, Jacob; Das, Rishi; Anderson, R Rox; Anderson, Daniel G; Langer, Robert
The intersection of biotechnology and materials science has driven medical and scientific innovation for decades and is poised to make similar transformative impacts over the next 50 years. Advanced drug delivery systems, including nanoparticles and larger delivery material platforms, are enhancing therapeutic precision, while tissue engineering and regenerative medicine are laying the groundwork for bioprinting complex organs, offering new possibilities for transplantation and repair. Nanotechnology and biomedical devices are reshaping diagnostics and therapeutics, enabling real-time monitoring essential for personalized health care. Additionally, emerging fields such as space biotechnology and machine learning-driven biomaterials design hold potential for cutting-edge discoveries. This article examines the historical trajectory, current state-of-the-art applications, and bold future directions of biotechnology in materials science, emphasizing its impact on human health and its untapped potential yet to be explored.
</description>
<pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162902</guid>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Implementation of Sub‐Grid Scale Temperature Perturbations Induced by Non‐Orographic Gravity Waves in WACCM6</title>
<link>https://hdl.handle.net/1721.1/162901</link>
<description>Implementation of Sub‐Grid Scale Temperature Perturbations Induced by Non‐Orographic Gravity Waves in WACCM6
Yook, Simchan; Solomon, Susan; Weimer, Michael; Kinnison, Douglas E; Garcia, Rolando; Stone, Kane
Atmospheric gravity waves can play a significant role in atmospheric chemistry through temperature fluctuations. A recent modeling study introduced a method to implement subgrid-scale orographic gravity-wave-induced temperature perturbations in the Whole Atmosphere Community Climate Model (WACCM). The model with a wave-induced temperature parameterization was able to reproduce, for example, the influence of mountain wave events on atmospheric chemistry, as highlighted in previous literature. Here we extend the subgrid-scale wave-induced temperature parameterization to also include non-orographic gravity waves arising from frontal activity and convection. We explore the impact of these waves on middle atmosphere chemistry, particularly focusing on reactions that are strongly sensitive to temperature. The non-orographic gravity waves increase the variability of chemical reaction rates, especially in the lower mesosphere. As an example, we show that this, in turn, leads to increases in daytime ozone variability. To demonstrate another impact, we briefly investigate the role of non-orographic gravity waves in cirrus cloud formation in this model. Consistent with findings from the previous study focusing on orographic gravity waves, non-orographic waves also enhance homogeneous nucleation and increase cirrus clouds. The updated method enables the global chemistry-climate model to account for both orographic and non-orographic gravity-wave-induced subgrid-scale dynamical perturbations in a consistent manner.
</description>
<pubDate>Mon, 21 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162901</guid>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>Reply to: Comments on “Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India”</title>
<link>https://hdl.handle.net/1721.1/162900</link>
<description>Reply to: Comments on “Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India”
Chernozhukov, Victor; Demirer, Mert; Duflo, Esther; Fernández-Val, Iván
We warmly thank Kosuke Imai, Michael Lingzhi Li, and Stefan Wager for their gracious and insightful comments. We are particularly encouraged that both pieces recognize the importance of the research agenda the lecture laid out, which we see as critical for applied researchers. It is also great to see that both underscore the potential of the basic approach we propose—targeting summary features of the CATE after proxy estimation with sample splitting.

We are also happy that both papers push us (and the reader) to continue thinking about the inference problem associated with sample splitting. We recognize that our current paper is only scratching the surface of this interesting agenda. Our proposal is certainly not the only option, and it is exciting that both papers provide and assess alternatives. Hopefully, this will generate even more work in this area.
</description>
<pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162900</guid>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India</title>
<link>https://hdl.handle.net/1721.1/162899</link>
<description>Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India
Chernozhukov, Victor; Demirer, Mert; Duflo, Esther; Fernández-Val, Iván
We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of most and least impacted units. The approach is valid in high-dimensional settings, where the effects are proxied (but not necessarily consistently estimated) by predictive and causal machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic; it can be used in conjunction with penalized methods, neural networks, random forests, boosted trees, and ensemble methods, both predictive and causal. Estimation and inference are based on repeated data splitting to avoid overfitting and achieve validity. We use quantile aggregation of the results across many potential splits, in particular taking medians of p-values and medians and other quantiles of confidence intervals. We show that quantile aggregation lowers estimation risks over a single split procedure, and establish its principal inferential properties. Finally, our analysis reveals ways to build provably better machine learning proxies through causal learning: we can use the objective functions that we develop to construct the best linear predictors of the effects, to obtain better machine learning proxies in the initial step. We illustrate the use of both inferential tools and causal learners with a randomized field experiment that evaluates a combination of nudges to stimulate demand for immunization in India.
</description>
<pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162899</guid>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>Limited Validity of Breath‐Counting as a Measure of Mindfulness in Ruminative Adolescents</title>
<link>https://hdl.handle.net/1721.1/162898</link>
<description>Limited Validity of Breath‐Counting as a Measure of Mindfulness in Ruminative Adolescents
Treves, Isaac N.; Tierney, Anna O.; Goldberg, Simon B.; Rouleau, Nancie; Carson, Nicholas; Schuman‐Olivier, Zev; Webb, Christian A.
Objective measurement of mindfulness could help us understand the mechanisms of meditation interventions and how individuals vary in their disposition to be mindful. One proposed measure is the breath-counting task (BCT), which measures how accurately one can count cycles of their breath. Breath counting, which involves sustained attention, meta-awareness, and an internal locus of attention, has been shown in adults to be related to measures of mindfulness even when controlling for established attentional measures. In this study, we test the psychometrics of the BCT in a convenience sample of 78 adolescents with elevated rumination. In preregistered analyses, we related breath-counting measures, including novel objective respiration measures, to a suite of self-report measures as well as the sustained attention to response task (SART). While breath-counting performance showed fair split-half reliability and similar distributions to studies in adults, it did not show the expected positive associations with self-reported mindfulness measures (neither trait nor EMA). Surprisingly, breath-counting accuracy showed negative correlations with a subscale measuring observing of emotions and body sensations, negative correlations with nonreactivity, and performance decrements were larger for individuals scoring more highly on mindfulness in general. The SART showed a small negative correlation with breath-counting resets (an index of mind-wandering). Finally, breath-counting performance was not related to other theoretically relevant clinical, personality, and executive functioning criteria. Our results suggest that, at least in ruminative adolescents, breath-counting may measure a very narrow, contextual form of sustained attention, may not capture other qualities of mindfulness, and may lack predictive validity.
</description>
<pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162898</guid>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Polyanhydride‐Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single‐Injection Self‐Boosting Vaccines</title>
<link>https://hdl.handle.net/1721.1/162897</link>
<description>Polyanhydride‐Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single‐Injection Self‐Boosting Vaccines
Zhang, Linzixuan; Xiao, Ruiqing; Gao, Wenhao; Garcia, Johnny; Pan, Xinyan; Daristotle, John L; Forster, Timothy; Han, Jooli; Chaddah, Mehr; Varshney, Dhruv; Menon, Nandita; McHugh, Kevin J; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; MacDonald, Sydney; Langer, Robert; Jaklenec, Ana
Vaccination remains a critical tool in preventing infectious diseases, yet its effectiveness is undermined by under-immunization, particularly for vaccines requiring multiple doses that patients fail to complete. To address this challenge, the development of single-injection platforms delivering self-boosting vaccines has gained significant attention. Despite some advances, translating these platforms into clinical applications has been limited. In this study, a novel polyanhydride-based polymeric delivery platform is introduced, designed for single-injection self-boosting vaccines, replacing multiple doses. Over 20 polyanhydride polymers are synthesized and screened, ultimately down-selecting to 6 for in vitro studies and 2 for in vivo studies. Using diphtheria toxoid (DT) as a model antigen, programmed pulsatile release with a narrow window is demonstrated, ideal for self-boosting immunization. The platform effectively protects the pH-sensitive antigen before release, achieving a recovery rate of 39.7% to 89.7%. The system's tunability is further enhanced by machine learning algorithms, which accurately predict release profiles, confirmed through experimental validation. In vivo studies in a mouse model reveal that the platform induces DT-specific antibody responses comparable to those generated by traditional multi-dose regimens. Collectively, these findings highlight the potential of this platform to deliver various vaccines, offering a promising solution to the global challenge of under-immunization.
</description>
<pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162897</guid>
<dc:date>2025-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Reduction in Global Lightning Activity During the COVID Pandemic</title>
<link>https://hdl.handle.net/1721.1/162896</link>
<description>Reduction in Global Lightning Activity During the COVID Pandemic
Liu, Yakun; Williams, Earle; Guha, Anirban; Satori, Gabriella; Neto, Osmar Pinto; Said, Ryan; Holzworth, Robert; Virts, Katrina; Lang, Timothy; Zhu, Yanan; LaPierre, Jeff; DiGangi, Elizabeth
The effect of anthropogenic aerosols on lightning is one of the least understood aspects of human-induced climate change. Global aerosol loading clearly diminished, by 7.6%, during the COVID pandemic. A pronounced decrease in global lightning activity, in the range 3.0%–5.8%, is identified from various detection systems during this natural experiment. The Maritime Continent lightning chimney shows the largest reduction in aerosol, 7.0%, accompanied by a lightning drop of 15%. The COVID period in 2020 also experiences a transition from a pre-COVID El Niño to a strong and sustained La Niña. Compensation for ENSO forcing of lightning activity is implemented to disclose the distinct responses of three global lightning chimneys to competing thermodynamic and aerosol effects. Our observational findings indicate a marked influence of aerosol on a global scale by virtue of the extraordinary COVID-induced aerosol alteration.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162896</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered prime editors with minimal genomic errors</title>
<link>https://hdl.handle.net/1721.1/162895</link>
<description>Engineered prime editors with minimal genomic errors
Chauhan, Vikash P; Sharp, Phillip A; Langer, Robert
Prime editors make programmed genome modifications by writing new sequences into extensions of nicked DNA 3′ ends1. These edited 3′ new strands must displace competing 5′ strands to install edits, yet a bias towards retaining the competing 5′ strands hinders efficiency and can cause indel errors2. Here we discover that nicked end degradation, consistent with competing 5′ strand destabilization, can be promoted by Cas9-nickase mutations that relax nick positioning. We exploit this mechanism to engineer efficient prime editors with strikingly low indel errors. Combining this error-suppressing strategy with the latest efficiency-boosting architecture, we design a next-generation prime editor (vPE). Compared with previous editors, vPE features comparable efficiency yet up to 60-fold lower indel errors, enabling edit:indel ratios as high as 543:1.
</description>
<pubDate>Wed, 17 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162895</guid>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Defining Nanostores: Cybernetic Insights on Independent Grocery Micro-Retailers’ Identity and Transformations</title>
<link>https://hdl.handle.net/1721.1/162894</link>
<description>Defining Nanostores: Cybernetic Insights on Independent Grocery Micro-Retailers’ Identity and Transformations
Salinas-Navarro, David Ernesto; Vilalta-Perdomo, Eliseo; Herron, Rebecca Michell; Mejía-Argueta, Christopher
Nanostores—micro, independent grocery retailers—are often defined in ways that overlook their socioeconomic roles and relational significance in favour of their primary functional aspects. To close this gap, this study adopts a systemic perspective to examine how multiple stakeholders (owners, customers, and suppliers) shape nanostore identity. Accordingly, this study proposes a framework of X-Y-Z identity statements, along with the use of the TASCOI tool, to examine nanostore descriptions and map their roles, expectations, and transformation processes. This systemic framework, rooted in management cybernetics, enabled the collection and analysis of 168 survey responses from 34 stores in Mexico City. The results show that nanostore identities are varied and context-dependent, operating as grocery stores, family projects, community anchors, economic lifelines, and competitors. This diversity influences stakeholder engagement, resource utilisation, and operational decisions. Overall, this study provides a transferable framework for analysing micro-business identity and transformation, with implications for problem-solving, decision-making, and policy development. Future research should address the current limitations of this study, including its geographical cross-sectional design, limited sampling method, reliance on self-reported perceptions, and lack of generalisability to other populations. Future work will involve exploring other urban contexts, utilising longitudinal data, expanding the sample, and adopting a participatory research approach to gain a deeper understanding of identity dynamics and their implications for nanostore resilience and survivability.
</description>
<pubDate>Tue, 02 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162894</guid>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Verifying Online Safety Properties for Safe Deep Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/162893</link>
<description>Verifying Online Safety Properties for Safe Deep Reinforcement Learning
Marzari, Luca; Cicalese, Ferdinando; Farinelli, Alessandro; Amato, Christopher; Marchesini, Enrico
Ensuring safety in reinforcement learning (RL) is critical for deploying agents in real-world applications. During training, current safe RL approaches often rely on indicator cost functions that provide sparse feedback, resulting in two key limitations: (i) poor sample efficiency due to the lack of safety information in neighboring states, and (ii) dependence on cost-value functions, leading to brittle convergence and suboptimal performance. After training, safety is guaranteed via formal verification methods for deep neural networks (FV), whose computational complexity hinders their application during training.  We address the limitations of using cost functions via verification by proposing a safe RL method based on a violation value---the risk associated with policy decisions in a portion of the state space. Our approach verifies safety properties (i.e., state-action pairs) that may lead to unsafe behavior, and quantifies the size of the state space where properties are violated. This violation value is then used to penalize the agent during training to encourage safer policy behavior. Given the NP-hard nature of FV, we propose an efficient, sample-based approximation with probabilistic guarantees to compute the violation value.   Extensive experiments on standard benchmarks and real-world robotic navigation tasks show that violation-augmented approaches significantly improve safety by reducing the number of unsafe states encountered while achieving superior performance compared to existing methods.
</description>
<pubDate>Tue, 30 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162893</guid>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Prediction Model for Multimodal Medical Data Based on Graph Neural Networks</title>
<link>https://hdl.handle.net/1721.1/162892</link>
<description>A Novel Prediction Model for Multimodal Medical Data Based on Graph Neural Networks
Zhang, Lifeng; Li, Teng; Cui, Hongyan; Zhang, Quan; Jiang, Zijie; Li, Jiadong; Welsch, Roy E.; Jia, Zhongwei
Multimodal medical data provides a wide and real basis for disease diagnosis. Computer-aided diagnosis (CAD) powered by artificial intelligence (AI) is becoming increasingly prominent in disease diagnosis. CAD for multimodal medical data requires addressing the issues of data fusion and prediction. Traditionally, the prediction performance of CAD models has been limited by complicated dimensionality reduction. Therefore, this paper proposes a fusion and prediction model—EPGC—for multimodal medical data based on graph neural networks. Firstly, we select features from unstructured multimodal medical data and quantify them. Then, we transform the multimodal medical data into a graph data structure by establishing each patient as a node, and establishing edges based on the similarity of features between the patients. Normalization of data is also essential in this process. Finally, we build a node prediction model based on graph neural networks and predict the node classification, which predicts the patients’ diseases. The model is validated on two publicly available datasets of heart diseases. Compared to the existing models that typically involve dimensionality reduction, classification, or the establishment of complex deep learning networks, the proposed model achieves outstanding results with the experimental dataset. This demonstrates that the fusion and diagnosis of multimodal data can be effectively achieved without dimension reduction or intricate deep learning networks. We take pride in exploring unstructured multimodal medical data using deep learning and hope to make breakthroughs in various fields.
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162892</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Competitive Nanostore Networks for Enhanced Food Accessibility: Insights from a Competitive Facility Location Model</title>
<link>https://hdl.handle.net/1721.1/162891</link>
<description>Designing Competitive Nanostore Networks for Enhanced Food Accessibility: Insights from a Competitive Facility Location Model
da Silva-Ovando, Agatha Clarice; Granados-Rivera, Daniela; Mejía, Gonzalo; Mejía-Argueta, Christopher; Gutiérrez-Franco, Edgar
Background: Access to healthy food in emerging-economy cities is challenged by last-mile constraints and poor infrastructure. Aligned with the UN SDGs on Zero Hunger and Sustainable Cities, this study examines how a strategically located nanostores network can help close these gaps while fostering local resilience. Focusing on Colombia’s Sabana Centro region, we designed a nanostore network that maximizes spatial coverage, proximity, and affordability. Methods: A competitive facility-location model combined with a discrete choice model captures consumer heterogeneity in price and location preferences. Results: Results show that locating nanostores in peripheral rather than central areas improves equity: the proposed network meets about 65,400 kg of weekly demand—51% fruit, 36% vegetables, 13% tubers—representing 16% of total regional demand and reaching underserved municipalities. This is notable given that existing nanostores already satisfy roughly 37% of household needs. Conclusions: By linking consumer behavior with sustainable spatial planning, the research offers both theoretical insight and practical tools for equitable distribution. Future work should evaluate supportive policies and supply chain innovations to secure nanostores’ long-term viability and community impact.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162891</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models</title>
<link>https://hdl.handle.net/1721.1/162890</link>
<description>Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models
Rashid, Sharaf; Bollis, Edson; Pellicer, Lucas; Rabbani, Darian; Palacios, Rafael; Gupta, Aneesh; Gupta, Amar
Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach for text generation compared to large language models (LLMs). The low hardware barrier of training GANs poses a threat because it means more bad actors may use them to mass-produce prompt attack messages against LLM systems. Thus, to better understand the threat of GANs being used for prompt attack generation, we train two well-known GAN architectures, SeqGAN and RelGAN, on prompt attack messages. For each architecture, we evaluate generated prompt attack messages, comparing results with each other, with generated attacks from another computationally cheap approach, a 1-billion-parameter Llama 3.2 small language model (SLM), and with messages from the original dataset. This evaluation suggests that GAN architectures like SeqGAN and RelGAN have the potential to be used in conjunction with SLMs to readily generate malicious prompts that impose new threats against LLM-based systems such as chatbots. Analyzing the effectiveness of state-of-the-art defenses against prompt attacks, we also find that GAN-generated attacks can deceive most of these defenses with varying levels of success, with the exception of Meta’s PromptGuard. Further, we suggest an improvement of prompt attack defenses based on the analysis of the language quality of the prompts, which we found to be the weakest point of GAN-generated messages.
</description>
<pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162890</guid>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Transmission Switching and Grid Reconfiguration for Transmission Systems via Convex Relaxations</title>
<link>https://hdl.handle.net/1721.1/162889</link>
<description>Optimal Transmission Switching and Grid Reconfiguration for Transmission Systems via Convex Relaxations
Jagadeesan Nair, Vineet
In this paper, we formulate optimization problems and successive convex relaxations to perform optimal transmission switching (OTS) in order to operate power transmission grids more efficiently. OTS may be crucial in future power grids with much higher penetrations of renewable energy sources, which will introduce more variability and intermittency in generation. Similarly, OTS can potentially help mitigate the effects of unpredictable demand fluctuations (e.g., due to extreme weather). We explore and compare several different formulations for the OTS problem in terms of the computational performance and optimality. In particular, we build upon the literature by considering more complex and accurate power flow formulations for OTS and introducing novel convex relaxations. This allows us to model the grid physics more accurately than prior works and generalize to several different types of networks. We also apply our methods to small transmission test cases as a proof of concept to determine the effects of applying OTS.
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162889</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations</title>
<link>https://hdl.handle.net/1721.1/162888</link>
<description>FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations
Sethapakdi, Ticha; Perroni-Scharf, Maxine; Li, Mingming; Li, Jiaji; Solomon, Justin; Satyanarayan, Arvind; Mueller, Stefanie
We present FabObscura: a system for creating interactive barrier-grid animations, a classic technique that uses occlusion patterns to create the illusion of motion. Whereas traditional barrier-grid animations are constrained to simple linear occlusion patterns, FabObscura introduces a parameterization that represents patterns as mathematical functions. Our parameterization offers two key advantages over existing barrier-grid animation design methods: first, it has a high expressive ceiling by enabling the systematic design of novel patterns; second, it is versatile enough to represent all established forms of barrier-grid animations.&#13;
Using this parameterization, our computational design tool enables an end-to-end workflow for authoring, visualizing, and fabricating these animations without domain expertise. Our applications demonstrate how FabObscura can be used to create animations that respond to a range of user interactions, such as translations, rotations, and changes in viewpoint. By formalizing barrier-grid animation as a computational design material, FabObscura extends its expressiveness as an interactive medium.
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162888</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Spaces of Polynomials as Grassmanians for Immersions and Embeddings</title>
<link>https://hdl.handle.net/1721.1/162887</link>
<description>Spaces of Polynomials as Grassmanians for Immersions and Embeddings
Katz, Gabriel
Let Y be a smooth compact n-manifold. We studied smooth embeddings and immersions β : M → R × Y of compact n-manifolds M such that β(M) avoids some a priori chosen closed poset Θ of tangent patterns to the fibers of the obvious projection π : R × Y → Y. Then, for a fixed Y, we introduced an equivalence relation between such β’s, creating a crossover between pseudo-isotopies and bordisms. We called this relation quasitopy. In the presented study of quasitopies, the spaces P_d^{cΘ} of real univariate polynomials of degree d with real divisors, whose combinatorial patterns avoid a given closed poset Θ, play the classical role of Grassmannians. We computed the quasitopy classes Q_d^{emb}(Y, cΘ) of Θ-constrained embeddings β in terms of the homotopy/homology theory of the spaces Y and P_d^{cΘ}. We also proved that the quasitopies of embeddings stabilize as d → ∞.
</description>
<pubDate>Tue, 24 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162887</guid>
<dc:date>2025-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>Direct and Indirect Mass Flow Rate Measurements for Ionic Liquid Ion Sources</title>
<link>https://hdl.handle.net/1721.1/162885</link>
<description>Direct and Indirect Mass Flow Rate Measurements for Ionic Liquid Ion Sources
Shaik, Saba Z.; Lozano, Paulo C.
The dominant performance loss in ionic liquid ion sources is thought to be the mass utilization efficiency, where an electrospray source appears to shed neutral propellant mass that does not appear in its exhaust. The underlying cause of this phenomenon is presently unclear. Investigating and characterizing potential utilization losses requires accurate measurements of electrospray mass flow rates, which is difficult due to the extremely small flow rates that are processed by individual sources, particularly those that operate in the pure-ion regime. In this work, we present an experimental platform that allows for simultaneous, rapid, and in-situ measurements of both supply and exhaust mass flow rates, allowing for measurements of the mass utilization efficiency for single electrospray emitters. Supply flow rates are measured directly using an optical approach that provides ng/s-level resolution. Exhaust flow rates are measured indirectly using a time-of-flight mass spectrometer. This platform is employed to measure mass flow rates for a 3 µm internally fed emitter using the ionic liquid EMI-BF4 at emission currents ranging from 100 to 500 nA. At all currents, there is a major discrepancy between the direct and indirect flow rates, with the direct value being greater in almost all cases. Component efficiency estimates confirm that mass utilization is the most significant performance loss at low flow rates when the source is operating in the pure-ion regime.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom 14-19 September 2025
</description>
<pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162885</guid>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications</title>
<link>https://hdl.handle.net/1721.1/162883</link>
<description>Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications
Henderson, Theia; Karger, David; Clark, David D
Most social applications, from Twitter to Wikipedia, have rigid one-size-fits-all designs, but building new social applications is technically challenging and results in applications that are siloed away from existing communities. We present Graffiti, a system that can be used to build, with relative ease, a wide variety of personalized social applications that also interoperate with each other. People can freely move between a plurality of designs—each with its own aesthetic, feature set, and moderation—all without losing their friends or data.&#13;
Our concept of total reification makes it possible for seemingly contradictory designs, including conflicting moderation rules, to interoperate. Conversely, our concept of channels prevents interoperation from occurring by accident, avoiding context collapse.&#13;
Graffiti applications interact through a minimal client-side API, which we show admits at least two decentralized implementations. Above the API, we built a Vue plugin, which we use to develop applications similar to Twitter, Messenger, and Wikipedia using only client-side code. Our case studies explore how these and other novel applications interoperate, as well as the broader ecosystem that Graffiti enables.
UIST ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162883</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>PeelFab: Designing 3D Printed Peelable Structures for 3D Masking</title>
<link>https://hdl.handle.net/1721.1/162882</link>
<description>PeelFab: Designing 3D Printed Peelable Structures for 3D Masking
Ni, Yongbo; Ji, Junzhe; Yang, Yue; Chen, Chuang; Li, Jiaji; Tao, Ye; Wang, Guanyun
Desktop 3D printers are capable of fabricating structures with complex geometries, thus enhancing the functionality and interactivity of printed objects. Peelable structures represent an important application in 3D printing, as the supports and brims demonstrate, offering more possibilities for printing. However, existing tools are limited in their ability to effectively assist users in designing and customizing such structures, and their broader application potential remains underexplored. In traditional artistic practices, masks also exhibit the characteristics of a peelable design and serve as creative tools. However, within the field of human-computer interaction, no prior work has investigated the use of 3D-printed peelable structures for mask creation. To address this gap, we present PeelFab, a fabrication method and accompanying design tool for generating custom peelable structures directly within modeling software. Through the use of a built-in structure library and an interactive interface, users can create peelable structures based on points, lines, and surfaces, allowing the design of various 3D printed masking geometries. We also demonstrate several application cases that showcase the potential of 3D-printed masking using peelable structures.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162882</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Asynchronous Training of Mixed-Role Human Actors in a Partially Observable Environment</title>
<link>https://hdl.handle.net/1721.1/162881</link>
<description>Asynchronous Training of Mixed-Role Human Actors in a Partially Observable Environment
Chestnut Chang, Kimberlee; Jensen, Reed; Paleja, Rohan; Polk, Sam; Seater, Rob; Steilberg, Jackson; Schiefelbein, Curran; Scheldrup, Melissa; Gombolay, Matthew; Ramirez, Mabel
In cooperative training, humans within a team coordinate on complex tasks, building mental models of their teammates and learning to adapt to teammates' actions in real-time. To reduce the often prohibitive scheduling constraints associated with cooperative training, this article introduces a paradigm for cooperative asynchronous training of human teams in which trainees practice coordination with autonomous teammates rather than humans. We introduce a novel experimental design for evaluating autonomous teammates for use as training partners in cooperative training. We apply this design to a human-subjects experiment where humans are trained with either another human or an autonomous teammate and are evaluated with a new human subject in a new, partially observable, cooperative game developed for this study. Importantly, we employ an unsupervised sequential clustering methodology to partition teammate trajectories from demonstrations performed in the experiment to form a smaller number of training conditions. This results in a simpler experiment design, enabling us to conduct a complex cooperative training human-subjects study in a reasonable amount of time. Through a demonstration of the proposed experimental design, we provide takeaways and design recommendations for future research in the development of cooperative asynchronous training systems utilizing robot surrogates for human teammates.
</description>
<pubDate>Wed, 17 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162881</guid>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Generic Pan Tilt: Open Source Motion Control Platform for Entertainment and Research</title>
<link>https://hdl.handle.net/1721.1/162880</link>
<description>Generic Pan Tilt: Open Source Motion Control Platform for Entertainment and Research
Naseck, Perry; Mayton, Brian; Blanchard, Lancelot; Paradiso, Joseph
We introduce the Generic Pan Tilt, an open-source, two-axis motion control platform designed for use in entertainment, art, and research. Combining affordable off-the-shelf hardware, 3D-printed parts, and custom electronics, the system enables rapid development and flexible integration of kinetic movement into small-scale performances and installations. The Generic Pan Tilt adheres to industry standards for connectivity and control, supporting DMX512-A and modular payloads. Demonstrated in a live AI-augmented musical performance, the platform allows for new music and performance interfaces that feature expressive motion.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162880</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Sound2Haptic: A Toolkit for Portable Multi-Channel Haptic Integration Across Multiple Form Factors and Devices</title>
<link>https://hdl.handle.net/1721.1/162879</link>
<description>Sound2Haptic: A Toolkit for Portable Multi-Channel Haptic Integration Across Multiple Form Factors and Devices
Chin, Sam; Fitz-Gibbon, Emmie; Huang, Bingjian; Tims, Carter; Orzech, Gabrielle; Thoo, Yong-Joon; Paradiso, Joseph
Existing multi-actuator vibrotactile systems often require external hardware such as sound cards and haptic amplifiers, which limits portability and creates complexity for non-technical users. This presents a significant barrier for researchers and designers in fields like human factors and healthcare. We present Sound2Haptic, a vibrotactile toolkit that integrates a sound card and haptic amplifiers into a single device. The toolkit connects to laptops, phones, and XR headsets, enabling portable eight-channel multi-actuator interaction accessible to non-technical users. The toolkit features a novel mechanical design that reduces cross-actuator interference and enables form factor customization. We demonstrate the toolkit’s functional efficacy through psychophysical evaluation across three form factors, and its ease of use through three case studies: (1) a clinical application for tinnitus research, (2) a human factors study on speech prosody conducted with a human factors researcher, and (3) an exploration of spatial neglect rehabilitation using XR and haptics.
UIST Adjunct ’25, Busan, Republic of Korea
</description>
<pubDate>Sat, 27 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162879</guid>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of MOF linker rotation and functionalization on methane uptake and diffusion</title>
<link>https://hdl.handle.net/1721.1/162877</link>
<description>Effects of MOF linker rotation and functionalization on methane uptake and diffusion
Yue, Shuwen; Oh, Changhwan; Nandy, Aditya; Terrones, Gianmarco G; Kulik, Heather J
The flexible degrees of freedom in metal–organic frameworks (MOFs) can have significant effects on guest molecule behavior. However, in the majority of studies applying molecular simulations to MOFs, the framework is assumed to be rigid in order to minimize computational cost. Here we assess the significance of this assumption on a representative example of methane uptake and diffusion in UiO-66. We introduce an open-source code to modify MOFs through functionalization and linker rotation and we perform Grand Canonical Monte Carlo and molecular dynamics simulations of methane in each of the functionalized and linker-rotated derivatives of UiO-66. We find that linker rotation moderately influences methane uptake and significantly influences methane diffusion. Our assessment provides ranges of property values that serve as measures of uncertainty of these two properties associated with linker rotation. We further determine that void volume fraction and minimum pore size are the features that govern methane uptake and diffusion, respectively. These findings illustrate the impact of linker rotation on MOFs and provide design principles to guide future investigations.
</description>
<pubDate>Mon, 02 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162877</guid>
<dc:date>2023-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and Ring-Opening Metathesis Polymerization of a Strained trans-Silacycloheptene and Single-Molecule Mechanics of Its Polymer</title>
<link>https://hdl.handle.net/1721.1/162876</link>
<description>Synthesis and Ring-Opening Metathesis Polymerization of a Strained trans-Silacycloheptene and Single-Molecule Mechanics of Its Polymer
Wakefield, Herbert; Kevlishvili, Ilia; Wentz, Kelsie E; Yao, Yunxin; Kouznetsova, Tatiana B; Melvin, Sophia J; Ambrosius, Em G; Herzog-Arbeitman, Abraham; Siegler, Maxime A; Johnson, Jeremiah A; Craig, Stephen L; Kulik, Heather J; Klausen, Rebekka S
The cis- and trans-isomers of a silacycloheptene were selectively synthesized by the alkylation of a silyl dianion, a novel approach to strained cycloalkenes. The trans-silacycloheptene (trans-SiCH) was significantly more strained than the cis isomer, as predicted by quantum chemical calculations and confirmed by crystallographic signatures of a twisted alkene. Each isomer exhibited distinct reactivity toward ring-opening metathesis polymerization (ROMP), where only trans-SiCH afforded high-molar-mass polymer under enthalpy-driven ROMP. Hypothesizing that the introduction of silicon might result in increased molecular compliance at large extensions, we compared poly(trans-SiCH) to organic polymers by single-molecule force spectroscopy (SMFS). Force-extension curves from SMFS showed that poly(trans-SiCH) is more easily overstretched than two carbon-based analogues, polycyclooctene and polybutadiene, with stretching constants that agree well with the results of computational simulations.
</description>
<pubDate>Wed, 05 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162876</guid>
<dc:date>2023-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>SESAMI APP: An Accessible Interface for Surface AreaCalculation of Materials from Adsorption Isotherms</title>
<link>https://hdl.handle.net/1721.1/162875</link>
<description>SESAMI APP: An Accessible Interface for Surface AreaCalculation of Materials from Adsorption Isotherms
Terrones, Gianmarco G; Chen, Yu; Datar, Archit; Lin, Li-Chiang; Kulik, Heather J; Chung, Yongchul G
Accurate characterization of surface area is critical for understanding a material’s properties and performance. The most widely used approach to calculate a material’s gravimetric surface area, i.e. surface area per unit mass, is the Brunauer-Emmett-Teller (BET) method (Brunauer et al., 1938). The BET method computes the surface area of a material given the adsorption isotherm of a probe gas (i.e. N2 or Ar) in that material. Many researchers either obtain the BET area from commercial software that comes with measurement equipment, or perform the analyses manually on a spreadsheet, which is time-consuming and nearly impossible for some types of isotherms. Furthermore, these two approaches lead to large variability in BET-calculated areas (Osterrieth et al., 2022). These challenges have motivated the development of programs for the automated and standardized calculation of BET areas (Datar et al., 2020; Iacomi &amp; Llewellyn, 2019; Osterrieth et al., 2022; Sadeghi et al., 2020; Sinha et al., 2019).
</description>
<pubDate>Fri, 09 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162875</guid>
<dc:date>2023-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Seasonal Salinification of the US Northeast Continental Shelf Cold Pool Driven by Imbalance Between Cross‐Shelf Fluxes and Vertical Mixing</title>
<link>https://hdl.handle.net/1721.1/162874</link>
<description>Seasonal Salinification of the US Northeast Continental Shelf Cold Pool Driven by Imbalance Between Cross‐Shelf Fluxes and Vertical Mixing
Taenzer, Lukas L.; Chen, Ke; Plueddemann, Albert J.; Gawarkiewicz, Glen G.
The US Northeast continental shelf “cold pool” comprises winter‐cooled Shelf Water that is trapped below the warm surface layer during the stratified season. The regional ecosystem relies on the preservation of winter temperatures within the cold pool throughout the year. Here, we present first evidence of a significant increase in the cold pool's salt content on the US Northeast continental shelf throughout the stratified season, suggesting that shelfbreak exchange contributes strongly to the seasonal erosion of the cold pool. Cold pool salinification rates of 0.18 PSU/month remain steady throughout the stratified season, leading to salinity differences of over 1 PSU between April and October. A cold‐pool salinity budget reveals that the observed salinification is caused by an imbalance between cross‐shelf salt fluxes, which deposit salt into the cold pool at all times of year, and the strong seasonal cycle of vertical mixing. During the stratified season, vertical mixing is inhibited and no longer counteracts the cross‐shelf flux, leading to net salinification of the cold pool over the summer. Along‐shelf freshwater advection from upstream is only present in the fall and contributes some additional freshening to shut down the salinification trend. Seasonal variability in the position of the US Northeast shelfbreak front is too small and out of phase to contribute to the salinity increase. The strong relationship between the seasonal cycle of cold pool modification and seasonal stratification points toward the importance of the timing of spring re‐ and fall de‐stratification on near‐bottom continental shelf temperature and salinity.
</description>
<pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162874</guid>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Tailoring dynamic hydrogels by controlling associative exchange rates</title>
<link>https://hdl.handle.net/1721.1/162873</link>
<description>Tailoring dynamic hydrogels by controlling associative exchange rates
Zhang, Vivian; Accardo, Joseph V; Kevlishvili, Ilia; Woods, Eliot F; Chapman, Steven J; Eckdahl, Christopher T; Stern, Charlotte L; Kulik, Heather J; Kalow, Julia A
Dithioalkylidenes are a newly developed class of conjugate acceptors that undergo thiol exchange via an associative mechanism, enabling decoupling of key material properties for sustainability, biomedical, and sensing applications. Here, we show that the exchange rate is highly sensitive to the structure of the acceptor and tunable over four orders of magnitude in aqueous environments. Cyclic acceptors exchange rapidly, from 0.95 to 15.6 M−1s−1, whereas acyclic acceptors exchange between 3.77 × 10−3 and 2.17 × 10−2 M−1s−1. Computational, spectroscopic, and structural data suggest that cyclic acceptors are more reactive than their acyclic counterparts because of resonance stabilization of the tetrahedral exchange intermediate. We parametrize molecular reactivity with respect to computed descriptors of the electrophilic site and leverage this insight to design a compound with intermediate characteristics. Lastly, we incorporate this dynamic bond into hydrogels and demonstrate that the characteristic stress relaxation time (τ) is directly proportional to molecular kex.
</description>
<pubDate>Thu, 10 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162873</guid>
<dc:date>2023-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneity of continuous glucose monitoring features and their clinical associations in a type 2 diabetes population</title>
<link>https://hdl.handle.net/1721.1/162872</link>
<description>Heterogeneity of continuous glucose monitoring features and their clinical associations in a type 2 diabetes population
Healey, Elizabeth; Morato, Carlos; Murillo, Jaime; Kohane, Isaac
Objective: Data from continuous glucose monitors (CGM) enable the extraction of features descriptive of glycemic dynamics that may provide insight into underlying health status. In this work, we analyse CGM data from a large population of individuals with type 2 diabetes (T2D) and study the association of features with clinical covariates. Methods: We retrospectively analysed CGM and electronic health record data from a large population of individuals with T2D. We extracted 25 daily CGM features for each individual over a 30-day period and performed statistical association tests on the features and clinical findings from medical claims data and laboratory records. Results: Our final analysis was performed on 6533 individuals. When clustering the CGM features across the population of individuals with T2D, four distinct clusters of features emerged. Further, the CGM features had heterogeneous discriminatory power with clinical covariates, including laboratory values and the presence of claims for diabetic complications. Features related to glycemic variability, such as coefficient of variation, showed markedly lower p-values in many association tests for the presence of diabetic complications than mean glucose. Conclusions: In examining the characteristics of different features extracted from CGM data in a large population of individuals with T2D, we found that the features were heterogeneously associated with different clinical comorbidities related to diabetes. This work motivates further research to investigate the relationship between CGM features and health outcomes in T2D to enable precision medicine.
</description>
<pubDate>Mon, 19 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162872</guid>
<dc:date>2025-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Development and Characterization of Electrochemically Machined Tungsten Extractor Electrodes for Electrospray Thrusters</title>
<link>https://hdl.handle.net/1721.1/162871</link>
<description>Development and Characterization of Electrochemically Machined Tungsten Extractor Electrodes for Electrospray Thrusters
Gale, Alex E.; Shaik, Saba Z.; Lozano, Paulo C.
This work explores electrochemically machined (ECM) tungsten extractors as an alternative to microfabricated silicon, in order to benefit from manufacturability, improved ion optics through chamfered apertures, reduced secondary electron emission, and the potential for thinner geometries. A custom ECM fabrication process employing a linearly oscillating cathodic paddle in sodium hydroxide was designed to manufacture extractors and increase aperture uniformity. Using through-mask ECM, a 76.2 µm thick tungsten extractor was fabricated, achieving a mean aperture diameter of 368 µm with a standard deviation of 29 µm. The extractor was integrated with a modified version of the MIT ion electrospray propulsion system (iEPS) to form a complete thruster. Characterization included current-voltage sweeps, angular beam scans, and retarding potential analysis. Measured efficiencies are comparable to previous iEPS thrusters, with intercepted currents ranging approximately between 1–2% of emitted current. These results demonstrate that ECM tungsten extractors can deliver at least similar performance to existing designs while offering improved manufacturability and scalability for future electrospray propulsion systems.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom, 14–19 September 2025
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162871</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cytosolic Delivery of Functional Ubiquitin</title>
<link>https://hdl.handle.net/1721.1/162870</link>
<description>Cytosolic Delivery of Functional Ubiquitin
Giancola, JoLynn B; Okon, Aniekan; Li, Yanfeng; Strieter, Eric R; Raines, Ronald T
The proteostasis network involves complex protein signaling cascades. The tagging of proteins with ubiquitin is central to the degradation of cellular proteins, but understanding its exact role in processing proteins is complicated by the complexity and extent of its utilization within cells. Here, we describe the application of a traceless protein delivery strategy to effect the uptake of exogenous ubiquitin into the cytosol of human cells. We find that coadministration of the endosomolytic peptides L17E and, especially, L17ER4 provides not only cytosolic access to ubiquitin but also its functional incorporation into endogenous proteins. By enabling the study of semisynthetic ubiquitin variants in the human cytosol, this strategy could advance the field of ubiquitin biology.
</description>
<pubDate>Thu, 08 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162870</guid>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Discrete Simulations of Fluid‐Driven Transport of Naturally Shaped Sediment Particles</title>
<link>https://hdl.handle.net/1721.1/162869</link>
<description>Discrete Simulations of Fluid‐Driven Transport of Naturally Shaped Sediment Particles
Zhang, Qiong; Deal, Eric; Perron, J Taylor; Venditti, Jeremy G; Benavides, Santiago J; Rushlow, Matthew; Kamrin, Ken
The particles in natural bedload transport processes are usually aspherical and span a range of shapes and sizes, which is challenging to represent in numerical simulations. We assemble existing numerical methods to simulate the transport of natural gravel (NG). Starting with computerized tomographic scans of natural grains, our method approximates the shapes of these grains by “gluing” spheres (SP) of different sizes together with overlaps. The conglomerated SP move using a Discrete Element Method which is coupled with a Lattice Boltzmann Method fluid solver, forming the first complete workflow from particle shape measurement to high‐resolution simulations with hundreds of distinct shapes. The simulations are quantitatively benchmarked by flume experiments. Beyond the flume, in a more generalized wide wall‐free geometry, the numerical tool is used to further test a recently proposed modified sediment transport relation, which takes particle shape effects into account, including the competition between hydrodynamic drag and material friction. Unlike a physical experiment, our simulations allow us to vary the hydrodynamic drag coefficient of the NG independently of the material friction. The results support the modified sediment transport relation. The simulations also provide insights into particle‐level kinematics, such as particle orientations. Though particles below the bed surface prefer to orient with their shortest axes perpendicular to the bed surface, with a decaying tendency with increasing height above the bed surface, the orientational preferences in transport processes are much weaker than those in settling processes. NG rotates relatively freely during bedload transport.
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162869</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Finite Element Modeling of Abdominal Near‐Infrared Spectroscopy for Infant Splanchnic Oximetry</title>
<link>https://hdl.handle.net/1721.1/162868</link>
<description>Finite Element Modeling of Abdominal Near‐Infrared Spectroscopy for Infant Splanchnic Oximetry
Emani, Vishnu S; Ozturk, Caglar; Singh, Manisha; Long, Carly; Duffy, Summer; Sen, Danielle Gottlieb; Roche, Ellen T; Baker, Wesley B
Abdominal near-infrared spectroscopy (NIRS) holds promise for early detection of necrotizing enterocolitis and other infant pathologies prior to irreversible injury, but the optimal NIRS sensor design is not well defined. In this study, we develop and demonstrate a computational method to evaluate NIRS sensor designs for infant splanchnic oximetry. We used a finite element (FE) approach to simulate near-infrared light transport through a 3D model of the infant abdomen constructed from computed tomography (CT) images. The simulations enable the measurement of the contrast-to-noise ratio (CNR) for splanchnic oximetry, given a specific NIRS sensor design. A key design criterion is the sensor's source–detector distance (SDD). We calculated the CNR as a function of SDD for two sensor positions near the umbilicus. Contrast-to-noise was maximal at SDDs between 4 and 5 cm, and comparable between sensor positions. Sensitivity to intestinal tissue also exceeded sensitivity to superficial adipose tissue in the 4–5 cm range. FE modeling of abdominal NIRS signals provides a means for rapid and thorough evaluation of sensor designs for infant splanchnic oximetry. By informing optimal NIRS sensor design, the computational methods presented here can improve the reliability and applicability of infant splanchnic oximetry.
</description>
<pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162868</guid>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>A Bayesian Proof of the Spread Lemma</title>
<link>https://hdl.handle.net/1721.1/162867</link>
<description>A Bayesian Proof of the Spread Lemma
Mossel, Elchanan; Niles‐Weed, Jonathan; Sun, Nike; Zadik, Ilias
A key set-theoretic “spread” lemma has been central to two recent celebrated results in combinatorics: the recent improvements on the sunflower conjecture by Alweiss, Lovett, Wu, and Zhang; and the proof of the fractional Kahn–Kalai conjecture by Frankston, Kahn, Narayanan, and Park. In this work, we present a new proof of the spread lemma that—perhaps surprisingly—takes advantage of an explicit recasting of the proof in the language of Bayesian inference. We show that from this viewpoint the reasoning proceeds in a straightforward and principled probabilistic manner, leading to a truncated second moment calculation which concludes the proof.
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162867</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Large-scale comparison of Fe and Ru polyolefin C–H activation catalysts</title>
<link>https://hdl.handle.net/1721.1/162866</link>
<description>Large-scale comparison of Fe and Ru polyolefin C–H activation catalysts
Adamji, Husain; Kevlishvili, Ilia; Nandy, Aditya; Román-Leshkov, Yuriy; Kulik, Heather J
We performed a large-scale density functional theory comparison of polyolefin C–H hydroxylation trends across over 200 Fe and Ru catalysts that are identical except for their metal centers for the radical-rebound conversion of propane to propanol. We observed a strong spin-state dependence: higher-spin states had more favorable metal-oxo formation and isopropanol release in Ru catalysts, while hydrogen atom transfer (HAT) was more favorable in Fe catalysts. While the widely studied metal-oxo formation vs. HAT linear free-energy relationship held for Ru, it was more easily disrupted for Fe. Ru catalysts have a spin-forbidden C–H hydroxylation pathway, while Fe catalysts favor a spin-allowed, intermediate-spin pathway. Calculation of reaction coordinates on representative catalysts corroborated these spin–reactivity trends and showed comparable energetic spans for Fe and Ru analogues, as well as strong Brønsted–Evans–Polanyi relationships for both the metal-oxo formation and HAT steps, motivating expanded study of Fe catalysts.
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162866</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Do Differences in Electronic Structure Affect the Use of Vanadium Intermediates as Mimics in Nonheme Iron Hydroxylases?</title>
<link>https://hdl.handle.net/1721.1/162865</link>
<description>How Do Differences in Electronic Structure Affect the Use of Vanadium Intermediates as Mimics in Nonheme Iron Hydroxylases?
Vennelakanti, Vyshnavi; Jeon, Mugyeom; Kulik, Heather J
We study active-site models of nonheme iron hydroxylases and their vanadium-based mimics using density functional theory to determine if vanadyl is a faithful structural mimic. We identify crucial structural and energetic differences between ferryl and vanadyl isomers owing to the differences in their ground electronic states, i.e., high spin (HS) for Fe and low spin (LS) for V. For the succinate cofactor bound to the ferryl intermediate, we predict facile interconversion between monodentate and bidentate coordination isomers for ferryl species but difficult rearrangement for vanadyl mimics. We study isomerization of the oxo intermediate between axial and equatorial positions and find the ferryl potential energy surface to be characterized by a large barrier of ca. 10 kcal/mol that is completely absent for the vanadyl mimic. This analysis reveals even starker contrasts between Fe and V in hydroxylases than those observed for this metal substitution in nonheme halogenases. Analysis of the relative bond strengths of coordinating carboxylate ligands for Fe and V reveals that all of the ligands show stronger binding to V than Fe owing to the LS ground state of V in contrast to the HS ground state of Fe, highlighting the limitations of vanadyl mimics of native nonheme iron hydroxylases.
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162865</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lanmodulin‐Decorated Microbes for Efficient Lanthanide Recovery</title>
<link>https://hdl.handle.net/1721.1/162862</link>
<description>Lanmodulin‐Decorated Microbes for Efficient Lanthanide Recovery
Gut, Melanie; Wilhelm, Tatum; Beniston, Olivia; Ogundipe, Safiyyah; Kuo, Chao‐Chi; Nguyen, Kristine; Furst, Ariel
Rare earth elements (REEs) are essential for many clean energy technologies. Yet, they are a limited resource currently obtained through carbon-intensive mining. Here, bio-scaffolded proteins serve as simple, effective materials for the recovery of REEs. Surface expression of the protein lanmodulin (LanM) on E. coli, followed by freeze-drying of the microbes, yields a displayed protein material for REE recovery. Four REE cations (Y3+, La3+, Gd3+, and Tb3+) are captured efficiently, with over 80% recovery even in the presence of competitive ions at one-hundred-fold excess. Moreover, these materials are readily integrated into a filter with high capture capacity (12 mg g−1 dry cell weight) for the selective isolation and recovery of REEs from complex matrices. Further, the proteins in the filter remain stable over ten bind-and-release cycles and a week of storage. To improve the deployability of this filter material, a simple colorimetric assay with the dye alizarin-3-methyliminodiacetic acid is incorporated. The assay can be performed in under 5 min, enabling rapid monitoring of REE recovery and filter efficiency. Overall, this low-cost, robust material will enable environmentally friendly recycling and recovery of critical elements.
</description>
<pubDate>Thu, 16 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162862</guid>
<dc:date>2025-01-16T00:00:00Z</dc:date>
</item>
<item>
<title>Exchange Bias in La0.67Sr0.33MnO3/YFeO3 Ferromagnet/Antiferromagnet Multilayer Heterostructures</title>
<link>https://hdl.handle.net/1721.1/162861</link>
<description>Exchange Bias in La0.67Sr0.33MnO3/YFeO3 Ferromagnet/Antiferromagnet Multilayer Heterostructures
Fourmont, Paul; Cho, Eunsoo; Cloutier, Sylvain G; Ross, Caroline A
Exchange bias (EB), manifested as a hysteresis-loop offset after field-cooling, is demonstrated in perovskite-structured ferromagnet/antiferromagnet (La0.67Sr0.33MnO3/YFeO3)n heterostructures grown on (100) SrTiO3 substrates. Bilayer samples show an EB of 306 Oe at 50 K, whereas multilayers with five layers exhibit an exchange bias of up to 424 Oe at 50 K. A spin valve consisting of La0.67Sr0.33MnO3/SrTiO3/La0.67Sr0.33MnO3/YFeO3 shows stable remanent configurations resulting from pinning of the upper La0.67Sr0.33MnO3 layer by the YFeO3. In contrast, EB is not observed on (111)-oriented SrTiO3 substrates due to interface roughening. These results demonstrate YFeO3 as an alternative orthoferrite antiferromagnet to BiFeO3 and LaFeO3 for incorporation into exchange-biased heterostructures.
</description>
<pubDate>Sun, 13 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162861</guid>
<dc:date>2025-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Electrochemical Response and Device Speed in Diketopyrrolopyrrole/PEO Composite Channels</title>
<link>https://hdl.handle.net/1721.1/162860</link>
<description>Enhanced Electrochemical Response and Device Speed in Diketopyrrolopyrrole/PEO Composite Channels
Cunin, Camille E; Winther, Sara; Matthews, James R; He, Mingqian; Gumyusenge, Aristide
Achieving efficient charge conduction in organic electrochemical transistor (OECT) channel materials requires a delicate balance between electronic conduction and ion uptake. Common approaches to this challenge focus on tethering hydrophilic side chains to conjugated backbones, often resulting in complex synthetic routes. Herein, an alternative strategy is presented using composite mixed-conductive materials. Specifically, polyethylene oxide (PEO), a hydrophilic polymer, and a diketopyrrolopyrrole-based semiconductor, renowned for electronic conduction and processability, are used in varying ratios to form composite films with tunable mixed conduction and enhanced OECT performance. The effect of incorporating PEO on the composite’s morphology and OECT performance in both aqueous and non-aqueous electrolytes is investigated. At the nanoscale, PEO is found to enhance not only channel hydrophilicity and ion uptake but also electrochemical gating speed, leading to improved OECT performance. These enhancements in electrochemical performance are correlated with the morphological properties of the composite via structural and in-situ spectro-electrochemical characterizations. Furthermore, the composite’s response is found to vary with the electrolyte environment: in organic electrolytes such as 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMIM-TFSI), it exhibits high-speed performance suitable for neuromorphic applications, while in aqueous electrolytes, it achieves robust ion uptake ideal for bioelectronics. These findings highlight the potential of composite designs for optimized OECT functionality across applications.
</description>
<pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162860</guid>
<dc:date>2025-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis, characterization, and interfacial adhesion of titania iodine‐doped nanotubes architectures on additively manufactured Ti‐6Al‐4V implant</title>
<link>https://hdl.handle.net/1721.1/162859</link>
<description>Synthesis, characterization, and interfacial adhesion of titania iodine‐doped nanotubes architectures on additively manufactured Ti‐6Al‐4V implant
Taweekitikul, P.; Aliyu, A. A.; Decha‐Umphai, D.; Tantavisut, S.; Khamwannah, J.; Puncreobutr, C.; Lohwongwatana, B.
This study aimed to synthesize, characterize, and evaluate the adhesion strength of titania nanotubes and iodine-doped titania nanotubes (I-titania nanotubes) architectures on the additively manufactured Ti-6Al-4V (Ti64) implant surface. The titania nanotubes and I-titania nanotubes were synthesized through two stages of electrochemical anodization, whereby titania nanotubes are anodically fabricated through a conventional approach and then modified by replacing the ethylene glycol electrolyte with potassium iodide solution. The characterization results revealed the formation of α-Ti, β-Ti, and titanium iodide (TiI2) phases on the titania nanotubes and I-titania nanotubes surfaces. The morphology of the titania nanotubes exhibits a consistent diameter, evenly distributed, well-ordered array, and densely packed nanotubular structures. Formation of water-soluble fluoride-rich [TiF6]2− complexes on the inner titania nanotubes surface and incessant nanotube sidewall etching resulted in poor interfacial titania nanotubes adhesion to the titanium-substrate surface. Iodine doping on the titania nanotubes is believed to reduce the [TiF6]2− complexes accumulation and the titania nanotubes sidewall etching. This facilitates the adhesion and interfacial mechanical anchorage between the titania nanotubes and the surface of the Ti64 implant. The hardness and adhesion strength of the titania nanotubes increased by more than 50%, due to the formation of a hard titanium iodide film at the titania nanotubes/I-titania nanotubes surfaces and interfaces.
</description>
<pubDate>Tue, 18 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162859</guid>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Perfusion‐Based Production of rAAV via an Intensified Transient Transfection Process</title>
<link>https://hdl.handle.net/1721.1/162858</link>
<description>Perfusion‐Based Production of rAAV via an Intensified Transient Transfection Process
Nguyen, Tam NT; Park, Damdae; Canova, Christopher T; Sangerman, Jose; Srinivasan, Prasanna; Ou, Rui Wen; Barone, Paul W; Neufeld, Caleb; Wolfrum, Jacqueline M; Springs, Stacy L; Sinskey, Anthony J; Braatz, Richard D
Increasing demand for recombinant adeno‐associated virus (rAAV)‐based gene therapies necessitates increased manufacturing production. Transient transfection of mammalian cells remains the most commonly used method to produce clinical‐grade rAAVs due to its ease of implementation. However, transient transfection processes are often characterized by suboptimal yields and low fractions of full‐to‐total capsids, both of which contribute to the high cost of goods of many rAAV‐based gene therapies. Our previously developed mechanistic model for rAAV2/5 production indicated that the inadequate capsid filling is due to a temporal misalignment between viral DNA replication and capsid synthesis within the cells and the repression of later‐phase capsid formation by Rep proteins. We experimentally validated this prediction and showed that performing multiple, time‐separated doses of plasmid increases the production of rAAV. In this study, we use the insights generated by our mechanistic model to develop an intensified process for rAAV production that combines perfusion with high cell density re‐transfection. We demonstrate that performing multiple, time‐separated doses at high cell density boosts both cell‐specific and volumetric productivity and improves plasmid utilization when compared to a single bolus at standard operating conditions. Our results establish a new paradigm for continuously manufacturing rAAV via transient transfection that improves productivity and reduces manufacturing costs.
</description>
<pubDate>Tue, 18 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162858</guid>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>A 2D/3D Heterostructure Perovskite Solar Cell with a Phase‐Pure and Pristine 2D Layer</title>
<link>https://hdl.handle.net/1721.1/162857</link>
<description>A 2D/3D Heterostructure Perovskite Solar Cell with a Phase‐Pure and Pristine 2D Layer
Shih, Meng‐Chen; Tan, Shaun; Lu, Yongli; Kodalle, Tim; Lee, Do‐Kyoung; Dong, Yifan; Larson, Bryon W; Park, Soyeon; Zhang, Ruiqi; Grotevent, Matthias J; Sverko, Tara; Zhu, Hua; Lin, Yu‐Kuan; Sutter‐Fella, Carolin M; Zhu, Kai; Beard, Matthew C; Bulović, Vladimir; Bawendi, Moungi G
Interface engineering plays a critical role in advancing the performance of perovskite solar cells. As such, 2D/3D perovskite heterostructures are of particular interest due to their optoelectrical properties and their further potential improvements. However, for conventional solution-processed 2D perovskites grown on an underlying 3D perovskite, the reaction stoichiometry is normally unbalanced with excess precursors. Moreover, the formed 2D perovskite is impure, leading to unfavorable energy band alignment at the interface. Here a simple method is presented that solves both issues simultaneously. The 2D formation reaction is first taken to completion, fully consuming excess PbI2. Then, isopropanol is utilized to remove excess organic ligands, control the 2D perovskite thickness, and obtain a phase-pure, n = 2, 2D perovskite. The outcome is a pristine (without residual 2D precursors) and phase-pure 2D perovskite heterostructure with improved surface passivation and charge carrier extraction compared to the conventional solution process. PSCs incorporating this treatment demonstrate a notable improvement in both stability and power conversion efficiency, with negligible hysteresis, compared to the conventional process.
</description>
<pubDate>Tue, 18 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162857</guid>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Living in the Paraindustrial</title>
<link>https://hdl.handle.net/1721.1/162856</link>
<description>Living in the Paraindustrial
Walley, Christine J
This article is an autoethnographic exploration of life in the former steel mill region of Southeast Chicago in the ‘Rust Belt’ of the Midwestern United States. It challenges assumptions about deindustrialization that depict one discrete historical stage following another (i.e., the postindustrial following the industrial) in favor of what is here defined as the ‘paraindustrial’ (or a setting in which active industry with minimal numbers of workers exists alongside defunct industry and toxic brownfields). This account centers upon the experiences of women who have too often been neglected in research on deindustrialized regions. In particular, it focuses on the author’s elderly mother Arlene, who has spent her entire life in Southeast Chicago. From her wheelchair on a backyard porch, Arlene observes this damaged landscape built out of the former Calumet wetlands. The article considers the relationships of care, centered around women, that continue to bind together and support the living despite decades of economic and environmental rupture and degradation. Utilizing the concept of a ‘palimpsest,’ the piece considers how different historical, ecological, and social realities and temporalities are both layered on top of each other and intermingle to create the complex landscape found in this former wetland region.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162856</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulated radiation levels and patterns of MRI without a Faraday shielded room</title>
<link>https://hdl.handle.net/1721.1/162855</link>
<description>Simulated radiation levels and patterns of MRI without a Faraday shielded room
Kazemivalipour, Ehsan; Guerin, Bastien; Wald, Lawrence L.
Purpose: We characterize electromagnetic (EM) radiation patterns and levels in conventional MRI systems as a function of field strength and load symmetry, providing a framework for mitigation strategies allowing operation without a shielded room.
Methods: We simulated the far-field radiation pattern and fields at a 10 m radius (|E|10m and |B|10m) for a solenoidal superconducting MRI with a body birdcage coil operated between 0.25T and 6.5T. Five load configurations probed the impact of load symmetry, ranging from a sphere to a body load (least symmetric). We also assessed simple layered EM absorbers at the bore-ends.
Results: All configurations exceeded regulatory limits for realistic transmit levels. At 1.5T, a 300 V rms RF-pulse is 2700-fold the |E|10m limit. Field strength and load symmetry strongly modulate radiation patterns and levels. The radiated power increased by more than four orders of magnitude from 0.25T to 6.5T. Spherical load radiation transitioned from a peak gain at the bore-ends (0.25–0.5T) to a donut-shaped pattern, suggesting current loops around the bore (1T–1.5T), back to bore-axis-directed gain, suggesting propagating waves along the bore (2T–6.5T). Transition patterns were seen between these regimes: uniform radiation at 0.75T and a combined donut/bore-directed pattern at 1.75T. Load asymmetry increased both the strength and asymmetry of the pattern, with the body load having the highest and least symmetric radiation, the legs facilitating wave propagation at high fields. A simple optimized layered absorber at the scanner’s service-end reduced 3T peak radiation by 11 dB.
Conclusion: Radiation from unshielded scanners far exceeds regulatory limits, particularly at high field. Mitigation strategies must address load symmetry, field strength, and wave effects.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162855</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of tight-fitting 7T parallel-transmit head array designs using excitation uniformity and local specific absorption rate metrics</title>
<link>https://hdl.handle.net/1721.1/162854</link>
<description>Comparison of tight-fitting 7T parallel-transmit head array designs using excitation uniformity and local specific absorption rate metrics
Kazemivalipour, Ehsan; Wald, Lawrence L.; Guerin, Bastien
Purpose: We model the performance of parallel transmission (pTx) arrays with 8, 16, 24, and 32 channels and varying loop sizes built on a close-fitting helmet for brain imaging at 7 T and compare their local specific absorption rate (SAR) and flip-angle performances to that of a birdcage coil (used as a baseline) and cylindrical 8-channel and 16-channel pTx coils (single-row and dual-row).&#13;
Methods: We use the co-simulation approach along with MATLAB scripting for batch-mode simulation of the coils. For each coil, we extracted B1+ maps and SAR matrices, which we compressed using the virtual observation points algorithm, and designed slice-selective RF shimming pTx pulses with multiple local SAR and peak power constraints to generate L-curves in the transverse, coronal, and sagittal orientations.&#13;
Results: Helmet designs outperformed cylindrical pTx arrays at a constant number of channels in flip-angle uniformity at a constant local SAR metric: up to 29% for 8-channel arrays and up to 34% for 16-channel arrays, depending on the slice orientation. For all helmet arrays, increasing the loop diameter led to better local SAR versus flip-angle uniformity tradeoffs, although this effect was more pronounced for the 8-channel and 16-channel systems than the 24-channel and 32-channel systems, as the former have more limited degrees of freedom and therefore benefit more from loop-size optimization.&#13;
Conclusion: Helmet pTx arrays significantly outperformed cylindrical arrays with the same number of channels in local SAR and flip-angle uniformity metrics. This improvement was especially pronounced for non-transverse slice excitations. Loop diameter optimization for helmets appears to favor large loops, compatible with nearest-neighbor decoupling by overlap.
</description>
<pubDate>Mon, 06 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162854</guid>
<dc:date>2023-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Electrochemical Properties of Biobased Activated Carbon for Supercapacitors</title>
<link>https://hdl.handle.net/1721.1/162853</link>
<description>Enhanced Electrochemical Properties of Biobased Activated Carbon for Supercapacitors
Zhou, Shengfei; Tai‐Chieh Wan, Charles; Chanut, Nicolas; Brushett, Fikile R; Buehler, Markus J
Supercapacitors are great candidates for energy boosting, power, and memory backup. However, they suffer from low energy density, relatively high cost, and carbon footprint problems due to their electrode materials, such as the commonly used activated carbons (ACs). To prepare better renewable ACs, 11 biomass materials are pretreated with hydrothermal processing and then activated at high temperature with potassium hydroxide (KOH) in the present study. The prepared ACs are characterized by scanning electron microscopy imaging, atomic concentration, specific surface area, electrical conductivity, cyclic voltammetry, and specific capacitance to determine their potential for supercapacitor application. The electrical conductivity reaches 0.47–1.23 S cm−1, and the specific capacitance reaches 250–360 F g−1 (at a current density of 20 A g−1), both much higher than previously reported literature values (conductivity &lt;0.3 S cm−1, capacitance 40–160 F g−1) for biobased ACs, indicating the great potential of our biobased ACs for supercapacitor application.
</description>
<pubDate>Fri, 04 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162853</guid>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Nonlinear Ion Dynamics Enable Spike Timing Dependent Plasticity of Electrochemical Ionic Synapses</title>
<link>https://hdl.handle.net/1721.1/162852</link>
<description>Nonlinear Ion Dynamics Enable Spike Timing Dependent Plasticity of Electrochemical Ionic Synapses
Huang, Mantao; Xu, Longlong; del Alamo, Jesús A; Li, Ju; Yildiz, Bilge
Programmable synaptic devices that can achieve timing-dependent weight updates are key components to implementing energy-efficient spiking neural networks (SNNs). Electrochemical ionic synapses (EIS) enable the programming of weight updates with very low energy consumption and low variability. Here, the strongly nonlinear kinetics of EIS, arising from nonlinear dynamics of ions and charge transfer reactions in solids, are leveraged to implement various forms of spike-timing-dependent plasticity (STDP). In particular, protons are used as the working ion. Different forms of the STDP function are deterministically predicted and emulated by a linear superposition of appropriately designed pre- and post-synaptic neuron signals. Heterogeneous STDP is also demonstrated within the array to capture different learning rules in the same system. STDP timescales are controllable, ranging from milliseconds to nanoseconds. The STDP resulting from EIS has lower variability than other hardware STDP implementations, due to the deterministic and uniform insertion of charge in the tunable channel material. The results indicate that the ion and charge transfer dynamics in EIS can enable bio-plausible synapses for SNN hardware with high energy efficiency, reliability, and throughput.
</description>
<pubDate>Wed, 29 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162852</guid>
<dc:date>2025-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Unraveling Polymer–Ion Interactions in Electrochromic Polymers for their Implementation in Organic Electrochemical Synaptic Devices</title>
<link>https://hdl.handle.net/1721.1/162851</link>
<description>Unraveling Polymer–Ion Interactions in Electrochromic Polymers for their Implementation in Organic Electrochemical Synaptic Devices
Roh, Heejung; Yue, Shuwen; Hu, Hang; Chen, Ke; Kulik, Heather J; Gumyusenge, Aristide
Owing to low-power, fast and highly adaptive operability, as well as scalability, electrochemical random-access memory (ECRAM) technology is one of the most promising approaches for neuromorphic computing based on artificial neural networks. Despite recent advances, practical implementation of ECRAMs remains challenging due to several limitations including high write noise, asymmetric weight updates, and insufficient dynamic ranges. Here, inspired by similarities in structural and functional requirements between electrochromic devices and ECRAMs, high-performance, single-transistor neuromorphic devices based on electrochromic polymers (ECPs) are demonstrated. To effectively translate electrochromism into electrochemical ion memory in polymers, this study systematically investigates polymer–ion interactions, redox activity, mixed ionic–electronic conduction, and stability of ECPs both experimentally and computationally using select electrolytes. The best-performing ECP–electrolyte combination is then implemented into an ECRAM device to further explore synaptic plasticity behaviors. The resulting ECRAM exhibits high linearity and symmetric conductance modulation, high dynamic range (≈1 mS or ≈6x), and high training accuracy (&gt;84% within five training cycles on a standard image recognition dataset), comparable to existing state-of-the-art ECRAMs. This study offers a promising approach to discover and design novel polymer materials for organic ECRAMs and demonstrates potential applications, taking advantage of the mature knowledge base on electrochromic materials and devices.
</description>
<pubDate>Thu, 02 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162851</guid>
<dc:date>2023-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>Reversible O–O Bond Scission and O2 Evolution at MOF-Supported Tetramanganese Clusters</title>
<link>https://hdl.handle.net/1721.1/162850</link>
<description>Reversible O–O Bond Scission and O2 Evolution at MOF-Supported Tetramanganese Clusters
He, Xin; Iliescu, Andrei; Yang, Tzuhsiung; Arguilla, Maxx Q; Chen, Tianyang; Kulik, Heather J; Dincă, Mircea
The scission of the O–O bond in O2 during respiration and the formation of the O–O bond during photosynthesis are the engines of aerobic life. Likewise, the reduction of O2 and the oxidation of reduced oxygen species to form O2 are indispensable components for emerging renewable technologies, including energy storage and conversion, yet discrete molecule-like systems that promote these fundamental reactions are rare. Herein, we report a square-planar tetramanganese cluster formed by self-assembly within a metal–organic framework that reversibly reduces O2 by four electrons, facilitating the interconversion between molecular O2 and metal-oxo species. The tetranuclear cluster spontaneously cleaves the O–O bond of O2 at room temperature to generate a tetramanganese-bis(μ2-oxo) species, which, in turn, is competent for O–O bond reformation and O2 evolution at elevated temperatures, enabled by the head-to-head orientation of two oxo species. This study demonstrates the viability of four-electron interconversion between molecular O2 and metal-oxo species and highlights the importance of site isolation for achieving multi-electron chemistry at polynuclear metal clusters.
</description>
<pubDate>Thu, 20 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162850</guid>
<dc:date>2023-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Investigation of Silicon Substitution on Single Macromolecule Mechanics</title>
<link>https://hdl.handle.net/1721.1/162849</link>
<description>Systematic Investigation of Silicon Substitution on Single Macromolecule Mechanics
Wentz, Kelsie E; Yao, Yunxin; Kevlishvili, Ilia; Kouznetsova, Tatiana B; Mediavilla, Braden A; Kulik, Heather J; Craig, Stephen L; Klausen, Rebekka S
Four unsaturated poly(carbooligosilane)s (P1–P4) were prepared via acyclic diene metathesis polycondensation of new oligosilane diene monomers (1–4). These novel polymers with varying main-chain Si incorporation have high trans internal olefin stereochemistry (ca. 80%) and molecular weights (9500–21,700 g mol–1). Postpolymerization epoxidation converted all alkene moieties to epoxides and rendered the polymers (P5–P8) more electrophilic, which allowed for single-molecule force spectroscopy studies via a modified atomic force microscope setup with a silicon tip and cantilever. The single-chain elasticity of the polycarbooligosilanes decreased with increasing numbers of Si–Si bonds, a finding reproduced by quantum chemical calculations.
</description>
<pubDate>Fri, 18 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162849</guid>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Protein3D: Enabling analysis and extraction of metal‐containing sites from the Protein Data Bank with molSimplify</title>
<link>https://hdl.handle.net/1721.1/162848</link>
<description>Protein3D: Enabling analysis and extraction of metal‐containing sites from the Protein Data Bank with molSimplify
Edholm, Freya; Nandy, Aditya; Reinhardt, Clorice R; Kastner, David W; Kulik, Heather J
Metalloenzymes catalyze a wide range of chemical transformations, with the active site residues playing a key role in modulating chemical reactivity and selectivity. Unlike smaller synthetic catalysts, a metalloenzyme active site is embedded in a larger protein, which makes interrogation of electronic properties and geometric features with quantum mechanical calculations challenging. Here we implement the ability to fetch crystallographic structures from the Protein Data Bank and analyze the metal binding sites in the program molSimplify. We show the usefulness of the newly created protein3D class to extract the local environment around non‐heme iron enzymes containing a two histidine motif and prepare 372 structures for quantum mechanical calculations. Our implementation of protein3D serves to expand the range of systems molSimplify can be used to analyze and will enable high‐throughput study of metal‐containing active sites in proteins.
</description>
<pubDate>Tue, 05 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162848</guid>
<dc:date>2024-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Angle-strained sila-cycloalkynes</title>
<link>https://hdl.handle.net/1721.1/162847</link>
<description>Angle-strained sila-cycloalkynes
Wakefield, Herbert; Melvin, Sophia J; Jiang, Jennifer; Kevlishvili, Ilia; Siegler, Maxime A; Craig, Stephen L; Kulik, Heather J; Klausen, Rebekka S
Second row elements in small- and medium-rings modulate strain. Herein we report the synthesis of two novel oligosilyl-containing cycloalkynes that exhibit angle-strain, as observed by X-ray crystallography. However, the angle-strained sila-cyclooctynes are sluggish participants in cycloadditions with benzyl azide. A distortion-interaction model analysis based on density functional theory calculations was performed.
</description>
<pubDate>Fri, 05 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162847</guid>
<dc:date>2024-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>An Open-Source Modular Bioreactor Platform for Cultivation of Synechocystis sp. PCC 6803 and Extraction of Intracellular Glucose</title>
<link>https://hdl.handle.net/1721.1/162843</link>
<description>An Open-Source Modular Bioreactor Platform for Cultivation of Synechocystis sp. PCC 6803 and Extraction of Intracellular Glucose
Baho, Ingie; Tseo, Yitong; Zu, Yuexuan; Padia, Vineet; Hunter, Ian
Synechocystis sp. PCC 6803 is a photosynthetic microbe with high potential for capturing excessive atmospheric carbon while generating valuable bioproducts, like glucose. Current cultivation technologies remain expensive, closed-source, and poorly suited for downstream processing. This study presents a low-cost, open-source bioreactor platform with integrated modules for Synechocystis cultivation and glucose extraction. The system incorporates a photobioreactor, a lysis module, and a pressure-driven filtration setup. Optical density was continuously monitored using a custom-built module, and glucose was quantified using high-performance liquid chromatography (HPLC). Under an incident light intensity of approximately 400 μmol m−2 s−1, cultures reached a biomass productivity of 90 mg L−1 day−1, with a specific growth rate of 0.166 day−1 and glucose concentrations up to 5.08 mg L−1. A model was developed to predict the growth based on measured environmental parameters, achieving strong predictive accuracy with a mean absolute error and variance of 0.0009±0.0003. The system demonstrates up to a 65% reduction in cost compared to commercial alternatives. This modular platform provides an accessible solution for biomanufacturing research and serves as a template for sustainable cyanobacteria-derived glucose production.
</description>
<pubDate>Thu, 18 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162843</guid>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Fact-based Counter Narrative Generation to Combat Hate Speech</title>
<link>https://hdl.handle.net/1721.1/162842</link>
<description>Fact-based Counter Narrative Generation to Combat Hate Speech
Wilk, Brian; Shomee, Homaira Huda; Maity, Suman Kalyan; Medya, Sourav
Online hatred has become an increasingly pervasive issue, affecting individuals and communities across various digital platforms. To combat hate speech in such platforms, counter narratives (CNs) are regarded as an effective method. In recent years, there has been growing interest in using generative AI tools to construct CNs. However, most of the generative models produce generic responses to hate speech and can hallucinate, reducing their effectiveness. To address the above limitations, we propose a counter narrative generation method that enhances CNs by providing non-aggressive, fact-based narratives with relevant background knowledge from two distinct sources, including a web search module. Furthermore, we conduct a comprehensive evaluation using multiple metrics, including LLM-based measures for persuasion, factuality, and informativeness, along with human and traditional NLP evaluations. Our method significantly outperforms baselines, achieving an average factuality score of 0.915, compared to 0.741, 0.701, and 0.69 for competitive baselines, and performs well in human evaluations.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162842</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis</title>
<link>https://hdl.handle.net/1721.1/162841</link>
<description>Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis
Lim, Brian; Cahaly, Joseph; Sng, Chester; Chew, Adam
Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162841</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>TelePulse: Enhancing the Teleoperation Experience through Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality</title>
<link>https://hdl.handle.net/1721.1/162840</link>
<description>TelePulse: Enhancing the Teleoperation Experience through Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality
Hwang, Seokhyun; Kang, Seongjun; Oh, Jeongseok; Park, Jeongju; Shin, Semoo; Luo, Yiyue; DelPreto, Joseph; Lee, Sangbeom; Lee, Kyoobin; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
This paper introduces TelePulse, a system integrating biomechanical simulation with electrical muscle stimulation (EMS) to provide precise haptic feedback for robot teleoperation tasks in virtual reality (VR). TelePulse has two components: a physical simulation part that calculates joint torques based on real-time force data from remote manipulators, and an electrical stimulation part that converts these torques into muscle stimulation. Two experiments were conducted to evaluate the system. The first experiment assessed the accuracy of EMS generated through biomechanical simulations by comparing it with electromyography (EMG) data during force-directed tasks, while the second experiment evaluated the impact of TelePulse on teleoperation performance during sanding and drilling tasks. The results suggest that TelePulse provided more accurate stimulation across all arm muscles, thereby enhancing task performance and user experience in the teleoperation environment. In this paper, we discuss the effect of TelePulse on teleoperation, its limitations, and areas for future improvement.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162840</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication</title>
<link>https://hdl.handle.net/1721.1/162839</link>
<description>TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication
Faruqi, Faraz; Perroni-Scharf, Maxine; Walia, Jaskaran; Zhu, Yunyi; Feng, Shuyue; Degraen, Donald; Mueller, Stefanie
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle’s generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
CHI ’25, April 26–May 01, 2025, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162839</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation</title>
<link>https://hdl.handle.net/1721.1/162838</link>
<description>Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Li, Jiaji; Feng, Shuyue; Perroni-Scharf, Maxine; Liu, Yujia; Guan, Emily; Wang, Guanyun; Mueller, Stefanie
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162838</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review</title>
<link>https://hdl.handle.net/1721.1/162837</link>
<description>Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review
Pang, Rock Yuren; Schroeder, Hope; Smith, Kynnedy; Barocas, Solon; Xiao, Ziang; Tseng, Emily; Bragg, Danielle
Large language models (LLMs) have been positioned to revolutionize HCI, by reshaping not only the interfaces, design patterns, and sociotechnical systems that we study, but also the research practices we use. To-date, however, there has been little understanding of LLMs’ uptake in HCI. We address this gap via a systematic literature review of 153 CHI papers from 2020-24 that engage with LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in HCI projects; (3) contribution types; and (4) acknowledged limitations and risks. We find LLM work in 10 diverse domains, primarily via empirical and artifact contributions. Authors use LLMs in five distinct roles, including as research tools or simulated users. Still, authors often raise validity and reproducibility concerns, and overwhelmingly study closed models. We outline opportunities to improve HCI research with and on LLMs, and provide guiding questions for researchers to consider the validity and appropriateness of LLM-related work.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162837</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Need Help? Designing Proactive AI Assistants for Programming</title>
<link>https://hdl.handle.net/1721.1/162836</link>
<description>Need Help? Designing Proactive AI Assistants for Programming
Chen, Valerie; Zhu, Alan; Zhao, Sebastian; Mozannar, Hussein; Sontag, David; Talwalkar, Ameet
While current chat-based AI assistants primarily operate reactively, responding only when prompted by users, there is significant potential for these systems to proactively assist in tasks without explicit invocation, enabling a mixed-initiative interaction. This work explores the design and implementation of proactive AI assistants powered by large language models. We first outline the key design considerations for building effective proactive assistants. As a case study, we propose a proactive chat-based programming assistant that automatically provides suggestions and facilitates their integration into the programmer’s code. The programming context provides a shared workspace enabling the assistant to offer more relevant suggestions. We conducted a randomized experimental study examining the impact of various design elements of the proactive assistant on programmer productivity and user experience. Our findings reveal significant benefits of incorporating proactive chat assistants into coding environments, while also uncovering important nuances that influence their usage and effectiveness.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162836</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection</title>
<link>https://hdl.handle.net/1721.1/162835</link>
<description>Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Chan, Samantha; Loftus, Elizabeth; Maes, Pattie
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162835</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating amenity access of new and repurposed housing within the 15-Minute City framework in Amsterdam</title>
<link>https://hdl.handle.net/1721.1/162834</link>
<description>Evaluating amenity access of new and repurposed housing within the 15-Minute City framework in Amsterdam
Aksoy, Esma S.; Venverloo, Titus; Benson, Tom; Duarte, Fabio
Amsterdam faces a housing shortage. To address this, the Municipality aims to provide 73,660 housing units by 2028, either by constructing new housing buildings or by repurposing existing buildings with other functions, such as offices, schools, or industrial spaces. Past research comparing these two strategies primarily focuses on the lower construction costs, reduced raw material usage, and decreased energy consumption associated with demolition and new construction processes; comparisons of the locational characteristics of new and repurposed housing projects, on the other hand, have seldom been studied. In this paper, we compare access to amenities, specifically their number and diversity, between new and repurposed housing buildings based on their location in the city. Using the 15-Minute City concept as both a theoretical framework and a practical tool, we evaluate the amenities within a 15-min walking isochrone for 38,061 housing units (554 residential buildings) constructed between 2015 and 2019. By aggregating these results at the district level, we deepen the analysis and provide insights that could support the development of locally tailored policies.
</description>
<pubDate>Wed, 30 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162834</guid>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>What-if Analysis for Business Professionals: Current Practices and Future Opportunities</title>
<link>https://hdl.handle.net/1721.1/162833</link>
<description>What-if Analysis for Business Professionals: Current Practices and Future Opportunities
Gathani, Sneha; Liu, Zhicheng; Haas, Peter J.; Demiralp, Çağatay
What-if analysis (WIA) is essential for data-driven decision-making, allowing users to assess how changes in variables impact outcomes and explore alternative scenarios. Existing WIA research primarily supports the workflows of data scientists and analysts, and largely overlooks business professionals who engage in WIA through non-technical means. To bridge this gap, we conduct a two-part user study with 22 business professionals across marketing, sales, product, and operations roles. The first study examines their existing WIA practices, tools, and challenges. Findings reveal that business professionals perform many WIA techniques independently using rudimentary tools due to various constraints. We then implement representative WIA techniques in a visual analytics prototype and use it as a probe to conduct a follow-up study evaluating business professionals’ practical use of the techniques. Results show that these techniques improve decision-making efficiency and confidence while underscoring the need for better support in data preparation, risk assessment, and domain knowledge integration. Finally, we offer design recommendations to enhance future business analytics systems.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162833</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Combined dendritic cell and anti-TIGIT immunotherapy potentiates adaptive NK cells against HIV-1</title>
<link>https://hdl.handle.net/1721.1/162832</link>
<description>Combined dendritic cell and anti-TIGIT immunotherapy potentiates adaptive NK cells against HIV-1
Sánchez-Cerrillo, Ildefonso; Agudo-Lera, María; Popova, Olga; Tsukalov, Ilya; Calvet-Mirabent, Marta; de los Santos, Ignacio; García-Fraile, Lucio; Fuentes, Patricia; Delgado-Arévalo, Cristina; Alcain, Juan; Sánchez-Gaona, Nerea; Grau-Expósito, Judith; Lázaro-Díez, María
Natural Killer (NK) cells are promising candidates for targeting persistently infected CD4+ T cells in people with HIV-1 (PWH). However, chronicity of HIV-1 infection impairs NK cell functionality, requiring additional strategies to potentiate their cytotoxic activity. This study demonstrates that dendritic cells primed with nanoparticles containing Poly I:C (Nano-PIC-MDDC) enhance the natural cytotoxic function of NK cells from effective responder PWH. These NK cells exhibit increased proportions of NKG2C+ cell subsets capable of eliminating HIV-1 infected CD4+ T cells through the TRAIL receptor. In contrast, in non-responder PWH, elevated expression of the inhibitory receptor TIGIT is associated with reduced frequencies of NKG2C+ NK cells and diminished TRAIL expression. TIGIT blockade restores cytotoxicity of NK cells from non-responder PWH against HIV-1-infected cells by upregulating TRAIL. Furthermore, combining Nano-PIC-MDDC-primed NK cells with anti-TIGIT immunotherapy in humanized NSG mice reduces the expansion of HIV-1 infected cells, preserves NKG2C+ NK cell precursors and increases TRAIL expression in tissue. Collectively, these findings support the combined use of Nano-PIC-MDDC and TIGIT blockade as a promising immunotherapeutic strategy toward an HIV-1 cure.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162832</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>An asymmetric nautilus-like HflK/C assembly controls FtsH proteolysis of membrane proteins</title>
<link>https://hdl.handle.net/1721.1/162831</link>
<description>An asymmetric nautilus-like HflK/C assembly controls FtsH proteolysis of membrane proteins
Ghanbarpour, Alireza; Telusma, Bertina; Powell, Barrett M.; Zhang, Jia J.; Bolstad, Isabella; Vargas, Carolyn; Keller, Sandro; Baker, Tania A.; Sauer, Robert T.; Davis, Joseph H.
The AAA protease FtsH associates with HflK/C subunits to form a megadalton-size complex that spans the inner membrane and extends into the periplasm of E. coli. How this bacterial complex and homologous assemblies in eukaryotic organelles recruit, extract, and degrade membrane-embedded substrates is unclear. Following the overproduction of protein components, recent cryo-EM structures showed symmetric HflK/C cages surrounding FtsH in a manner proposed to inhibit the degradation of membrane-embedded substrates. Here, we present structures of native protein complexes, in which HflK/C instead forms an asymmetric nautilus-shaped assembly with an entryway for membrane-embedded substrates to reach and be engaged by FtsH. Consistent with this nautilus-like structure, proteomic assays suggest that HflK/C enhances FtsH degradation of certain membrane-embedded substrates. Membrane curvature in our FtsH•HflK/C complexes is opposite that of surrounding membrane regions, a property that correlates with lipid scramblase activity and possibly with FtsH’s function in the degradation of membrane-embedded proteins.
</description>
<pubDate>Thu, 13 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162831</guid>
<dc:date>2025-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Building health systems capable of leveraging AI: applying Paul Farmer’s 5S framework for equitable global health</title>
<link>https://hdl.handle.net/1721.1/162830</link>
<description>Building health systems capable of leveraging AI: applying Paul Farmer’s 5S framework for equitable global health
McCoy, Liam G.; Bihorac, Azra; Celi, Leo A.; Elmore, Matthew; Kewalramani, Divya; Kwaga, Teddy; Martinez-Martin, Nicole; Prôa, Renata; Schamroth, Joel; Shaffer, Jonathan D.; Youssef, Alaa; Fiske, Amelia
The development of artificial intelligence (AI) applications in healthcare is often positioned as a solution to the greatest challenges facing global health. Advocates propose that AI can bridge gaps in care delivery and access, improving healthcare quality and reducing inequity, including in resource-constrained settings. A broad base of critical scholarship has highlighted important issues with healthcare AI, including algorithmic bias and inequitable and inaccurate model outputs. While such criticisms are valid, there exists a much more fundamental challenge that is often overlooked in global health policy debates: the dangerous mismatch between AI’s imagined benefits and the material realities of healthcare systems globally. AI cannot be deployed effectively or ethically in contexts lacking sufficient social and material infrastructure and resources to provide effective healthcare services. Continued investments in AI within unprepared, under-resourced contexts risk misallocating resources and potentially causing more harm than good. The article concludes by providing concrete questions to assess AI systemic capacity and socio-technical readiness in global health.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162830</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Mediating The Marginal: A Quantitative Analysis of Curated LGBTQ+ Content on Instagram</title>
<link>https://hdl.handle.net/1721.1/162829</link>
<description>Mediating The Marginal: A Quantitative Analysis of Curated LGBTQ+ Content on Instagram
Souza, Garrett; Lutz, Nina; Turner, Katlyn
Control and curation of dominant visual culture – rendering who and what is visible – is central to identity formation, particularly for LGBTQ+ communities relying on digital spaces for safe self-expression. In this work, we analyze Instagram as a site of algorithmic visual curation, performing a quantitative analysis of algorithmically mediated image feeds delivered to a gay-coded user. Our persona account exclusively followed #gay and #instagay feeds, and engaged in content within these discursive spaces to seed algorithmic content promotion to a normative gay user. We present an analysis of skin tone presentations, emoji usage, and engagement metrics alongside analysis of generative outputs of dominant visual trends within the #gay search and Explore feeds. We observe content depicting darker-skinned individuals has higher engagement yet less algorithmic promotion relative to lighter skin tones, while hypermasculine and homonormative content is heavily promoted. These results suggest that, while marginalized positionalities have certainly been rendered more visible through social media platforms, this visibility is increasingly contingent on assimilation to normative ideals through algorithmically determined modes that are not necessarily consistent with user choices, preferences, or realities.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162829</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic basis for the emergence of EPS1 as a catalyst in salicylic acid biosynthesis of Brassicaceae</title>
<link>https://hdl.handle.net/1721.1/162828</link>
<description>Mechanistic basis for the emergence of EPS1 as a catalyst in salicylic acid biosynthesis of Brassicaceae
Torrens-Spence, Michael P; Matos, Jason O; Li, Tianjie; Kastner, David W; Kim, Colin Y; Wang, Ziqi; Glinkerman, Christopher M; Sherk, Jennifer; Kulik, Heather J; Wang, Yi; Weng, Jing-Ke
Salicylic acid (SA) production in Brassicaceae plants is uniquely accelerated from isochorismate by EPS1, a newly identified enzyme in the BAHD acyltransferase family. We present crystal structures of EPS1 from Arabidopsis thaliana in both its apo and substrate-analog-bound forms. Integrating microsecond-scale molecular dynamics simulations with quantum mechanical cluster modeling, we propose a pericyclic rearrangement lyase mechanism for EPS1. We further reconstitute the isochorismate-derived SA biosynthesis pathway in Saccharomyces cerevisiae, establishing an in vivo platform to examine the impact of active-site residues on EPS1 functionality. Moreover, stable transgenic expression of EPS1 in soybean increases basal SA levels, highlighting the enzyme’s potential to enhance defense mechanisms in non-Brassicaceae plants lacking an EPS1 ortholog. Our findings illustrate the evolutionary adaptation of an ancestral enzyme’s active site to enable a novel catalytic mechanism that boosts SA production in Brassicaceae plants.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162828</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nested non-covalent interactions expand the functions of supramolecular polymer networks</title>
<link>https://hdl.handle.net/1721.1/162827</link>
<description>Nested non-covalent interactions expand the functions of supramolecular polymer networks
Lundberg, David J; Brown, Christopher M; Bobylev, Eduard O; Oldenhuis, Nathan J; Alfaraj, Yasmeen S; Zhao, Julia; Kevlishvili, Ilia; Kulik, Heather J; Johnson, Jeremiah A
Supramolecular polymer networks contain non-covalent cross-links that enable access to broadly tunable mechanical properties and stimuli-responsive behaviors; the incorporation of multiple unique non-covalent cross-links within such materials further expands their mechanical responses and functionality. To date, however, the design of such materials has been accomplished through discrete combinations of distinct interaction types in series, limiting materials design logic. Here we introduce the concept of leveraging “nested” supramolecular crosslinks, wherein two distinct types of non-covalent interactions exist in parallel, to control bulk material functions. To demonstrate this concept, we use polymer-linked Pd2L4 metal–organic cage (polyMOC) gels that form hollow metal–organic cage junctions through metal–ligand coordination and can exhibit well-defined host-guest binding within their cavity. In these “nested” supramolecular network junctions, the thermodynamics of host-guest interactions within the junctions affect the metal–ligand interactions that form those junctions, ultimately translating to substantial guest-dependent changes in bulk material properties that could not be achieved in traditional supramolecular networks with multiple interactions in series.
</description>
<pubDate>Fri, 10 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162827</guid>
<dc:date>2024-05-10T00:00:00Z</dc:date>
</item>
<item>
<title>Direct air capture-assisted sustainable fuel solution in maritime sector: a carbon footprint perspective</title>
<link>https://hdl.handle.net/1721.1/162826</link>
<description>Direct air capture-assisted sustainable fuel solution in maritime sector: a carbon footprint perspective
Li, Shuangjun; Du, Zhenyu; Wang, Junyao; Wang, Hao; Cao, Xiangkun E.; Chen, Runkai; Pang, Yujia; Deng, Shuai; Mašek, Ondřej; Yuan, Xiangzhou; Lee, Ki B.
Carbon emissions reduction within the maritime sector is pivotal for realizing zero-carbon goals and mitigating climate impacts. Adopting renewable carbon fuels presents a potent strategy, but it requires a comprehensive, carbon-footprint-based understanding of their negative-carbon attributes and enduring contributions to future development. By using the CO2 captured through direct air capture (DAC) technology and the H2 obtained via water electrolysis as feedstock, electro-methanol (e-methanol) can be produced under renewable energy-driven conditions. Owing to the environmental benefits and economic feasibility of e-methanol, we highlight its potential as a practical alternative to traditional fossil fuel-based technical scenarios. A quantitative analysis of this integrated system from a carbon footprint perspective allows for an environmental sustainability assessment. According to predictions, scaled-up usage of the system can reduce the maritime sector's contribution to global carbon emissions by half by 2050.
</description>
<pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162826</guid>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Infants Recognize the Negative Impact of Phone Distraction on Performance</title>
<link>https://hdl.handle.net/1721.1/162822</link>
<description>Infants Recognize the Negative Impact of Phone Distraction on Performance
Cao, Qiong; Mears, Anna; Feigenson, Lisa
Seeing adults use cellphones is a common daily experience for infants, yet little is known about how infants think about others’ cellphone use. Do infants recognize that phone usage can affect the user’s behavior? Here we asked whether infants expect a person’s task performance to be impaired by phone use. Twenty‐month‐old infants watched adults building block towers. One adult did this while also using a phone, either looking at the screen and scrolling (Experiment 1; N = 24) or simply talking (Experiment 2; N = 24). Across both experiments, infants looked longer when the person who had been using the phone built a taller tower than the person who had not been using the phone, compared to the reverse. This suggests that infants expected phone usage to negatively impact performance. Thus, early in development, children recognize that cell phone use can affect people's goal‐directed actions; this may be one example of a broader understanding of the impact of multitasking on performance.
</description>
<pubDate>Fri, 21 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162822</guid>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>Regio‐Selective Mechanical Enhancement of Polymer‐Grafted Nanoparticle Composites via Light‐Mediated Crosslinking</title>
<link>https://hdl.handle.net/1721.1/162821</link>
<description>Regio‐Selective Mechanical Enhancement of Polymer‐Grafted Nanoparticle Composites via Light‐Mediated Crosslinking
Kim, Kyungtae; Grummon, Benjamin C.; Thrasher, Carl J.; Macfarlane, Robert J.
Polymer-brush-grafted nanoparticles (PGNPs) that can be covalently crosslinked post-processing enable the fabrication of mechanically robust and chemically stable polymer nanocomposites with high inorganic filler content. Modifying PGNP brushes to append UV-activated crosslinkers along the polymer chains would permit a modular crosslinking strategy applicable to a diverse range of nanocomposite compositions. Further, light-activated crosslinking reactions enable spatial control of crosslink density to program intentionally inhomogeneous mechanical responses. Here, a method of synthesizing composites using UV-crosslinkable brush-coated nanoparticles (referred to as UV-XNPs) is introduced that can be applied to various monomer compositions by incorporating photoinitiators into the polymer brushes. UV crosslinking of processed UV-XNP structures can increase their tensile modulus up to 15-fold without any noticeable alteration to their appearance or shape. By using photomasks to alter UV intensity across a sample, intentionally designed inhomogeneities in crosslink density result in predetermined anisotropic shape changes under strain. This unique capability of UV-XNP materials is applied to stiffness-patterned flexible electronic substrates that prevent the delamination of rigid components under deformation. The potential of UV-XNPs as functional, soft device components is further demonstrated by wearable devices that can be modified post-fabrication to customize their performance, permitting the ability to add functionality to existing device architectures.
</description>
<pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162821</guid>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators</title>
<link>https://hdl.handle.net/1721.1/162820</link>
<description>Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators
Ravi, Prerna; Masla, John; Kakoti, Gisella; Lin, Grace; Anderson, Emma; Taylor, Matt; Ostrowski, Anastasia; Breazeal, Cynthia; Klopfer, Eric; Abelson, Hal
The emergence of generative AI, particularly large language models (LLMs), has opened the door for student-centered and active learning methods like project-based learning (PBL). However, PBL poses practical implementation challenges for educators around project design and management, assessment, and balancing student guidance with student autonomy. The following research documents a co-design process with interdisciplinary K-12 teachers to explore and address the current PBL challenges they face. Through teacher-driven interviews, collaborative workshops, and iterative design of wireframes, we gathered evidence for ways LLMs can support teachers in implementing high-quality PBL pedagogy by automating routine tasks and enhancing personalized learning. Teachers in the study advocated for supporting their professional growth and augmenting their current roles without replacing them. They also identified affordances and challenges around classroom integration, including resource requirements and constraints, ethical concerns, and potential immediate and long-term impacts. Drawing on these, we propose design guidelines for future deployment of LLM tools in PBL.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162820</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution direct thrust characterization of electrospray thrusters with EMI-BF4 at different temperatures and polarities</title>
<link>https://hdl.handle.net/1721.1/162819</link>
<description>High-resolution direct thrust characterization of electrospray thrusters with EMI-BF4 at different temperatures and polarities
Neunzig, O.; Lozano, P.; Tajmar, M.
Electrospray thrusters have garnered significant attention throughout the years as an exceptional propulsion technology for nano- and picosatellites due to their efficiency and precise thrust control. They operate on the principle of electrostatically accelerating charged particles (liquid droplets, pure ions or their mixtures) from ionic liquids and other low-volatility propellants, which are extracted from a Taylor-cone formation on top of porous emitter arrays. In this work we characterized the thrust performance of electrospray thrusters with the ionic liquid 1-ethyl-3-methylimidazolium tetrafluoroborate (EMI-BF4) as well as an attempt with an acetate-based ionic liquid. The arrays were operated at different polarities and at elevated temperatures of up to 43 °C, which led to a decrease in viscosity and enhanced the current emission of EMI-BF4 by a factor of 1.43 at equal voltage levels. Temperature-related effects resulted in a thrust difference of 3% between the maximum and minimum temperature throughout the tested current range. Thrust measurements for emission currents between 10 µA and 200 µA revealed a detectable and temperature-independent difference between the positive and negative mode in favor of the negative polarity, indicating different ion-regimes compared to most data found in literature. The paper presents a novel thrust measurement setup for micro-propulsion systems based on a counterbalanced double pendulum thrust balance that achieves nanonewton resolution with the option to heat several thrusters. A comprehensive overview of the test setup and calculations of obtained electrospray parameters from experimental data is presented.
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162819</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Mucus-derived glycans are inhibitory signals for Salmonella Typhimurium SPI-1-mediated invasion</title>
<link>https://hdl.handle.net/1721.1/162818</link>
<description>Mucus-derived glycans are inhibitory signals for Salmonella Typhimurium SPI-1-mediated invasion
Wheeler, Kelsey M.; Gold, Michaela A.; Stevens, Corey A.; Tedin, Karsten; Wood, Amanda M.; Uzun, Deniz; Cárcamo-Oyarce, Gerardo; Turner, Bradley S.; Fulde, Marcus; Song, Jeongmin; Kramer, Jessica R.; Ribbeck, Katharina
Mucus forms a critical barrier against enteric pathogens like Salmonella enterica serovar Typhimurium. While in vivo studies indicate that secreted, gel-forming mucins and specifically core 3 glycosylation are protective against S. Typhimurium, the molecular mechanisms involved remain unclear. Here, we demonstrate that native intestinal mucins inhibit Salmonella invasion of colonic epithelial cells by downregulating the type 3 secretion system through suppression of the key virulence regulator, HilD. Our study identifies mucin glycans and specific mucin sugars, namely N-acetyl galactosamine and N-acetyl glucosamine, as the components responsible for mucin’s anti-virulence effect, likely via functional or direct interaction with HilD’s putative carbohydrate-binding domain. Notably, we find that the native presentation of these sugars is important for activity. These insights provide a mechanistic foundation for mucin-based strategies to combat enteric infections and, given the prevalence of homologous AraC-type regulators in other pathogens, suggest mucins’ potential as broad-spectrum anti-virulence agents.
</description>
<pubDate>Tue, 23 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162818</guid>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>Revenue Management to Maximize Global Network Revenue for a Satellite Communication Operator</title>
<link>https://hdl.handle.net/1721.1/162817</link>
<description>Revenue Management to Maximize Global Network Revenue for a Satellite Communication Operator
Eiskowitz, Skylar; Cameron, Bruce G; Crawley, Edward F; Belobaba, Peter
The satellite communication (SatCom) industry is rapidly expanding, with supply growing much faster than demand, potentially straining market prices and company stability. Effective revenue management (RM) can help operators optimize the use of limited and expensive satellite resources. Current SatCom RM methods fail to account for both the temporal and spatial nature of satellite services. This paper presents a multizone displacement-adjusted virtual nesting (DAVN) RM method to create booking limits that guide operators in determining which products to accept to maximize revenue. By incorporating spatial interzone effects, the multizone method improves revenue compared to the separate zones method by 2%–10%. The results demonstrate that under varying pricing structures, the multizone approach increases the acceptance of high-revenue mobile products by approximately 10%, with a corresponding reduction in the sale of longer duration stationary products.
</description>
<pubDate>Fri, 21 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162817</guid>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>Solving Large‐Scale Weapon Target Assignment Problems in Seconds Using Branch‐Price‐And‐Cut</title>
<link>https://hdl.handle.net/1721.1/162816</link>
<description>Solving Large‐Scale Weapon Target Assignment Problems in Seconds Using Branch‐Price‐And‐Cut
Bertsimas, Dimitris; Paskov, Alex
This paper proposes a framework based on branch-price-and-cut to solve the weapon target assignment (WTA) problem, a popular class of non-linear assignment problems that has received significant attention over the past several decades. We first reformulate the WTA into a form amenable to column generation and then derive efficient algorithms for initializing the column generation, solving the pricing problem, generating clique cuts, and managing the branch-and-bound. Through significant experimentation, we demonstrate the framework’s efficiency – which scales to solve problems with 10000 targets and weapons on a laptop and exactly solves problems in seconds, which previously took hours to solve. We also discuss extensions to common WTA variants and more general non-linear assignment problems in hopes of motivating algorithmic developments.
</description>
<pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162816</guid>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>First‐Order Empirical Interpolation Method for Real‐Time Solution of Parametric Time‐Dependent Nonlinear PDEs</title>
<link>https://hdl.handle.net/1721.1/162815</link>
<description>First‐Order Empirical Interpolation Method for Real‐Time Solution of Parametric Time‐Dependent Nonlinear PDEs
Nguyen, Ngoc Cuong
We present a model reduction approach for the real-time solution of time-dependent nonlinear partial differential equations (PDEs) with parametric dependencies. A major challenge in constructing efficient and accurate reduced-order models for nonlinear PDEs is the efficient treatment of nonlinear terms. We address this by unifying the implementation of hyperreduction methods to deal with nonlinear terms. Furthermore, we introduce a first-order empirical interpolation method (EIM) to provide an efficient approximation of the nonlinear terms in time-dependent PDEs. We demonstrate the effectiveness of our approach on the Allen–Cahn equation, which models phase separation, and the Buckley–Leverett equation, which describes two-phase fluid flow in porous media. Numerical results highlight the accuracy, efficiency, and stability of the proposed method compared with both the Galerkin–Newton approach and hyper-reduced models using the standard EIM.
</description>
<pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162815</guid>
<dc:date>2025-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Propagation of Slow Slip Events on Rough Faults: Clustering, Back Propagation, and Re‐Rupturing</title>
<link>https://hdl.handle.net/1721.1/162814</link>
<description>Propagation of Slow Slip Events on Rough Faults: Clustering, Back Propagation, and Re‐Rupturing
Sun, Yudong; Cattania, Camilla
Seismic and geodetic observations show that slow slip events (SSEs) in subduction zones can happen at all temporal and spatial scales and propagate at various velocities. Observation of rapid tremor reversals indicates back‐propagating fronts traveling much faster than the main rupture front. Heterogeneity of fault properties, such as fault roughness, is a ubiquitous feature often invoked to explain this complex behavior, but how roughness affects SSEs is poorly understood. Here we use quasi‐dynamic seismic cycle simulations to model SSEs on a rough fault, using normal stress perturbations as a proxy for roughness and assuming rate‐and‐state friction, with velocity‐weakening friction at low slip rate and velocity‐strengthening at high slip rate. SSEs exhibit temporal clustering, large variations in rupture length and propagation speed, and back‐propagating fronts at different scales. We identify a mechanism for back propagation: as ruptures propagate through low‐normal stress regions, a rapid increase in slip velocity combined with rate‐strengthening friction induces stress oscillations at the rupture tip, and the subsequent “delayed stress drop” induces secondary back‐propagating fronts. Moreover, on rough faults with fractal elevation profiles, the transition from pulse to crack can also lead to the re‐rupture of SSEs due to local variations in the level of heterogeneity. Our study provides a possible mechanism for the complex evolution of SSEs inferred from geophysical observations and its link to fault roughness.
</description>
<pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162814</guid>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>The Impact of 2022 Hunga Tonga‐Hunga Ha'apai (Hunga) Eruption on Stratospheric Circulation and Climate</title>
<link>https://hdl.handle.net/1721.1/162813</link>
<description>The Impact of 2022 Hunga Tonga‐Hunga Ha'apai (Hunga) Eruption on Stratospheric Circulation and Climate
Yook, Simchan; Solomon, Susan; Wang, Xinyue
The Hunga Tonga‐Hunga Ha'apai (Hunga) volcanic eruption in January 2022 injected a substantial amount of water vapor and a moderate amount of SO2 into the stratosphere. Both satellite observations in 2022 and subsequent chemistry‐climate model simulations forced by realistic Hunga perturbations reveal large‐scale cooling in the Southern Hemisphere (SH) tropical to subtropical stratosphere following the Hunga eruption. This study analyzes the drivers of this cooling, including the distinctive role of anomalies in water vapor, ozone, and sulfate aerosol concentration on the simulated climate response to the Hunga volcanic forcing, based on climate simulations with prescribed chemistry/aerosol. Simulated circulation and temperature anomalies based on specified‐chemistry simulations show good agreement with previous coupled‐chemistry simulations and indicate that each forcing of ozone, water vapor, and sulfate aerosol from the Hunga volcanic eruption contributed to the circulation and temperature anomalies in the SH stratosphere. Our results also suggest that (a) the large‐scale stratospheric cooling during the austral winter was mainly induced by changes in dynamical processes, not by radiative processes, and that (b) the radiative feedback from negative ozone anomalies contributed to the prolonged cold temperature anomalies in the lower stratosphere (∼70 hPa level) and hence to long lasting cold conditions of the polar vortex.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162813</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>The Status of Vernier Acuity Following Late Sight Onset</title>
<link>https://hdl.handle.net/1721.1/162812</link>
<description>The Status of Vernier Acuity Following Late Sight Onset
Vogelsang, Lukas; Gupta, Priti; Vogelsang, Marin; Shah, Pragya; Tiwari, Kashish; Verma, Dhun; Yadav, Mrinalini; Raja, Sruti; Ganesh, Suma; Sinha, Pawan
We possess a remarkably acute ability to detect even small misalignments between extended line segments. This “vernier acuity” significantly exceeds our “resolution acuity”—the ability to resolve closely separated stimuli—and is generally considered a “hyperacuity,” since the detectable misalignments are markedly finer than the diameter of single retinal cones. Vernier acuity has, thus, often been proposed to reflect spatial organization and multi-unit cortical processing, rendering it an important index of visual function. Notably, vernier acuity exhibits a characteristic developmental signature: it is inferior to resolution acuity early in life but eventually exceeds it by up to one order of magnitude. However, vernier acuity may be disproportionately sensitive to developmental disruptions. Here, we examined the resilience of acquiring this visual proficiency to early-onset, prolonged deprivation by longitudinally tracking vernier and resolution acuities in children with dense congenital cataracts who gained sight late in life as part of Project Prakash. Our data reveal marked longitudinal improvements in both acuity measures and also demonstrate that, like the normally-sighted, late-sighted individuals’ vernier acuity exceeds their resolution acuity, thereby rendering it a hyperacuity. However, the extent of this hyperacuity is weaker than observed in normally-sighted controls, pointing to partial limitations in postsurgical skill acquisition. Despite these constraints, our findings point to the feasibility of forming some integrative circuits in the visual system even when inputs are severely compromised, and to the availability of some residual plasticity late in childhood, with implications for the rehabilitation prospects of children following treatment for congenital cataracts.
</description>
<pubDate>Wed, 05 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162812</guid>
<dc:date>2025-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>Laser‐Enabled Fabrication of Flexible Printed Electronics with Integrated Functional Devices</title>
<link>https://hdl.handle.net/1721.1/162811</link>
<description>Laser‐Enabled Fabrication of Flexible Printed Electronics with Integrated Functional Devices
Babatain, Wedyan; Park, Christine; Ishii, Hiroshi; Gershenfeld, Neil
The demand for flexible and printed electronics in wearable and soft robotics applications has increased the need for scalable, additive manufacturing processes. However, traditional printed circuit board manufacturing involves complex, multistep processes, is limited to certain substrates, and faces challenges in integrating functional devices. Here, an additive, laser-enabled process is introduced for fabricating flexible, double-sided printed electronics leveraging laser-induced graphene (LIG) as a seed layer for selective copper electrodeposition (E-LIG). This technique enables precise conductive circuit patterning down to 50 µm and reliable via formation in a single streamlined process. E-LIG supports transfer to various substrates, allowing for large-area electronics up to 100 cm2, broadening applications in large-scale interfaces. Functional LIG device integration, including sensors and actuators, directly interfaced with control circuits on a single substrate is demonstrated. Applications such as real-time graphical output and interactive interfacing showcase the method’s versatility. E-LIG exhibits repairability for on-demand restoration of damaged circuits, enhancing durability and offering a scalable, cost-effective solution for multifunctional printed electronics.
</description>
<pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162811</guid>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding housing market responses to stringent energy codes</title>
<link>https://hdl.handle.net/1721.1/162810</link>
<description>Understanding housing market responses to stringent energy codes
Muzio, Maria Jimena; Niu, Dongxiao; Steil, Justin; Zheng, Siqi
Increased energy efficiency in buildings is essential to reducing carbon emissions and addressing climate change. Massachusetts' Green Communities Act of 2008, aiming for a 50% reduction in carbon emissions by 2030 and net-zero by 2050, mandates the Stretch Energy Code for eligibility for state funding. This code requires new residential constructions to meet stringent Home Energy Rating System (HERS) Index scores. While these requirements benefit the environment, they may increase construction costs, affecting housing production and affordability. Using the staggered municipal adoption of the Stretch Energy Code to tease out causal relationships, we analyze the effects of the Stretch Energy Code on housing quantity and price across municipalities in Massachusetts. The results indicate that more energy-efficient single-family properties command a sales price premium of 4.0%, and the Stretch Energy Code adoption is associated with a decrease in the quantity of new single-family housing starts. Approximately 45.5% of the price increase is due to higher willingness to pay for energy-efficient homes, with the remainder attributed to reduced housing supply. Our article is particularly relevant as policymakers seek to balance the objectives and address the tensions between “E” and “S” in their “ESG” policy packages.
</description>
<pubDate>Sun, 16 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162810</guid>
<dc:date>2025-03-16T00:00:00Z</dc:date>
</item>
<item>
<title>Knowing to infinity: Full knowledge and the margin‐for‐error principle</title>
<link>https://hdl.handle.net/1721.1/162809</link>
<description>Knowing to infinity: Full knowledge and the margin‐for‐error principle
Fiat, Yonathan
Let’s say that I fully know that &#119901; if I know that &#119901;, I know that I know that &#119901;, I know that I know that I know that &#119901;, and so on. Let’s say that I partially know that &#119901; if I know that &#119901; but I don’t fully know that &#119901;. What, if anything, do I fully know? What, if anything, do I partially know? One response in the literature is that I fully know everything that I know; partial knowledge is impossible. This response is in tension with a plausible margin-for-error principle on knowledge. A different response in the literature is that I don’t fully know anything; everything that I know, I partially know. Recently, Goldstein (forthcoming, 2024) defended a third view, according to which I fully know some things and I partially know other things. While this seems plausible, Goldstein’s account is based on denying the margin-for-error principle. In this paper, I show that the possibility of both full knowledge and partial knowledge is consistent with the margin-for-error principle. I also argue that the resulting picture of knowledge is well-motivated.
</description>
<pubDate>Tue, 18 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162809</guid>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Cleavable Strand‐Fusing Cross‐Linkers as Additives for Chemically Deconstructable Thermosets with Preserved Thermomechanical Properties</title>
<link>https://hdl.handle.net/1721.1/162808</link>
<description>Cleavable Strand‐Fusing Cross‐Linkers as Additives for Chemically Deconstructable Thermosets with Preserved Thermomechanical Properties
Zhang, Shuyi; Xu, Zhenchuang; Husted, Keith E. L.; Lundberg, David J.; Brown, Christopher M.; Wang, Yuyan; Shieh, Peyton; Ko, Kwangwook; Moore, Jeffrey S.; Johnson, Jeremiah A.
Permanently cross-linked polymer networks—thermosets—are often difficult to chemically deconstruct. The installation of cleavable bonds into the strands of thermosets using cleavable comonomers as additives can facilitate thermoset deconstruction without replacement of permanent cross-links, but such monomers can lead to reduced thermomechanical properties and require high loadings to function effectively, motivating the design of new and optimal cleavable additives. Here, we introduce “strand-fusing cross-linkers” (SFCs), which fuse two network strands via a four-way cleavable cross-link. SFCs enable deconstruction of model polydicyclopentadiene (pDCPD) thermosets with as little as one-fifth of the molar loading needed to achieve deconstruction using traditional cleavable comonomers. SFCs function under traditional oven curing as well as low-energy frontal ring-opening metathesis polymerization (FROMP) conditions and lead to improved thermomechanical properties, for example, glass transition temperatures, compared to prior cleavable comonomer designs. This work motivates the development of increasingly improved cleavable additives to enable thermoset deconstruction without compromising material performance.
</description>
<pubDate>Thu, 27 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162808</guid>
<dc:date>2025-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Neo-Panamax Decarbonization via Microreactor Propulsion Conversion</title>
<link>https://hdl.handle.net/1721.1/162806</link>
<description>Neo-Panamax Decarbonization via Microreactor Propulsion Conversion
Kang, Richard; Izurieta Torres, Jose; O’Connor, Kristen
This study presents a comprehensive feasibility assessment for retrofitting a Neo-Panamax (NPX) container vessel with nuclear microreactor propulsion to contribute to the decarbonization of commercial shipping. The project selected a 12,000 TEU container vessel as a baseline hull and replaced its WinGD 7x92-B diesel engine and auxiliary generators with two MIT-designed Organically Cooled Reactors (OCRs), each paired with a 27 MW Mitsubishi steam turbine generator and a Leonardo DRS 36.5 MW direct-drive electric motor. Detailed Computer-Aided Design (CAD) modeling and Finite Element Analysis (FEA) were used to validate seakeeping performance, optimize system arrangements, and verify the structural integrity of deck reinforcements under static and buckling loads. Stability and damaged-condition survivability were evaluated using MAXSURF, demonstrating intact and damaged American Bureau of Shipping (ABS) compliance across operational load cases. Seakeeping analyses at sea states 4–9 confirmed that motions remain within recoverable righting-arm limits. A bottom-up financial analysis compared lifecycle costs over 25 years, showing that the retrofit’s $540M total cost—including capital, operations, maintenance, nuclear fuel, and nuclear insurance—is significantly lower than the $946M projected lifecycle cost of a conventional NPX and yields $405–806M in net savings when accounting for impending carbon taxes. Key regulatory challenges, including the absence of propulsion-specific nuclear regulations and port-entry protocols, were identified as primary non-technical hurdles, with emerging frameworks from industry consortia offering pathways to implementation. Nuclear microreactor retrofits can be technically and economically viable for large commercial vessels, positioning them as a potent strategy to meet the International Maritime Organization’s (IMO) net-zero targets by 2050.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162806</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design and testing of flat-panel pixel electrospray thrusters</title>
<link>https://hdl.handle.net/1721.1/162802</link>
<description>Design and testing of flat-panel pixel electrospray thrusters
Nachtigal, Catherine J.; Lozano, Paulo C.
Electrospray thrusters are a promising form of electric propulsion due to their compactness and high mass efficiency, making them advantageous in most mission scenarios, especially for small spacecraft. These thrusters operate through the emission of charged particles from an electrically conductive liquid flowing inside an array of capillaries or sharp permeable structures when a potential difference is applied between the liquid and a downstream extractor electrode. Emission is most efficient when operated in the pure ionic regime (PIR), with recent designs utilizing sharp porous structures to transport the liquid and provide electric field enhancement to induce ion evaporation. However, these structures are often difficult to manufacture uniformly at the scales required to ensure stable PIR emission. Existing electrospray thrusters also suffer in reliability due to the monolithic nature of their extractor design, which is prone to induce full array failure upon the shorting of a single emitter structure. These issues can be mitigated by a design that utilizes (1) a flat-panel array configuration, where the geometry and arrangement of each emitter element meet the physical requirements that ensure consistent manufacturing and PIR operation, and (2) a series of fuses interconnecting the individual extractor rings of each emitter structure, which break upon shorting, protecting the rest of the extractors in the array when a single emitter shorts. These fuses allow each emitter to function like a pixel on an LED screen, where the outage of a single pixel does not prevent the remaining pixels from producing the rest of the image. Through this research, an emitter design with properties that favor PIR emission is fabricated as a capillary on top of a porous glass substrate. The required starting voltage for this approach is simulated, and a preliminary characterization is performed using a non-integrated extractor. Though the emitter degrades over time due to the preliminary extractor setup, it is found that the emitter capillary can properly wick propellant and operate at moderate voltages for tens of minutes.
</description>
<pubDate>Wed, 30 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162802</guid>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>TutorUp: What If Your Students Were Simulated? Training Tutors to Address Engagement Challenges in Online Learning</title>
<link>https://hdl.handle.net/1721.1/162801</link>
<description>TutorUp: What If Your Students Were Simulated? Training Tutors to Address Engagement Challenges in Online Learning
Pan, Sitong; Schmucker, Robin; Garcia Bulle Bueno, Bernardo; Llanes, Salome Aguilar; Albo Alarcón, Fernanda; Zhu, Hangxiao; Teo, Adam; Xia, Meng
With the rise of online learning, many novice tutors lack experience engaging students remotely. We introduce TutorUp, a Large Language Model (LLM)-based system that enables novice tutors to practice engagement strategies with simulated students through scenario-based training. Based on a formative study involving two surveys (N1 = 86, N2 = 102) on student engagement challenges, we summarize scenarios that mimic real teaching situations. To enhance immersion and realism, we employ a prompting strategy that simulates dynamic online learning dialogues. TutorUp provides immediate and asynchronous feedback by referencing tutor–student online session dialogues and evidence-based teaching strategies from the learning science literature. In a within-subject evaluation (N = 16), participants rated TutorUp significantly higher than a baseline system without simulation capabilities regarding effectiveness and usability. Our findings suggest that TutorUp provides novice tutors with more effective training to learn and apply teaching strategies to address online student engagement challenges.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162801</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Ionic liquid electrospray beam target performance characterization</title>
<link>https://hdl.handle.net/1721.1/162800</link>
<description>Ionic liquid electrospray beam target performance characterization
Arestie, Steven M.; Marrese-Reading, Colleen M.; Shaik, Saba Z.
Electrospray thruster ground testing, with well understood facility effects, is of critical importance to qualify the technology for long duration flight missions. While there has been substantial work to understand the beam physics and plume dynamics of electrospray thrusters and the implications thereof on performance and lifetime, work to understand the impact of facility effects has been neglected until recently. Interactions between an electrospray plume and the vacuum chamber test facility have implications on both performance and lifetime. Therefore, any effort to characterize electrospray thruster performance and lifetime must be done with an understanding of facility effects. In some ways, this is no different from the significant investment that has been made to understand the facility effects for plasma thruster testing. However, there are different challenges with the management of positively charged, negatively charged, and neutral propellant particles across a distribution of particle charge and mass when testing electrospray thrusters in a vacuum chamber. The focus of this paper is to characterize the significance of secondary particles from the impact of ionic liquid electrosprays with a beam target, and the influence of a novel beam target design and biasing. Results on secondary current and mass flux measurements are presented with some initial results on secondary time-of-flight measurements from the beam target. Additionally, beam target modeling results are presented to support the experiments and interpretation of the results. The results revealed secondary particles with an average charge-to-mass ratio as low as 31 C/kg, and that an improperly biased beam target, or no beam target, can artificially inflate emitted current due to electron back streaming by as much as 20%. The experimental and modeling results suggest an optimized beam target and screen voltage of -100 V and -200 V, respectively.
If no consideration of facility effects is included in testing electrospray thrusters, performance, reliability, and lifetime can be adversely affected, and premature thruster failure may result. The work presented here improves our understanding of facility effects and our capabilities to mitigate them to successfully qualify and acceptance test electrospray thrusters for flight.
</description>
<pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162800</guid>
<dc:date>2025-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Semi‐Automated, High‐Throughput Approach for the Synthesis and Identification of Highly Photo‐Cytotoxic Iridium Complexes</title>
<link>https://hdl.handle.net/1721.1/162799</link>
<description>A Semi‐Automated, High‐Throughput Approach for the Synthesis and Identification of Highly Photo‐Cytotoxic Iridium Complexes
Kench, Timothy; Rahardjo, Arielle; Terrones, Gianmarco G; Bellamkonda, Adinarayana; Maher, Thomas E; Storch, Marko; Kulik, Heather J; Vilar, Ramon
The discovery of new compounds with pharmacological properties is usually a lengthy, laborious and expensive process. Thus, there is increasing interest in developing workflows that allow for the rapid synthesis and evaluation of libraries of compounds with the aim of identifying leads for further drug development. Herein, we apply combinatorial synthesis to build a library of 90 iridium(III) complexes (81 of which are new) over two synthesise‐and‐test cycles, with the aim of identifying potential agents for photodynamic therapy. We demonstrate the power of this approach by identifying highly active complexes that are well‐tolerated in the dark but display very low nM phototoxicity against cancer cells. To build a detailed structure–activity relationship for this class of compounds we have used density functional theory (DFT) calculations to determine some key electronic parameters and study correlations with the experimental data. Finally, we present an optimised semi‐automated synthesise‐and‐test protocol to obtain multiplex data within 72 hours.
</description>
<pubDate>Mon, 26 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162799</guid>
<dc:date>2024-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Amplified HF Release and Polymer Deconstruction Cascades Triggered by Mechanical Force</title>
<link>https://hdl.handle.net/1721.1/162798</link>
<description>Self-Amplified HF Release and Polymer Deconstruction Cascades Triggered by Mechanical Force
Hu, Yixin; Wang, Liqi; Kevlishvili, Ilia; Wang, Shu; Chiou, Chun-Yu; Shieh, Peyton; Lin, Yangju; Kulik, Heather J; Johnson, Jeremiah A; Craig, Stephen L
Hydrogen fluoride (HF) is a versatile reagent for material transformation, with applications in self-immolative polymers, remodeled siloxanes, and degradable polymers. The responsive in situ generation of HF in materials therefore holds promise for new classes of adaptive material systems. Here, we report the mechanochemically coupled generation of HF from alkoxy-gem-difluorocyclopropane (gDFC) mechanophores derived from the addition of difluorocarbene to enol ethers. Production of HF involves an initial mechanochemically assisted rearrangement of gDFC mechanophore to α-fluoro allyl ether whose regiochemistry involves preferential migration of fluoride to the alkoxy-substituted carbon, and ab initio steered molecular dynamics simulations reproduce the observed selectivity and offer insights into the mechanism. When the alkoxy gDFC mechanophore is derived from poly(dihydrofuran), the α-fluoro allyl ether undergoes subsequent hydrolysis to generate 1 equiv of HF and cleave the polymer chain. The hydrolysis is accelerated via acid catalysis, leading to self-amplifying HF generation and concomitant polymer degradation. The mechanically generated HF can be used in combination with fluoride indicators to generate an optical response and to degrade polybutadiene with embedded HF-cleavable silyl ethers (11 mol %). The alkoxy-gDFC mechanophore thus provides a mechanically coupled mechanism of releasing HF for polymer remodeling pathways that complements previous thermally driven mechanisms.
</description>
<pubDate>Sat, 30 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162798</guid>
<dc:date>2024-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Thermally Stable SO2-Releasing Mechanophore: Facile Activation, Single-Event Spectroscopy, and Molecular Dynamic Simulations</title>
<link>https://hdl.handle.net/1721.1/162797</link>
<description>A Thermally Stable SO2-Releasing Mechanophore: Facile Activation, Single-Event Spectroscopy, and Molecular Dynamic Simulations
Sun, Yunyan; Neary, William J; Huang, Xiao; Kouznetsova, Tatiana B; Ouchi, Tetsu; Kevlishvili, Ilia; Wang, Kecheng; Chen, Yingying; Kulik, Heather J; Craig, Stephen L; Moore, Jeffrey S
Polymers that release small molecules in response to mechanical force are promising candidates as next-generation on-demand delivery systems. Despite advancements in the development of mechanophores for releasing diverse payloads through careful molecular design, the availability of scaffolds capable of discharging biomedically significant cargos in substantial quantities remains scarce. In this report, we detail a nonscissile mechanophore built from an 8-thiabicyclo[3.2.1]octane 8,8-dioxide (TBO) motif that releases one equivalent of sulfur dioxide (SO2) from each repeat unit. The TBO mechanophore exhibits high thermal stability but is activated mechanochemically using solution ultrasonication in either organic solvent or aqueous media with up to 63% efficiency, equating to 206 molecules of SO2 released per 143.3 kDa chain. We quantified the mechanochemical reactivity of TBO by single-molecule force spectroscopy and resolved its single-event activation. The force-coupled rate constant for TBO opening reaches ∼9.0 s–1 at ∼1520 pN, and each reaction of a single TBO domain releases a stored length of ∼0.68 nm. We investigated the mechanism of TBO activation using ab initio steered molecular dynamic simulations and rationalized the observed stereoselectivity. These comprehensive studies of the TBO mechanophore provide a mechanically coupled mechanism of multi-SO2 release from one polymer chain, facilitating the translation of polymer mechanochemistry to potential biomedical applications.
</description>
<pubDate>Sat, 06 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162797</guid>
<dc:date>2024-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>Improving gas adsorption modeling for MOFs by local calibration of Hubbard U parameters</title>
<link>https://hdl.handle.net/1721.1/162796</link>
<description>Improving gas adsorption modeling for MOFs by local calibration of Hubbard U parameters
Cho, Yeongsu; Kulik, Heather J
While computational screening with density functional theory (DFT) is frequently employed for the screening of metal–organic frameworks (MOFs) for gas separation and storage, commonly applied generalized gradient approximations (GGAs) exhibit self-interaction errors, which hinder the predictions of adsorption energies. We investigate the Hubbard U parameter to augment DFT calculations for full periodic MOFs, targeting a more precise modeling of gas molecule–MOF interactions, specifically for N2, CO2, and O2. We introduce a calibration scheme for the U parameter, which is tailored for each MOF, by leveraging higher-level calculations on the secondary building unit (SBU) of the MOF. When applied to the full periodic MOF, the U parameter calibrated against hybrid HSE06 calculations of SBUs successfully reproduces hybrid-quality calculations of the adsorption energy of the periodic MOF. The mean absolute deviation of adsorption energies reduces from 0.13 eV for a standard GGA treatment to 0.06 eV with the calibrated U, demonstrating the utility of the calibration procedure when applied to the full MOF structure. Furthermore, attempting to use coupled cluster singles and doubles with perturbative triples calculations of isolated SBUs for this calibration procedure shows varying degrees of success in predicting the experimental heat of adsorption. It improves accuracy for N2 adsorption for cases of overbinding, whereas its impact on CO2 is minimal, and ambiguities in spin state assignment hinder consistent improvements of O2 adsorption. Our findings emphasize the limitations of cluster models and advocate the use of full periodic MOF systems with a calibrated U parameter, providing a more comprehensive understanding of gas adsorption in MOFs.
</description>
<pubDate>Tue, 16 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162796</guid>
<dc:date>2024-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative Electron Beam‐Single Atom Interactions Enabled by Sub‐20‐pm Precision Targeting</title>
<link>https://hdl.handle.net/1721.1/162795</link>
<description>Quantitative Electron Beam‐Single Atom Interactions Enabled by Sub‐20‐pm Precision Targeting
Roccapriore, Kevin M.; Ross, Frances M.; Klein, Julian
The ability to probe and control matter at the picometer scale is essential for advancing quantum and energy technologies. Scanning transmission electron microscopy offers powerful capabilities for materials analysis and modification, but sample damage, drift, and scan distortions hinder single atom analysis and deterministic manipulation. Materials analysis and modification via electron–solid interactions can be transformed by precise delivery of electrons to a specified atomic location, maintaining the beam position despite drift, and minimizing collateral dose. Here a fast, low-dose, sub-20-pm precision electron beam positioning technique is developed, “atomic lock-on” (ALO), which offers the ability to position the beam on a specific atomic column without previously irradiating that column. This technique is used to lock onto a single selected atomic location to repeatedly measure its weak electron energy loss signal despite sample drift. Moreover, electron beam–matter interactions in single atomic events are measured with μs time resolution. This enables observation of single-atom dynamics, such as atomic bistability, revealing partially bonded atomic configurations and recapture phenomena. This opens prospects for using electron microscopy for high-precision measurements and deterministic control of matter for quantum technologies.
</description>
<pubDate>Wed, 25 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162795</guid>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing resilience with natural growth targeting</title>
<link>https://hdl.handle.net/1721.1/162794</link>
<description>Enhancing resilience with natural growth targeting
Orphanides, Athanasios
Despite a number of helpful changes, including the adoption of an inflation target, the Fed's monetary policy strategy proved insufficiently resilient in recent years. While the Fed eased policy appropriately during the pandemic, it fell behind the curve during the post-pandemic recovery. During 2021, the Fed kept easing policy while the inflation outlook was deteriorating and the economy was growing considerably faster than the economy's natural growth rate—the sum of the Fed's 2% inflation goal and the growth rate of potential output. The resilience of the Fed's monetary policy strategy could be enhanced, and such errors be avoided, with guidance from a simple natural growth targeting rule that prescribes that the federal funds rate during each quarter be raised (cut) when projected nominal income growth exceeds (falls short of) the economy's natural growth rate. An illustration with real-time data and forecasts since the early 1990s shows that Fed policy has not persistently deviated from this simple rule, with the notable exception of the period coinciding with the Fed's post-pandemic policy error.
</description>
<pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162794</guid>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>N‐Heterocyclic Carbene‐Based Copolymers for Templated Synthesis and Stabilization of Gold Nanoparticles</title>
<link>https://hdl.handle.net/1721.1/162793</link>
<description>N‐Heterocyclic Carbene‐Based Copolymers for Templated Synthesis and Stabilization of Gold Nanoparticles
Nguyen, Suong T.; Brown, Christopher M.; Zhang, Wenxu; Kilgallon, Landon J.; Johnson, Jeremiah A.
Surface functionalization and colloidal stability are pivotal for numerous applications of gold nanoparticles (Au-NPs). Over the past decade, N-heterocyclic carbenes (NHCs) have emerged as promising ligands for stabilizing Au-NPs owing to their ease of synthesis, structural diversity, and strong metal-ligand bonds. Here, we introduce new Au(I)–NHC copolymer scaffolds as precursors to multidentate NHC-protected Au-NPs. Ring-opening metathesis copolymerization of a norbornene-appended Au(I)−NHC complex with another functionalized norbornene comonomer provides NHC–Au(I) copolymers with modular compositions and structures. Upon reduction, these copolymers yield multidentate polyNHC-coated Au-NPs with varied properties and corona functionalities dictated by the secondary monomer. These nanoparticles exhibit excellent size homogeneity and stability against aggregation in various buffers, cell culture media, and under exposure to electrolytes, oxidants, and exogenous thiols over extended periods. Moreover, we demonstrate post-synthetic surface functionalization reactions of polyNHC−Au-NPs while maintaining colloidal stability, highlighting their robustness and potential for applications such as bioconjugation. Overall, these findings underscore the potential of ROMP-derived NHC-containing copolymers as highly tunable and versatile multidentate ligands that may be suitable for other inorganic colloids and flat surfaces.
</description>
<pubDate>Mon, 17 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162793</guid>
<dc:date>2025-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Simultaneous 3D quantitative magnetization transfer imaging and susceptibility mapping</title>
<link>https://hdl.handle.net/1721.1/162792</link>
<description>Simultaneous 3D quantitative magnetization transfer imaging and susceptibility mapping
Jang, Albert; Chan, Kwok‐Shing; Mareyam, Azma; Stockmann, Jason; Huang, Susie Yi; Wang, Nian; Jang, Hyungseok; Lee, Hong‐Hsi; Liu, Fang
Purpose: Introduce a unified acquisition and modeling strategy to simultaneously quantify magnetization transfer (MT), tissue susceptibility (&#120594;) and T2*.
Theory and Methods: Magnetization transfer is induced through the application of off-resonance irradiation between excitation and acquisition of an RF-spoiled gradient-echo scheme, where free pool spin–lattice relaxation (T1F), macromolecular proton fraction (f) and magnetization exchange rate (kF) were calculated by modeling the magnitude of the MR signal using a binary spin-bath MT model with B1+ inhomogeneity correction via Bloch–Siegert shift. Simultaneously, a multi-echo acquisition is incorporated into this framework to measure the time evolution of both signal magnitude and phase, which was further modeled for estimating T2* and tissue susceptibility. In this work, we demonstrate the feasibility of this new acquisition and modeling strategy in vivo on brain tissue.
Results: In vivo brain experiments were conducted on five healthy subjects to validate our method. Utilizing an analytically derived signal model, we simultaneously obtained 3D T1F, f, kF, &#120594; and T2* maps of the whole brain. Our results from the brain regional analysis show good agreement with those previously reported in the literature, which used separate MT and QSM methods.
Conclusion: A unified acquisition and modeling strategy based on an analytical signal model that fully leverages both the magnitude and phase of the acquired signals was demonstrated and validated for simultaneous MT, susceptibility and T2* quantification that are free from B1+ bias.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162792</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>The Development of Carbon Markets in Upper‐Middle‐Income Countries</title>
<link>https://hdl.handle.net/1721.1/162791</link>
<description>The Development of Carbon Markets in Upper‐Middle‐Income Countries
Stek, Pieter E.; Lima‐de‐Oliveira, Renato; Vasudhevan, Thessa
Upper-middle-income economies face a specific set of trade-offs when reducing carbon emissions, which differ from the trade-offs faced in low- and high-income economies. To mobilize domestic funds, middle-income countries are developing carbon markets to attract private sector investment. This study advances a theoretical framework for carbon market development and explores the process in Brazil, Indonesia, and Malaysia. The case of Malaysia is examined in depth due to the slow development of its carbon market compared to its peers. Analysis reveals that Malaysia faces a carbon market dilemma due to high domestic emissions and internal challenges related to energy market regulation and land ownership, which have hindered the emergence of a pro-carbon market coalition. In contrast, Brazil and Indonesia have been more active in the international voluntary carbon market and have implemented key regulations with domestic political support. This study provides insights into the challenges and opportunities of carbon market development in middle-income economies, highlighting the importance of resource endowments and an enabling coalition for successful implementation.
</description>
<pubDate>Wed, 05 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162791</guid>
<dc:date>2025-03-05T00:00:00Z</dc:date>
</item>
<item>
<title>Quality Disclosure and Regulation: Scoring Design in Medicare Advantage</title>
<link>https://hdl.handle.net/1721.1/162790</link>
<description>Quality Disclosure and Regulation: Scoring Design in Medicare Advantage
Vatter, Benjamin
Policymakers and market intermediaries often use quality scores to alleviate asymmetric information about product quality. Scores affect the demand for quality and, in equilibrium, its supply. Equilibrium effects break the rule whereby more information is always better, and the optimal design of scores must account for them. In the context of Medicare Advantage, I find that consumers' information is limited, and quality is inefficiently low. A simple design alleviates these issues and increases total welfare by 3.7 monthly premiums. More than half of the gains stem from scores' effect on quality rather than information. Scores can outperform full-information outcomes by regulating inefficient oligopolistic quality provision, and a binary certification of quality attains 98% of this welfare. Scores are informative even when coarse; firms' incentives are to produce quality at the scoring threshold, which consumers know. The primary design challenge of scores is to dictate thresholds and thus regulate quality.
</description>
<pubDate>Tue, 10 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162790</guid>
<dc:date>2025-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Chemiresistive Behavior in Conductive Polymer/MOF Composites</title>
<link>https://hdl.handle.net/1721.1/162789</link>
<description>Robust Chemiresistive Behavior in Conductive Polymer/MOF Composites
Roh, Heejung; Kim, Dong‐Ha; Cho, Yeongsu; Jo, Young‐Moo; del Alamo, Jesús A; Kulik, Heather J; Dincă, Mircea; Gumyusenge, Aristide
Metal-organic frameworks (MOFs) are promising materials for gas sensing but are often limited to single-use detection. A hybridization strategy is demonstrated that synergistically deploys conductive MOFs (cMOFs) and conductive polymers (cPs) as two complementary mixed ionic-electronic conductors in high-performing stand-alone chemiresistors. This work presents significant improvement in i) sensor recovery kinetics, ii) cycling stability, and iii) dynamic range at room temperature. The effect of hybridization across well-studied cMOFs is demonstrated based on 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) and 2,3,6,7,10,11-hexaiminotriphenylene (HITP) ligands with varied metal nodes (Co, Cu, Ni). A comprehensive mechanistic study is conducted to relate energy band alignments at the heterojunctions between the MOFs and the polymer with sensing thermodynamics and binding kinetics. The findings reveal that hole enrichment of the cMOF component upon hybridization leads to selective enhancement in desorption kinetics, enabling significantly improved sensor recovery at room temperature, and thus long-term response retention. This mechanism is further supported by density functional theory calculations on sorbate–analyte interactions. It is also found that alloying cPs and cMOFs enables facile thin film co-processing and device integration, potentially unlocking the use of these hybrid conductors in diverse electronic applications.
</description>
<pubDate>Wed, 17 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162789</guid>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Internal Catalysis in Dynamic Hydrogels with Associative Thioester Cross-Links</title>
<link>https://hdl.handle.net/1721.1/162788</link>
<description>Internal Catalysis in Dynamic Hydrogels with Associative Thioester Cross-Links
Zhang, Vivian; Ou, Carrie; Kevlishvili, Ilia; Hemmingsen, Christina M; Accardo, Joseph V; Kulik, Heather J; Kalow, Julia A
Thioesters are an essential functional group in biosynthetic pathways, which has motivated their development as reactive handles in probes and peptide assembly. Thioester exchange is typically accelerated by catalysts or elevated pH. Here, we report the use of bifunctional aromatic thioesters as dynamic covalent cross-links in hydrogels, demonstrating that at physiologic pH in aqueous conditions, transthioesterification facilitates stress relaxation on the time scale of hundreds of seconds. We show that intramolecular hydrogen bonding is responsible for accelerated exchange, evident in both molecular kinetics and macromolecular stress relaxation. Drawing from concepts in the vitrimer literature, this system exemplifies how dynamic cross-links that exchange through an associative mechanism enable tunable stress relaxation without altering stiffness.
</description>
<pubDate>Fri, 03 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162788</guid>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>CH−π Interactions Are Required for Human Galectin-3 Function</title>
<link>https://hdl.handle.net/1721.1/162787</link>
<description>CH−π Interactions Are Required for Human Galectin-3 Function
Diehl, Roger C; Chorghade, Rajeev S; Keys, Allison M; Alam, Mohammad Murshid; Early, Stephen A; Dugan, Amanda E; Krupkin, Miri; Ribbeck, Katharina; Kulik, Heather J; Kiessling, Laura L
Glycan-binding proteins, or lectins, recognize distinct structural elements of polysaccharides, to mediate myriad biological functions. Targeting glycan-binding proteins involved in human disease has been challenging due to an incomplete understanding of the molecular mechanisms that govern protein-glycan interactions. Bioinformatics and structural studies of glycan-binding proteins indicate that aromatic residues with the potential for CH-π interactions are prevalent in glycan-binding sites. However, the contributions of these CH-π interactions to glycan binding and their relevance in downstream function remain unclear. An emblematic lectin, human galectin-3, recognizes lactose and &lt;i&gt;N&lt;/i&gt;-acetyllactosamine-containing glycans by positioning the electropositive face of a galactose residue over the tryptophan 181 (W181) indole forming a CH-π interaction. We generated a suite of galectin-3 W181 variants to assess the importance of these CH-π interactions to glycan binding and function. As determined experimentally and further validated with computational modeling, variants with smaller or less electron-rich aromatic side chains (W181Y, W181F, W181H) or sterically similar but nonaromatic residues (W181M, W181R) showed poor or undetectable binding to lactose and attenuated ability to bind mucins or agglutinate red blood cells. The latter functions depend on multivalent binding, highlighting that weakened CH-π interactions cannot be overcome by avidity. Two galectin-3 variants with disrupted hydrogen bonding interactions (H158A and E184A) showed similarly impaired lactose binding. Molecular simulations demonstrate that all variants have decreased binding orientation stability relative to native galectin-3. Thus, W181 collaborates with the endogenous hydrogen bonding network to enhance binding affinity for lactose, and abrogation of these CH-π interactions is as deleterious as eliminating key hydrogen bonding interactions. 
These findings underscore the critical roles of CH-π interactions in carbohydrate binding and lectin function and will aid the development of novel lectin inhibitors.
</description>
<pubDate>Thu, 18 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162787</guid>
<dc:date>2024-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Predictions of Spin-Crossover Complex Properties through DFT Calculations with a Local Hybrid Functional</title>
<link>https://hdl.handle.net/1721.1/162786</link>
<description>Improving Predictions of Spin-Crossover Complex Properties through DFT Calculations with a Local Hybrid Functional
Rajpurohit, Sangeeta; Vennelakanti, Vyshnavi; Kulik, Heather J
We conducted a study on the performance of the local hybrid exchange-correlation functional PBE0r for a set of 95 experimentally characterized iron spin-crossover (SCO) complexes [Vennelakanti, V.; &lt;i&gt;J. Chem. Phys.&lt;/i&gt; 2023, 159, 024120]. The PBE0r functional is a variant of PBE0 where the exchange correction is restricted to on-site terms formulated on the basis of local orbitals. We determine the free parameters of the PBE0r functional against the experimental data and other hybrid functionals. With a Hartree-Fock (HF) exchange factor of 4%, the PBE0r functional accurately reproduces the electronic and free-energy trends predicted in prior DFT studies for these 95 complexes by using the B3LYP functional. Larger values of HF exchange stabilize high-spin states. The PBE0r-predicted bond lengths tend to exceed the experimental bond lengths, although bond lengths are less sensitive to HF exchange than in global hybrids. The predicted SCO transition temperatures &lt;i&gt;T&lt;/i&gt;&lt;sub&gt;1/2&lt;/sub&gt; from PBE0r correlate moderately with the experimental transition temperatures, showing a slight improvement compared to the previous modB3LYP-predicted &lt;i&gt;T&lt;/i&gt;&lt;sub&gt;1/2&lt;/sub&gt;. This study suggests that the PBE0r functional is computationally cost-effective and offers the possibility of simulating larger complexes with accuracy comparable to global hybrid functionals, provided the HF-exchange parameter is carefully optimized.
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162786</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Ligand‐Mediated Quantum Yield Enhancement in 1‐D Silver Organothiolate Metal–Organic Chalcogenolates</title>
<link>https://hdl.handle.net/1721.1/162785</link>
<description>Ligand‐Mediated Quantum Yield Enhancement in 1‐D Silver Organothiolate Metal–Organic Chalcogenolates
X‐ray free electron laser (XFEL) microcrystallography and synchrotron single‐crystal crystallography are used to evaluate the role of organic substituent position on the optoelectronic properties of metal–organic chalcogenolates (MOChas). MOChas are crystalline 1D and 2D semiconducting hybrid materials that have varying optoelectronic properties depending on composition, topology, and structure. While MOChas have attracted much interest, small crystal sizes impede routine crystal structure determination. A series of constitutional isomers where the aryl thiol is functionalized by either methoxy or methyl ester are solved by small molecule serial femtosecond X‐ray crystallography (smSFX) and single crystal rotational crystallography. While all the methoxy examples have a low quantum yield (0‐1%), the methyl ester in the &lt;i&gt;ortho&lt;/i&gt; position yields a high quantum yield of 22%. The proximity of the oxygen atoms to the silver inorganic core correlates to a considerable enhancement of quantum yield. Four crystal structures are solved at a resolution range of 0.8–1.0 Å revealing a collapse of the 2D topology for functional groups in the 2‐ and 3‐ positions, resulting in needle‐like crystals. Further analysis using density functional theory (DFT) and many‐body perturbation theory (MBPT) enables the exploration of complex excitonic phenomena within easily prepared material systems.
</description>
<pubDate>Sun, 01 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162785</guid>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of the Deployable HF Vector Sensor for the AERO-VISTA Spacecraft</title>
<link>https://hdl.handle.net/1721.1/162782</link>
<description>Development of the Deployable HF Vector Sensor for the AERO-VISTA Spacecraft
Silver, Mark; Lopez, Alai; Howe, Daniel; Thompson, Erik; Morris, Alexander; Fenn, Alan; Knapp, Mary; Erickson, Philip; Lind, Frank; Paritsky, Lenny; Masterson, Rebecca; Ammons, Kristen; Belsten, Nicholas; Kononov, Ekaterina; Payne, Cadence
The Auroral Emissions Radio Observer (AERO) and Vector Interferometry Space Technology using AERO (VISTA) CubeSat missions will use two identical 6U CubeSats developed to measure HF auroral emissions from Low Earth Orbit for NASA’s Space Mission Directorate (SMD) for Heliophysics. Each CubeSat employs a unique antenna, called a Vector Sensor Antenna (VSA), to measure all six electromagnetic degrees of freedom of incoming HF radiation via a combination of loop, dipole and monopole antennas. The VSA payload stows into a compact volume within the 6U spacecraft, and through a series of deployments, makes a 4 m by 4 m by 2.3 m antenna array. The relatively large antenna element deployment from such a small initial volume is achieved using fiberglass composite tape springs which unroll to form the antenna elements. These tape springs fall into a class of structural elements called High Strain Composites, which are becoming more commonly used in space missions. This paper describes the development, integration and testing of the AERO-VISTA VSA payload prototype.
2024 IEEE Aerospace Conference, Big Sky, MT, USA, 2-9 March
</description>
<pubDate>Mon, 13 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162782</guid>
<dc:date>2024-05-13T00:00:00Z</dc:date>
</item>
<item>
<title>Accessibility for Whom? Perceptions of Mobility Barriers Across Disability Groups and Implications for Designing Personalized Maps</title>
<link>https://hdl.handle.net/1721.1/162781</link>
<description>Accessibility for Whom? Perceptions of Mobility Barriers Across Disability Groups and Implications for Designing Personalized Maps
Li, Chu; Pang, Rock Yuren; Labbé, Delphine; Eisenberg, Yochai; Hosseini, Maryam; Froehlich, Jon
Today’s mapping tools fail to address the varied experiences of different mobility device users. This paper presents a large-scale online survey exploring how five mobility groups—users of canes, walkers, mobility scooters, manual wheelchairs, and motorized wheelchairs—perceive sidewalk barriers and differences therein. Using 52 sidewalk barrier images, respondents evaluated their confidence in navigating each scenario. Our findings (N=190) reveal variations in barrier perceptions across groups, while also identifying shared concerns. To further demonstrate the value of this data, we showcase its use in two custom prototypes: a visual analytics tool and a personalized routing tool. Our survey findings and open dataset advance work in accessibility-focused maps, routing algorithms, and urban planning.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162781</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Many-body expansion based machine learning models for octahedral transition metal complexes</title>
<link>https://hdl.handle.net/1721.1/162780</link>
<description>Many-body expansion based machine learning models for octahedral transition metal complexes
Meyer, Ralf; Chu, Daniel BK; Kulik, Heather J
Graph-based machine learning (ML) models for material properties show great potential to accelerate virtual high-throughput screening of large chemical spaces. However, in their simplest forms, graph-based models do not include any 3D information and are unable to distinguish stereoisomers such as those arising from different orderings of ligands around a metal center in coordination complexes. In this work we present a modification to revised autocorrelation descriptors, a molecular graph featurization method, for predicting spin state dependent properties of octahedral transition metal complexes (TMCs). Inspired by analytical semi-empirical models for TMCs, the new modeling strategy is based on the many-body expansion (MBE) and allows one to tune the captured stereoisomer information by changing the truncation order of the MBE. We present the necessary modifications to include this approach in two commonly used ML methods, kernel ridge regression and feed-forward neural networks. On a test set composed of all possible isomers of binary TMCs, the best MBE models achieve mean absolute errors (MAEs) of 2.75 kcal mol−1 on spin-splitting energies and 0.26 eV on frontier orbital energy gaps, a 30%–40% reduction in error compared to models based on our previous approach. We also observe improved generalization to previously unseen ligands where the best-performing models exhibit MAEs of 4.00 kcal mol−1 (i.e. a 0.73 kcal mol−1 reduction) on the spin-splitting energies and 0.53 eV (i.e. a 0.10 eV reduction) on the frontier orbital energy gaps. Because the new approach incorporates insights from electronic structure theory, such as ligand additivity relationships, these models exhibit systematic generalization from homoleptic to heteroleptic complexes, allowing for efficient screening of TMC search spaces.
</description>
<pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162780</guid>
<dc:date>2025-01-06T00:00:00Z</dc:date>
</item>
<item>
<title>Mixed-Chalcogen 2D Silver Phenylchalcogenides (AgE1–xExPh; E = S, Se, Te)</title>
<link>https://hdl.handle.net/1721.1/162779</link>
<description>Mixed-Chalcogen 2D Silver Phenylchalcogenides (AgE1–xExPh; E = S, Se, Te)
Lee, Woo Seok; Cho, Yeongsu; Paritmongkol, Watcharaphol; Sakurada, Tomoaki; Ha, Seung Kyun; Kulik, Heather J; Tisdale, William A
Alloying is a powerful strategy for tuning the electronic band structure and optical properties of semiconductors. Here, we investigate the thermodynamic stability and excitonic properties of mixed-chalcogen alloys of two-dimensional (2D) hybrid organic–inorganic silver phenylchalcogenides (AgEPh; E = S, Se, Te). Using a variety of structural and optical characterization techniques, we demonstrate that the AgSePh-AgTePh system forms homogeneous alloys (AgSe1–xTexPh, 0 ≤ x ≤ 1) across all compositions, whereas the AgSPh-AgSePh and AgSPh-AgTePh systems exhibit distinct miscibility gaps. Density functional theory calculations reveal that chalcogen mixing is energetically unfavorable in all cases but comparable in magnitude to the ideal entropy of mixing at room temperature. Because AgSePh and AgTePh have the same crystal structure (which is different from AgSPh), alloying is predicted to be thermodynamically preferred over phase separation in the case of AgSePh-AgTePh, whereas phase separation is predicted to be more favorable than alloying for both the AgSPh-AgSePh and AgSPh-AgTePh systems, in agreement with experimental observations. Homogeneous AgSe1–xTexPh alloys exhibit continuously tunable excitonic absorption resonances in the ultraviolet–visible range, while the emission spectrum reveals competition between exciton delocalization (characteristic of AgSePh) and localization behavior (characteristic of AgTePh). Overall, these observations provide insight into the thermodynamics of 2D silver phenylchalcogenides and the effect of lattice composition on electron–phonon interactions in 2D hybrid organic–inorganic semiconductors.
</description>
<pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162779</guid>
<dc:date>2024-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of ARPA-E Energy Storage Program: Capability and Capacity to Solve Battery Waste Issues</title>
<link>https://hdl.handle.net/1721.1/162778</link>
<description>Assessment of ARPA-E Energy Storage Program: Capability and Capacity to Solve Battery Waste Issues
Lubeck, Mila A.
Society today relies on batteries to power our devices, electric vehicles, and, at growing rates, grid-scale energy storage. As the demand for batteries increases, so does the amount of waste produced. The Advanced Research Projects Agency-Energy (ARPA-E) has tried to tackle the battery waste issue through its energy storage program with a project called Catalyzing Innovative Research for Circular Use of Long-Lived Advanced Rechargeable (CIRCULAR). The program intends to introduce Electric Vehicle (EV) battery technology with longer lifespans and driving ranges to a circular supply chain. They also want to integrate an EV battery health monitor into the circular supply chain practices. The program intends to determine the ability of the project to commercialize at scale through analytics. This article notes previous ARPA-E efforts to solve the battery waste issue through a circular supply chain and develops a proposed innovation policy framework for a circular battery economy. This framework is separated into five categories, which identify emerging technologies and create a system of federally funded waste and recycling sites. We propose integrating support mechanisms and using neoclassical economic tools to induce innovation. Also, we recommend collaborating with the appropriate agencies for the creation, continuation, and oversight of facilities. Lastly, we will include technology transfer of emerging technology for testing and validation upon hand-off. The article utilizes the proposed framework to guide policy recommendations and contribute one possible solution for the battery waste issue through a national system of transport and collection for material recovery, reuse, and cascaded use.
</description>
<pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162778</guid>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>First-principles study of SiO2/MoS2 and SiO2/WS2 interfaces: A comparative analysis of surface terminations, van der Waals corrections, and functionals</title>
<link>https://hdl.handle.net/1721.1/162777</link>
<description>First-principles study of SiO2/MoS2 and SiO2/WS2 interfaces: A comparative analysis of surface terminations, van der Waals corrections, and functionals
Fotopoulos, Vasileios; Siebenhofer, Matthäus; Huang, Mantao; Xu, Longlong; Yildiz, Bilge
This study presents a first-principles investigation of SiO2/MoS2 and SiO2/WS2 interfaces, examining how surface terminations, van der Waals (vdW) corrections, and functional choices impact structural stability and electronic properties. Using density functional theory with generalized gradient approximation (GGA; PBE, PBEsol, revPBE), meta-GGA (SCAN, r2SCAN), and hybrid (PBE0) functionals, we assess the effect of vdW correction schemes (D2, D3, Tkatchenko-Scheffler) on interfacial energetics and separation. The results show that vdW corrections are essential for accurate GGA descriptions, while meta-GGAs yield similar accuracy even without them, enabling efficient modeling of SiO2/2D heterostructures. Additionally, SiO2 surface morphology plays a significant role, with fully saturated interfaces showing lower energy and greater interlayer separations. In both SiO2/MoS2 and SiO2/WS2 systems, band gap predictions using PBE0 closely match the experimental values, underscoring the value of hybrid functionals for accurate electronic structure calculations.
</description>
<pubDate>Mon, 19 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162777</guid>
<dc:date>2025-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Optoionics: New opportunity for ionic conduction-based radiation detection</title>
<link>https://hdl.handle.net/1721.1/162776</link>
<description>Optoionics: New opportunity for ionic conduction-based radiation detection
Defferriere, Thomas; Tuller, Harry L.
Optoionics, involving light-modulated ionic transport in ionic solids, parallels optoelectronics in semiconductors and offers novel device design opportunities across various fields. Among these opportunities, grain boundary phenomena related to radiation-induced electron/hole pair generation and charge trapping at the boundaries causing a modulation in ionic current could enable fast, sensitive, and reversible radiation detectors. The robustness of ionic solids in chemical, structural, and thermal aspects in turn makes them scalable and robust alternatives to traditional semiconductor detectors. This article explores the theoretical underpinnings, experimental breakthroughs, and design considerations needed to optimize such optoionic devices.
</description>
<pubDate>Tue, 13 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162776</guid>
<dc:date>2025-05-13T00:00:00Z</dc:date>
</item>
<item>
<title>Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations</title>
<link>https://hdl.handle.net/1721.1/162775</link>
<description>Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations
Danry, Valdemar; Pataranutaporn, Pat; Groh, Matthew; Epstein, Ziv
Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), have the capability to generate not just misinformation, but also deceptive explanations that can justify and propagate false information and discredit true information. We examined the impact of deceptive AI generated explanations on individuals’ beliefs in a pre-registered online experiment with 11,780 observations from 589 participants. We found that in addition to being more persuasive than accurate and honest explanations, AI-generated deceptive explanations can significantly amplify belief in false news headlines and undermine true ones as compared to AI systems that simply classify the headline incorrectly as being true/false. Moreover, our results show that logically invalid explanations are deemed less credible - diminishing the effects of deception. This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162775</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures</title>
<link>https://hdl.handle.net/1721.1/162774</link>
<description>Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures
Smith, Miana; Forman, Jack; Abdel-Rahman, Amira; Wang, Sophia; Gershenfeld, Neil
Prototyping large, electronically integrated structures is challenging and often results in unwieldy wiring, weak mechanical properties, expensive iterations, or limited reusability. While many electronics prototyping kits exist for small-scale objects, relatively few methods exist to freely iterate large and sturdy structures with integrated electronics. To address this gap, we present the Voxel Invention Kit (VIK), which uses reconfigurable blocks that assemble into high-stiffness, lightweight structures with integrated electronics. We do this by creating cubic blocks composed of PCBs that carry electrical routing and components and can be (re)configured with simple tools into a variety of structures. To ensure structural stability without expertise, we created a tool to configure structures and simulate applied loads, which we validated with mechanical testing data. Using VIK, we produced devices reconfigured from a shared set of voxels: multiple iterations of a customizable AV lounge seat, a dance floor game, and a force-sensing bridge.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162774</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>The energetic landscape of CH–π interactions in protein–carbohydrate binding</title>
<link>https://hdl.handle.net/1721.1/162773</link>
<description>The energetic landscape of CH–π interactions in protein–carbohydrate binding
Keys, Allison M; Kastner, David W; Kiessling, Laura L; Kulik, Heather J
CH–π interactions between carbohydrates and aromatic amino acids play an essential role in biological systems that span all domains of life. Quantifying the strength and importance of these CH–π interactions is challenging because these interactions involve several atoms and can exist in many distinct orientations. To identify an orientational landscape of CH–π interactions, we constructed a dataset of close contacts formed between β-D-galactose residues and the aromatic amino acids, tryptophan, tyrosine, and phenylalanine, across crystallographic structures deposited in the Protein Data Bank. We carried out quantum mechanical calculations to quantify their interaction strengths. The data indicate that tryptophan-containing CH–π interactions have more favorable interaction energies than those formed by tyrosine or phenylalanine. The energetic differences between these amino acids are caused by the aromatic ring system electronics and size. We use individual distance and angle features to train random forest models to successfully predict the first-principles computed energetics of CH–π interactions. Using insights from our models, we define a tradeoff in CH–π interaction strength arising from the proximity of galactose carbons 1 and 2 versus carbons 4 and 6 to the aromatic amino acid. Our work demonstrates that a feature of CH–π stacking interactions is that numerous orientations allow for highly favorable interaction strengths.
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162773</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Space-Based Solar Power: Implications for Operational Robustness in Lunar EVAs and Exploration Architectures</title>
<link>https://hdl.handle.net/1721.1/162772</link>
<description>Space-Based Solar Power: Implications for Operational Robustness in Lunar EVAs and Exploration Architectures
MacRobbie, Madelyn; Tretiakova, Anna; Chen, Vanessa; Ma, Clara
Human exploration of the lunar surface has large power requirements for both the lunar base and for rover exploration. NASA’s recent contract awards indicate a reliance on fission surface power. While nuclear options provide reliable power to lunar base locations, they have a limited reach that restricts exploration capacity. The Space Exploration Vehicle’s 125-mile range only allows coverage of 0.34% of the lunar surface. A constellation of space-based solar power (SBSP) satellites paired with pressurized rovers allows 24-h, full-surface coverage on excursions from the lunar base. A case study is conducted of the constellation design, system cost, operational lifetime, and power provided using SBSP. Results of the case study demonstrate that SBSP provides an additional 20 kW/h of emergency power and extends EVA range from 125 to 1000 km to cover 26 of the lunar geologic units, at an added lifecycle cost of less than 1% of the baseline mission cost. Addition of a SBSP constellation for rovers provides operational flexibility, safety, and robustness to enable multiple lunar exploration architectures beyond that enabled by surface power infrastructures, and should be further explored for lunar missions.
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162772</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards optimal energy efficiency: analysing generalized and tailored retrofitting decisions</title>
<link>https://hdl.handle.net/1721.1/162771</link>
<description>Towards optimal energy efficiency: analysing generalized and tailored retrofitting decisions
Castro, Wilamy; Barrelas, Joana; Mendes, Maria P.; Reinhart, Christoph; Silva, Ana
A building’s energy performance, in terms of thermal comfort, energy demand, cost and CO2 emissions, is considerably affected by its envelope. Enhancing energy efficiency through maintenance and retrofitting is essential to reduce consumption and emissions, thereby mitigating climate change. However, selecting the most cost-effective retrofitting solution remains challenging for decision-makers. Analysing real data across multiple scenarios provides valuable insights, supporting informed decision-making. This study discusses the impact of thermal retrofitting decisions on the energy efficiency of an existing single-family home, by analysing multiple scenarios concerning the implementation of measures on external walls, roof and windows. Both generalized and tailored approaches, particularly for external walls, are evaluated. Options include different insulation materials for the roof and façades—with the latter employing an external thermal insulation composite system (ETICS)—and various framing materials with double-glazing for window replacement. Various scenarios are discussed based on thermal simulations, implementation costs, and cost-benefit analysis. Additionally, multi-criteria (MCA) and sensitivity (SA) analyses are conducted to determine the optimal retrofitting solution. The most effective combined strategy applies ETICS with rock wool on the external walls, extruded polystyrene panels on the roof, and aluminium-framed windows with a thermal break, balancing energy efficiency, costs, durability, and sustainability. Although not part of the optimal solution, tailored retrofitting of façade F2 presents a viable alternative under cost constraints.
</description>
<pubDate>Sat, 12 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162771</guid>
<dc:date>2025-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Thrust Density in Porous Electrospray Thrusters</title>
<link>https://hdl.handle.net/1721.1/162770</link>
<description>Thrust Density in Porous Electrospray Thrusters
Corrado, Matthew N.; Lozano, Paulo C.
A path for increasing thrust density in electrospray thrusters is through fabrication of denser arrays of emitters. Conventional arguments assume thrust to scale linearly with the emitter number, but there has not been a critical analysis to examine the behavior of this trend at very high densities. Here, we describe a model for thruster current as a function of array density which considers how hydraulic losses change as density increases, and we find that the ideal scaling is a poor approximation. In the optimistic cases, the current increases monotonically with density but with diminishing returns. In the worst cases, packing more emitters into the same space is detrimental as hydraulic losses dominate over gains in the number of emitters. Under certain conditions there is an optimum density which maximizes the net output. We also describe the fabrication and testing of a family of porous electrospray emitters featuring pore sizes in the 10 nm to 100 nm range, with the purpose of leveraging the high precision and uniformity afforded by these materials to develop a platform suitable for experimentally validating the density models. A set of test results from two of these thrusters is presented, both having a 450 µm pitch but with different pore sizes. The 100 nm pore thruster shows characteristics similar to other porous electrosprays, emitting in the pure-ion mode at currents up to 400 µA and exhibiting current-temperature behavior commensurate with the liquid viscosity. The 10 nm pore thruster appears to be greatly flow-restricted, producing about an order of magnitude less current at analogous conditions and showing negligible response to changes in temperature.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom 14-19 September 2025
</description>
<pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162770</guid>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tractoriae and the logistics of Carolingian entourages</title>
<link>https://hdl.handle.net/1721.1/162769</link>
<description>Tractoriae and the logistics of Carolingian entourages
Goldberg, Eric J.
Entourages played a central role in Carolingian politics and military organization. Yet historians have neglected the important question of how kings and magnates supplied their retinues. This article investigates that topic by examining an overlooked genre of evidence: tractoriae or royal letters of requisition. Louis the Pious revived the use of these late Roman and Merovingian documents to authorize magnates to collect supplies for their followers and horses. The provisions enumerated in tractoriae give us rare insight into the composition and scale of ninth-century retinues and armies. Their disappearance during the reign of Charles the Bald was bound up with larger transformations of late Carolingian politics.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162769</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships</title>
<link>https://hdl.handle.net/1721.1/162767</link>
<description>Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships
Boggust, Angie; Bang, Hyemin; Strobelt, Hendrik; Satyanarayan, Arvind
While interpretability methods identify a model’s learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To assess whether models have learned human-aligned abstractions, we introduce abstraction alignment, a methodology to compare model behavior against formal human knowledge. Abstraction alignment externalizes domain-specific human knowledge as an abstraction graph, a set of pertinent concepts spanning levels of abstraction. Using the abstraction graph as a ground truth, abstraction alignment measures the alignment of a model’s behavior by determining how much of its uncertainty is accounted for by the human abstractions. By aggregating abstraction alignment across entire datasets, users can test alignment hypotheses, such as which human concepts the model has learned and where misalignments recur. In evaluations with experts, abstraction alignment differentiates seemingly similar errors, improves the verbosity of existing model-quality metrics, and uncovers improvements to current human abstractions.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162767</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Flows Transmitted by Forced Surface Gravity Waves</title>
<link>https://hdl.handle.net/1721.1/162766</link>
<description>Deep Flows Transmitted by Forced Surface Gravity Waves
Pizzo, Nick; Wagner, Gregory L.
We examine a two-dimensional deep-water surface gravity wave packet generated by a pressure disturbance in the Lagrangian reference frame. The pressure disturbance has the form of a narrow-banded weakly nonlinear deep-water wave packet. During forcing, the vorticity equation implies that the momentum resides entirely in the near-surface Lagrangian-mean flow, which in this context is often called the “Stokes drift”. After the forcing turns off, the wave packet propagates away from the forcing region, carrying with it most of the energy imparted by the forcing. These waves together with their induced long wave response have no momentum in a depth-integrated sense, in agreement with the classical results of Longuet-Higgins and Stewart (Deep Sea Research and Oceanographic Abstracts 11, 529−562) and McIntyre (Journal of Fluid Mechanics 106, 331−347). The total flow associated with the propagating packet has no net momentum. In contrast with the finite-depth scenario discussed by McIntyre (Journal of Fluid Mechanics 106, 331−347), however, momentum imparted to the fluid during forcing resides in a dipolar structure that persists in the forcing region—rather than being carried away by shallow-water waves. We conclude by examining waves propagating from deep to shallow water and show that wave packets, which initially have no momentum, may have non-zero momentum in finite-depth water through reflected and trapped long waves. This explains how deep water waves acquire momentum as they approach shore. The artificial form of the parameterized forcing from the wind facilitates the thought experiments considered in this paper, as opposed to striving to model more realistic wind forcing scenarios.
</description>
<pubDate>Wed, 26 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162766</guid>
<dc:date>2025-03-26T00:00:00Z</dc:date>
</item>
<item>
<title>SpeakEasy: Enhancing Text-to-Speech Interactions for Expressive Content Creation</title>
<link>https://hdl.handle.net/1721.1/162765</link>
<description>SpeakEasy: Enhancing Text-to-Speech Interactions for Expressive Content Creation
Brade, Stephen; Anderson, Sam; Kumar, Rithesh; Jin, Zeyu; Truong, Anh
Novice content creators often invest significant time recording expressive speech for social media videos. While recent advancements in text-to-speech (TTS) technology can generate highly realistic speech in various languages and accents, many struggle with unintuitive or overly granular TTS interfaces. We propose simplifying TTS generation by allowing users to specify high-level context alongside their script. Our Wizard-of-Oz system, SpeakEasy, leverages user-provided context to inform and influence TTS output, enabling iterative refinement with high-level feedback. This approach was informed by two 8-subject formative studies: one examining content creators’ experiences with TTS, and the other drawing on effective strategies from voice actors. Our evaluation shows that participants using SpeakEasy were more successful in generating performances matching their personal standards, without requiring significantly more effort than leading industry interfaces.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162765</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Giant, non-perturbative tuning of light-matter interaction of embedded quantum dots in semiconducting matrices</title>
<link>https://hdl.handle.net/1721.1/162764</link>
<description>Giant, non-perturbative tuning of light-matter interaction of embedded quantum dots in semiconducting matrices
Wu, Ming-Chung; Hsiao, Kai-Chi; Fu, Chuliang; Lin, Ting-Han; Chang, Yin-Hsuan; Huang, Yu-Ching; Nieh, Mu-Ping; Su, Wei-Fang; Li, Mingda
Embedding quantum dots (QDs) in a solid-state matrix represents a promising hybrid platform that offers great flexibility and tunability. However, the lack of a clear underlying design principle and the presence of a large design space make the design process rely heavily on trial-and-error methods. Here we present a new principle that can drastically tailor the light-matter interaction of the matrix through matrix-mediated QD interactions. We show that conducting matrices like P3HT can mediate non-perturbative inter-QD interactions that lead to qualitatively distinct properties, including enhanced carrier lifetimes and enhanced binding energies with increased QD densities, which cannot be explained by conventional perturbative scattering theories and stand in sharp contrast to independent embedded QDs in an insulating matrix like PMMA. An effective quantum field theory is developed, showing qualitative agreement with experiments. Our study serves as a foundation for the predictive design of advanced hybrid materials aimed at optimizing functionalities.
</description>
<pubDate>Sat, 21 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162764</guid>
<dc:date>2025-06-21T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning used to study risk factors for chronic diseases: A scoping review</title>
<link>https://hdl.handle.net/1721.1/162763</link>
<description>Machine learning used to study risk factors for chronic diseases: A scoping review
Shergill, Mahek; Durant, Steve; Birdi, Sharon; Rabet, Roxana; Ziegler, Carolyn; Ali, Shehzad; Buckeridge, David; Ghassemi, Marzyeh; Gibson, Jennifer; John-Baptiste, Ava; Macklin, Jillian; McCradden, Melissa; McKenzie, Kwame; Naraei, Parisa
Objectives: Machine learning (ML) has received significant attention for its potential to process and learn from vast amounts of data. Our aim was to perform a scoping review to identify studies that used ML to study risk factors for chronic diseases at a population level, notably those that incorporated methods to mitigate algorithmic bias. We focused on ML applications for the most common risk factors for chronic disease: tobacco use, alcohol use, unhealthy eating, physical activity, and psychological stress. Methods: We searched the peer-reviewed, indexed literature using Medline (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews (Ovid), Scopus, ACM Digital Library, INSPEC, and Web of Science’s Science Citation Index, Social Sciences Citation Index, and Emerging Sources Citation Index. Among the included studies, we examined whether bias was considered and identified strategies employed to mitigate bias. Synthesis: The search identified 10,329 studies, and 20 met our inclusion criteria. The studies we identified used ML for a wide range of goals, from prediction of chronic disease development to automating the classification of data to identifying new associations between risk factors and disease. Nine studies (45%) included some discussion of algorithmic bias. Studies that incorporated a broad array of sociodemographic variables did so primarily to improve the performance of a ML model rather than to mitigate potential harms to populations made vulnerable by social and economic policies. Conclusion: This work contributes to our understanding of how ML can be used to advance population and public health.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162763</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>eaSEL: Promoting Social-Emotional Learning and Parent-Child Interaction through AI-Mediated Content Consumption</title>
<link>https://hdl.handle.net/1721.1/162762</link>
<description>eaSEL: Promoting Social-Emotional Learning and Parent-Child Interaction through AI-Mediated Content Consumption
Shen, Jocelyn; King Chen, Jennifer; Findlater, Leah; Dietz Smith, Griffin
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162762</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis Facilities for the HL-LHC White Paper</title>
<link>https://hdl.handle.net/1721.1/162761</link>
<description>Analysis Facilities for the HL-LHC White Paper
Ciangottini, D.; C. Forti, A.; Heinrich, L.; Skidmore, N.; Alpigiani, C.; Aly, M.; Benjamin, D.; Bockelman, B.; Bryant, L.; Catmore, J.
This white paper presents the current status of the R&amp;D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) Software Foundation’s (HSF) Analysis Facilities forum (HSF Analysis Facilities Forum), established in March 2022, the Analysis Ecosystems II workshop (Analysis Ecosystems Workshop II), that took place in May 2022, and the WLCG/HSF pre-CHEP workshop (WLCG–HSF pre-CHEP Workshop), that took place in May 2023. The paper attempts to cover all the aspects of an analysis facility.
</description>
<pubDate>Sun, 13 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162761</guid>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>SymbolFit: Automatic Parametric Modeling with Symbolic Regression</title>
<link>https://hdl.handle.net/1721.1/162760</link>
<description>SymbolFit: Automatic Parametric Modeling with Symbolic Regression
Tsoi, Ho F.; Rankin, Dylan; Caillol, Cecile; Cranmer, Miles; Dasu, Sridhara; Duarte, Javier; Harris, Philip; Lipeles, Elliot; Loncar, Vladimir
We introduce SymbolFit (API: https://github.com/hftsoi/symbolfit ), a framework that automates parametric modeling by using symbolic regression to perform a machine search for functions that fit the data while simultaneously providing uncertainty estimates in a single run. Traditionally, constructing a parametric model to accurately describe binned data has been a manual and iterative process, requiring an adequate functional form to be determined before the fit can be performed. The main challenge arises when the appropriate functional forms cannot be derived from first principles, especially when there is no underlying true closed-form function for the distribution. In this work, we develop a framework that automates and streamlines the process by utilizing symbolic regression, a machine learning technique that explores a vast space of candidate functions without requiring a predefined functional form, because the functional form itself is treated as a trainable parameter, making the process far more efficient and effortless than traditional regression methods. We demonstrate the framework in high-energy physics experiments at the CERN Large Hadron Collider (LHC) using five real proton-proton collision datasets from new physics searches, including background modeling in resonance searches for high-mass dijet, trijet, paired-dijet, diphoton, and dimuon events. We show that our framework can flexibly and efficiently generate a wide range of candidate functions that fit a nontrivial distribution well using a simple fit configuration that varies only by random seed, and that the same fit configuration, which defines a vast function space, can also be applied to distributions of different shapes, whereas achieving a comparable result with traditional methods would have required extensive manual effort.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162760</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Loss remakes you</title>
<link>https://hdl.handle.net/1721.1/162759</link>
<description>Loss remakes you
Edoh, Amah
This article tells the story of my research on Dutch wax cloth, a highly prized textile and cultural artifact in Togo, my home country. I examine the fate of the cloth and of the Togolese women who made it into an object of great significance in the wake of political upheaval starting in the late 1980s, the same upheaval that led to my family’s permanent departure from Togo in 1991. Tracking my trajectory through the research as a Togolese émigrée, I come to see clearly for the first time that the cloth’s story and my own were not only shaped by the same historical forces but that they also traced similar arcs. Told together, the stories weave a tale of belonging, rupture, and of what comes after; a story of how loss remakes us, and how we remake ourselves in the face of loss. Autoethnography emerges as a tool for unearthing the personal agendas that so often guide our choice of research topics as anthropologists. And research on topics that are close to home proves to be as likely to reawaken old wounds as it is to open pathways to some measure of resolution.
</description>
<pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162759</guid>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</item>
<item>
<title>Multidimensional Labeling of Gesture in Communication: the M3D Proposal</title>
<link>https://hdl.handle.net/1721.1/162758</link>
<description>Multidimensional Labeling of Gesture in Communication: the M3D Proposal
Rohrer, Patrick L.; Tütüncübasi, Ulya; Florit-Pons, Júlia; Vilà-Giménez, Ingrid; Esteve-Gibert, Núria; Ren-Mitchell, Ada; Shattuck-Hufnagel, Stefanie; Prieto, Pilar
Communication is multimodal in that speakers use not only their voices, but also co-speech gestures to communicate. Recent insights suggest that gestural behavior has a strong association with prosodic structure and that a single gesture can communicate various semantic and pragmatic meanings. This highlights the importance of developing a comprehensive, flexible, and transparent approach to gesture annotation that accounts for multiple dimensions of gesture, including a gesture’s form, prosodic properties, and semantic and pragmatic contributions. To address this need for an increasingly dimensionalized approach to multimodal data annotation, the main goal of this paper is to present and describe a novel labeling system for manual gestures. The MultiModal MultiDimensional (M3D) system consists of an open access package that has been developed in coordination with five different labs working on gesture and its interaction with speech. The package includes a set of reliable annotation guidelines, a validated training program, and two annotated audiovisual corpora that represent over 60 minutes of lecture-style speech.
</description>
<pubDate>Thu, 26 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162758</guid>
<dc:date>2025-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Why axis inversion? Optimizing interactions between users, interfaces, and visual displays in 3D environments</title>
<link>https://hdl.handle.net/1721.1/162757</link>
<description>Why axis inversion? Optimizing interactions between users, interfaces, and visual displays in 3D environments
Corbett, Jennifer E.; Munneke, Jaap
From video games to laparoscopic surgeries, differences in users’ abilities to adapt to new control schemes can have significant, even deadly impacts on performance. Starting with the question of why some video game players invert the y-axis on their console controllers, this work aims to provide a foundation for future investigations of how control schemes can significantly impact performance. We argue that fragmented research across disciplines hinders a unified understanding of how the spatial relationships between users, interfaces, and visual displays affect performance. Therefore, we begin with a multidisciplinary literature synthesis, clarifying existing findings, and identifying methodological inconsistencies that contribute to conflicting results. We then explore the relationship between key behavioral and cognitive factors and y-axis inversion preference in a group of experienced 3rd person gamers. Based on these preliminary results, we propose a “general purpose” framework to systematically investigate how control inversion and visual input influence perception and performance across various movement goals. We demonstrate how this framework can be used to evaluate performance in the context of a common and challenging laparoscopic procedure, and how it can be generalized to assess and predict sensorimotor compatibility effects across a wide variety of real-world situations.
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162757</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Realism Drives Interpersonal Reciprocity but Yields to AI-Assisted Egocentrism in a Coordination Experiment</title>
<link>https://hdl.handle.net/1721.1/162756</link>
<description>Realism Drives Interpersonal Reciprocity but Yields to AI-Assisted Egocentrism in a Coordination Experiment
Shirado, Hirokazu; Shimizu, Kye; Christakis, Nicholas; Kasahara, Shunichi
Virtual reality technologies that enhance realism and artificial intelligence (AI) systems that assist human behavior are increasingly interwoven in social applications. However, how these technologies might jointly influence interpersonal coordination remains unclear. We conducted an experiment with 240 participants in 120 pairs who interacted through remote-controlled robot cars in a physical space or virtual cars in a digital space, with or without autosteering assistance, using the chicken game, an established model of interpersonal coordination. We find that both realism and AI assistance help improve user performance but through opposing mechanisms. Real-world contexts enhanced communication, fostering reciprocal actions and collective benefits. In contrast, autosteering assistance diminished the need for interpersonal coordination, shifting participants’ focus towards self-interest. Notably, when combined, the egocentric effects of autosteering assistance outweighed the prosocial effects of realism. The design of HCI systems that involve social coordination will, we believe, need to take such effects into account.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162756</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>InteRecon: Towards Reconstructing Interactivity of Personal Memorable Items in Mixed Reality</title>
<link>https://hdl.handle.net/1721.1/162755</link>
<description>InteRecon: Towards Reconstructing Interactivity of Personal Memorable Items in Mixed Reality
Li, Zisu; Li, Jiawei; Xiong, Zeyu; Zhang, Shumeng; Faruqi, Faraz; Mueller, Stefanie; Liang, Chen; Ma, Xiaojuan; Fan, Mingming
Digital capturing of memorable personal items is a key way to archive personal memories. Although current digitization methods (e.g., photos, videos, 3D scanning) can replicate the physical appearance of an item, they often cannot preserve its real-world interactivity. We present Interactive Digital Item (IDI), a concept of reconstructing both the physical appearance and, more importantly, the interactivity of an item. We first conducted a formative study to understand users’ expectations of IDI, identifying key physical interactivity features, including geometry, interfaces, and embedded content of items. Informed by these findings, we developed InteRecon, an AR prototype enabling personal reconstruction functions for IDI creation. An exploratory study was conducted to assess the feasibility of using InteRecon and explore the potential of IDI to enrich personal memory archives. Results show that InteRecon is feasible for IDI creation, and the concept of IDI brings new opportunities for augmenting personal memory archives.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162755</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic refinement of experimental practices to improve repeatability in flow battery cycling</title>
<link>https://hdl.handle.net/1721.1/162754</link>
<description>Systematic refinement of experimental practices to improve repeatability in flow battery cycling
O’Connor, Hugh; Quinn, Alexander H.; Brushett, Fikile R.; Istrate, Oana; Glover, Stephen; Bailey, Josh J.; Nockemann, Peter
Flow batteries represent one of the leading options for large-scale, long-duration energy storage. In recent years, research into this technology has accelerated, with numerous innovative studies focusing on electrolytes, membranes, and electrode materials. Despite this, there is presently no clear set of testing protocols followed during full-cell testing of flow batteries, and the experimental techniques detailed in published literature are often insufficient to reproduce results. Furthermore, testing to quantify the repeatability of experiments is not often reported. In this work, various aspects of an experimental procedure developed from the peer-reviewed literature are refined, with voltage efficiency, coulombic efficiency, energy efficiency, and electrolyte utilization used as indicators of repeatability. A set of improved testing protocols is presented for researchers to consider when conducting charge–discharge testing, and additional factors to be reported and studied in the context of repeatability are suggested.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162754</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>More with less: topology optimization strategies for structural glass design</title>
<link>https://hdl.handle.net/1721.1/162676</link>
<description>More with less: topology optimization strategies for structural glass design
Jewett, Jackson L.; Koniari, Anna M.; Andriotis, Charalampos P.; Oikonomopoulou, Faidra; Bristogianni, Telesilla; Carstensen, Josephine V.
Advances in structural glass have enabled a new paradigm in expressive and transparent architecture. Cast glass can further extend the possibilities of structural glass by allowing for more complex and sophisticated shapes than the current planar geometries of structural float glass. However, the use of cast glass is currently limited because of the lengthy annealing process, making massive component sizes impractical to fabricate. Topology optimization (TO) has been proposed as a solution to this problem, as it is known to generate structurally efficient designs with a low volume of material. If tailored appropriately, TO can reduce component sizes and thereby diminish the total annealing time needed, while intelligently placing material in the areas where it will be utilized most effectively. For TO of glass to be successful, algorithms must properly capture glass’s specific material behavior. This research proposes a suite of TO algorithmic frameworks that design specifically for structural glass. These algorithms are demonstrated in a 2D design space, and the resulting geometries are fabricated using cut float glass and tested for experimental comparison on a 4-point bending load case. The results of these experiments provide valuable insights into the development of TO for structural glass, and help inform future research in TO of large-scale cast glass structures.
</description>
<pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162676</guid>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Study of an effective machine learning-integrated science curriculum for high school youth in an informal learning setting</title>
<link>https://hdl.handle.net/1721.1/162675</link>
<description>Study of an effective machine learning-integrated science curriculum for high school youth in an informal learning setting
Rabinowitz, Gabrielle; Moore, Katherine S.; Ali, Safinah; Weckel, Mark; Lee, Irene; Gupta, Preeti; Chaffee, Rachel
Purpose: This study evaluates the effectiveness of a machine learning (ML) integrated science curriculum implemented within the Science Research Mentorship Program (SRMP) for high school youth at the American Museum of Natural History (AMNH) over 2 years. The 4-week curriculum focused on ML knowledge gain, skill development, and self-efficacy, particularly for under-represented youth in STEM. Background: ML is increasingly prevalent in STEM fields, making early exposure to ML methods and artificial intelligence (AI) literacy crucial for youth pursuing STEM careers. However, STEM fields, particularly those focused on AI research and development, suffer from a lack of diversity. Learning experiences that support the participation of under-represented groups in STEM and ML are essential to addressing this gap. Results: Participant learning was assessed through pre- and post-surveys measuring ML knowledge, skills, and self-efficacy. Results from the implementation of the curriculum show that participants gained understanding of ML knowledge and skills (p &lt; 0.001, d = 1.083) and self-efficacy in learning ML concepts (p = 0.004, d = 0.676). On average, participants who identified as female and non-white showed greater learning gains than their white male peers (ML knowledge: p &lt; 0.001, d = 1.191; self-efficacy: p = 0.006, d = 0.631), decreasing gaps in ML knowledge, skills, and self-efficacy identified in pre-survey scores. Conclusions: The ML-integrated curriculum effectively enhances students’ understanding and confidence in ML concepts, especially for under-represented groups in STEM, and provides a model for future ML education initiatives in informal science settings. We suggest that policy makers and school leaders take into account that high school age youth can learn ML concepts through integrated curricula while maintaining an awareness that curriculum effectiveness varies across demographic groups.
</description>
<pubDate>Sat, 19 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162675</guid>
<dc:date>2025-04-19T00:00:00Z</dc:date>
</item>
<item>
<title>Ancestry inferences from DNA testing results: The problem of sociogenetic essentialism</title>
<link>https://hdl.handle.net/1721.1/162674</link>
<description>Ancestry inferences from DNA testing results: The problem of sociogenetic essentialism
Kampourakis, Kostas; Fux, Michal
Millions of people have now taken DNA ancestry tests, with many of them looking for information about their origins or even their ethnic identity. However, all these tests can do is provide a probabilistic estimate of a person’s similarity to a reference group. This estimate is often based on research in population genetics that studies human genetic variation by identifying ancestry informative markers, that is, DNA markers that are found more often in one population than in others. While these markers are not the criteria for membership in a group, they can serve as indicia for it. However, a confusion of indicia for criteria can emerge, supported by a particular form of intuitive thinking: psychological essentialism. It consists of a set of interrelated beliefs: (a) particular categories distinguish between fundamentally different kinds of people; (b) the boundaries that separate these categories are strict and absolute; (c) these categories have internal homogeneity and differ fundamentally from one another; (d) all this is due to internal essences that make the members of each category what they are. When our genome or DNA is perceived to be these essences, and when this kind of thinking is applied to social categories such as race and ethnicity, a view that we call “sociogenetic essentialism”, it can be highly problematic, as it can form the basis for discrimination and exclusion. We argue that the use of and reference to ancestry informative markers, unless clearly explained, may be misinterpreted, due to a sociogenetic essentialist bias, as confirming the genetic basis of social groups.
</description>
<pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162674</guid>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>RiD-kit: software package designed to do enhanced sampling using reinforced dynamics</title>
<link>https://hdl.handle.net/1721.1/162673</link>
<description>RiD-kit: software package designed to do enhanced sampling using reinforced dynamics
Fan, Jiahao; Wang, Yanze; Wang, Dongdong; Zhang, Linfeng
Background Developing efficient methods to accelerate molecular dynamics is a central theme in the field of molecular simulation. One category of these methods is collective-variable-based methods, which rely on predefined collective variables. The difficulty of selecting a few important collective variables makes these methods hard to apply to large systems. Method Here we present RiD-kit, which can utilize a large number of collective variables for enhanced sampling. The method can be applied to various kinds of systems, including biomolecules, chemical reactions, and materials. In this protocol, we guide users through all phases of the RiD-kit workflow, from preparing the input files and setting the simulation parameters to analyzing the results. Discussion The RiD-kit workflow provides an efficient and user-friendly command-line tool that can submit jobs to various kinds of platforms, including high-performance computing platforms, cloud servers, and local machines.
</description>
<pubDate>Tue, 24 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162673</guid>
<dc:date>2025-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing VGOS observations using an SNR-based scheduling approach</title>
<link>https://hdl.handle.net/1721.1/162671</link>
<description>Optimizing VGOS observations using an SNR-based scheduling approach
Schartner, Matthias; Petrachenko, Bill; Titus, Mike; Krásná, Hana; Barrett, John; Hoak, Dan; Mondal, Dhiman; Xu, Ming H.; Soja, Benedikt
The geodetic and astrometric very long baseline interferometry (VLBI) community is in the process of upgrading its existing infrastructure with the VLBI Global Observing System (VGOS). The primary objective of VGOS is to substantially boost the number of scans per hour for enhanced parameter estimation. However, the current observing strategy results in fewer scans than anticipated. During 2022, six 24-h VGOS Research and Development (R&amp;D) sessions were conducted to demonstrate a proof of concept aimed at addressing this shortcoming. The new observation strategy centers on a signal-to-noise-ratio (SNR)-based scheduling approach combined with the elimination of overhead times present in existing VGOS sessions. Two SNR-based scheduling approaches were tested during these sessions: one utilizing inter-/extrapolation of existing S/X source flux density models and another based on a newly derived source flux density catalog at VGOS frequencies. Both approaches proved effective, leading to a 2.3-fold increase in the number of scheduled scans per station and a 2.6-fold increase in the number of observations per station while maintaining a high observation success rate of approximately 90 % to 95 %. Consequently, both strategies achieved the main objective of these sessions by successfully increasing the number of scans per hour. The strategies described in this work can be easily applied to operational VGOS observations. Besides outlining and discussing the observation strategy, we provide insight into the resulting signal-to-noise ratios and discuss the impact on the precision of the estimated geodetic parameters. Monte Carlo simulations predicted a roughly 50 % increase in geodetic precision compared to operational VGOS sessions. The analysis confirmed that the formal errors in estimated station coordinates were reduced by 40 % to 50 %. In addition, Earth orientation parameters showed significant improvement, with a 40 % to 50 % reduction in formal errors.
</description>
<pubDate>Wed, 07 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162671</guid>
<dc:date>2025-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves</title>
<link>https://hdl.handle.net/1721.1/162670</link>
<description>Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves
Fang, Cathy Mengying; Chua, Phoebe; Chan, Samantha; Leong, Joanne; Bao, Andria; Maes, Pattie
Emotions, shaped by past experiences, significantly influence decision-making and goal pursuit. Traditional cognitive-behavioral techniques for personal development rely on mental imagery to envision ideal selves, but may be less effective for individuals who struggle with visualization. This paper introduces Emotional Self-Voice (ESV), a novel system combining emotionally expressive language models and voice cloning technologies to render customized responses in the user’s own voice. We investigate the potential of ESV to nudge individuals towards their ideal selves in a study with 60 participants. Across all three conditions (ESV, text-only, and mental imagination), we observed an increase in resilience, confidence, motivation, and goal commitment, and the ESV condition was perceived as uniquely engaging and personalized. We discuss the implications of designing generated self-voice systems as a personalized behavioral intervention for different scenarios.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162670</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time</title>
<link>https://hdl.handle.net/1721.1/162669</link>
<description>ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time
Xiao, Xiao; Noh, Hayoun; Lefevre, Adrien; Li, Lucy; McKee, Holly; Algargoosh, Alaa; Ishii, Hiroshi
This paper examines how strategies for simulating social presence across distance can evoke a sense of presence and facilitate illusory interactions across time. We conducted a mixed-methods study with 28 participants, exploring their emotional experience of interacting with decade-old recorded piano performances on MirrorFugue—a player piano enhanced with life-sized projections of the pianist’s hands and body, creating the illusion of a virtual reflection playing the instrument. Data were collected via wearable sensors, questionnaires, and interviews.&#13;
Results showed that participants felt a strong presence of past pianists, with some experiencing the illusion of two-way communication and an overall increase in connection. The emotional experience was significantly influenced by the participant’s relationship with the recorded pianist and the pianist’s vital status. These findings suggest that telepresence technologies can foster connections with the past, offering spaces for memory recall, self-reflection, and a sense of “time travel.”
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162669</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Why Is the Monsoon Coastal Upwelling Signal Subdued in the Bay of Bengal?</title>
<link>https://hdl.handle.net/1721.1/162668</link>
<description>Why Is the Monsoon Coastal Upwelling Signal Subdued in the Bay of Bengal?
Abbott, Kathleen; Mahadevan, Amala
The Indian summer monsoon, which brings heavy precipitation to the densely populated Indian subcontinent, plays an important role in the development of a coastal upwelling circulation that brings colder, nutrient‐rich water to the surface. Although the western shores of the Arabian Sea (AS) and Bay of Bengal (BoB) both experience upwelling‐favorable winds during June‐August, only the AS coastline exhibits significant surface cooling. In contrast, the BoB remains warm and its upwelling is characterized by a transient, weak sea surface temperature (SST) response confined to the east coast of India. A weaker mean alongshore wind stress and coastal circulation do not sufficiently explain the lack of SST response in the BoB. Here, we examine other reasons for the differing behavior of these two coastal margins. Firstly, we show that while winds are persistently upwelling‐favorable in the western AS, intraseasonal wind variability in the BoB induces intermittent upwelling. Secondly, the vertical density stratification is controlled by salinity in the BoB, and upwelled waters are saltier, but only marginally cooler than surface waters. By contrast, the density in the AS is temperature‐controlled, and upwelled waters are substantially colder than the surface. Additionally, satellite‐based SST in the BoB does not adequately resolve the upwelling signal. Using a numerical model, we find that salinity stratification has a greater influence on the mean SST, while wind frequency alters near‐shore SST and its temporal variability. This work has implications for the sensitivity of upwelling regions and their response to wind stress and stratification in a warming climate.
</description>
<pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162668</guid>
<dc:date>2024-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Autonomous observations enhance our ability to observe the biological carbon pump across diverse carbon export regimes</title>
<link>https://hdl.handle.net/1721.1/162667</link>
<description>Autonomous observations enhance our ability to observe the biological carbon pump across diverse carbon export regimes
Traylor, Shawnee; Nicholson, David P; Clevenger, Samantha J; Buesseler, Ken O; D'Asaro, Eric; Lee, Craig M
The expansion of autonomous observation platforms offers vast opportunities for analyzing ocean ecosystems and their role in carbon export. As part of the EXport Processes in the Ocean from RemoTe Sensing campaign, we autonomously measured the productivity regimes in two contrasting end-member ecosystem states. The first campaign occurred in the subpolar North Pacific near Ocean Station Papa (Site 1), characterized by iron limitation and a highly regenerative regime. The second captured a springtime bloom in the North Atlantic (Site 2), which typically drives efficient export of productivity. Using a combination of floats and gliders carrying biogeochemical sensors, we quantified gross primary productivity, net community production, and organic carbon export potential (fCorg) to assess biological carbon pump strength. Site 2 demonstrated higher cruise-period productivity, with roughly 5× the gross primary productivity and 13× the euphotic zone net community production seen at Site 1. Greater export efficiency at Site 2 was reflected in numerous indices, such as the ratio of new production to net primary productivity (ef-ratio; Site 1: 0.33; Site 2: 0.73), the ratio of sinking particulate organic carbon to net primary productivity (ez-ratio; Site 1: 0.24; Site 2: 0.69), and mean daily fCorg (Site 1: 3.4 ± 0.7; Site 2: 20.3 ± 2.3 mmol C m−2 d−1). Together with particulate organic carbon flux derived from thorium-234 measurements, we infer that observed low net community production was almost entirely routed to sinking particulate organic carbon at Site 1, while the much higher net community production at Site 2 resulted in near-equal proportions routed to dissolved organic carbon production and sinking particulate organic carbon.
</description>
<pubDate>Thu, 28 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162667</guid>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Transitive Array: An Efficient GEMM Accelerator with Result Reuse</title>
<link>https://hdl.handle.net/1721.1/162666</link>
<description>Transitive Array: An Efficient GEMM Accelerator with Result Reuse
Guo, Cong; Wei, Chiyue; Tang, Jiaming; Duan, Bowen; Han, Song; Li, Hai; Chen, Yiran
Deep Neural Networks (DNNs) and Large Language Models (LLMs) have revolutionized artificial intelligence, yet their deployment faces significant memory and computational challenges, especially in resource-constrained environments. Quantization techniques have mitigated some of these issues by reducing data precision, primarily focusing on General Matrix Multiplication (GEMM). This study introduces a novel sparsity paradigm, transitive sparsity, which leverages the reuse of previously computed results to substantially reduce computational overhead in GEMM operations. By representing transitive relations using a directed acyclic graph, we develop an efficient strategy for determining optimal execution orders, thereby overcoming inherent challenges related to execution dependencies and parallelism. Building on this foundation, we present the Transitive Array, a multiplication-free accelerator designed to exploit transitive sparsity in GEMM. Our architecture effectively balances computational workloads across multiple parallel lanes, ensuring high efficiency and optimal resource utilization. Comprehensive evaluations demonstrate that the Transitive Array achieves approximately 7.46× and 3.97× speedup and 2.31× and 1.65× energy reduction compared to state-of-the-art accelerators such as Olive and BitVert while maintaining comparable model accuracy on LLaMA models.
ISCA ’25, Tokyo, Japan
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162666</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Sonora: Human-AI Co-Creation of 3D Audio Worlds and its Impact on Anxiety and Cognitive Load</title>
<link>https://hdl.handle.net/1721.1/162665</link>
<description>Sonora: Human-AI Co-Creation of 3D Audio Worlds and its Impact on Anxiety and Cognitive Load
De La Torre, Fernanda; Hernandez, Javier; Wilson, Andrew; Amores, Judith
Soundscapes are widely used for relaxation, but their potential for personalized, navigable experiences remains under-explored. To address this, we developed Sonora, an AI tool that enables real-time generation of synthetic, spatialized soundscapes, allowing users to navigate immersive auditory environments and customize soundscapes using voice commands. Sonora’s architecture integrates audio diffusion models and LLMs within Unity3D. A between-subjects study with 32 participants investigated its effects on anxiety and user experience, compared to a control condition involving passive listening to a soundscape. Participants who interacted with Sonora reported higher entertainment than the control group. A positive correlation was found between state anxiety and user requests for Sonora, suggesting anxious users engaged more. Participants with moderate to high trait anxiety experienced significant reductions in state anxiety across both conditions, with no significant difference in cognitive load. Our findings highlight Sonora’s potential to promote relaxation, emphasizing the value of personalized experiences for mental health.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162665</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion</title>
<link>https://hdl.handle.net/1721.1/162664</link>
<description>Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion
Nasr-Esfahany, Arash; Alizadeh, Mohammad; Lee, Victor; Alam, Hanna; Coon, Brett; Culler, David; Dadu, Vidushi; Dixon, Martin; Levy, Henry; Pandey, Santosh; Ranganathan, Parthasarathy; Yazdanbakhsh, Amir
Cycle-level simulators such as gem5 are widely used in microarchitecture design, but they are prohibitively slow for large-scale design space explorations. We present Concorde, a new methodology for learning fast and accurate performance models of microarchitectures. Unlike existing simulators and learning approaches that emulate each instruction, Concorde predicts the behavior of a program based on compact performance distributions that capture the impact of different microarchitectural components. It derives these performance distributions using simple analytical models that estimate bounds on performance induced by each microarchitectural component, providing a simple yet rich representation of a program’s performance characteristics across a large space of microarchitectural parameters. Experiments show that Concorde is more than five orders of magnitude faster than a reference cycle-level simulator, with about 2% average Cycles-Per-Instruction (CPI) prediction error across a range of SPEC, open-source, and proprietary benchmarks. This enables rapid design-space exploration and performance sensitivity analyses that are currently infeasible, e.g., in about an hour, we conducted a first-of-its-kind fine-grained performance attribution to different microarchitectural components across a diverse set of programs, requiring nearly 150 million CPI evaluations.
ISCA ’25, Tokyo, Japan
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162664</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>What's in a Query: Polarity-Aware Distribution-Based Fair Ranking</title>
<link>https://hdl.handle.net/1721.1/162663</link>
<description>What's in a Query: Polarity-Aware Distribution-Based Fair Ranking
Balagopalan, Aparna; Wang, Kai; Salaudeen, Olawale; Biega, Asia; Ghassemi, Marzyeh
Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure or attention in a variety of safety-critical settings. Thus, it is important to ensure that such rankings are fair. Under the goal of equal opportunity, attention allocated to an individual on a ranking interface should be proportional to their relevance across search queries. In this work, we examine amortized fair ranking -- where relevance and attention are cumulated over a sequence of user queries to make fair ranking more feasible in practice. Unlike prior methods that operate on expected amortized attention for each individual, we define new divergence-based measures for attention distribution-based fairness in ranking (DistFaiR), characterizing unfairness as the divergence between the distribution of attention and relevance corresponding to an individual over time. This allows us to propose new definitions of unfairness, which are more reliable at test time. Second, we prove that group fairness is upper-bounded by individual fairness under this definition for a useful class of divergence measures, and experimentally show that maximizing individual fairness through an integer linear programming-based optimization is often beneficial to group fairness. Lastly, we find that prior research in amortized fair ranking ignores critical information about queries, potentially leading to a fairwashing risk in practice by making rankings appear more fair than they actually are.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162663</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>The Computational Advantage of MIP* Vanishes in the Presence of Noise</title>
<link>https://hdl.handle.net/1721.1/162662</link>
<description>The Computational Advantage of MIP* Vanishes in the Presence of Noise
Dong, Yangjing; Fu, Honghao; Natarajan, Anand; Qin, Minglong; Xu, Haochen; Yao, Penghui
Quantum multiprover interactive proof systems with entanglement (MIP*) are much more powerful than their classical counterpart MIP (Babai et al. '91, Ji et al. '20): while MIP = NEXP, the quantum class MIP* is equal to RE, a class that includes the halting problem. This is because the provers in MIP* can share unbounded quantum entanglement. However, recent works of Qin and Yao '21 and '23 have shown that this advantage is significantly reduced if the provers' shared state contains noise. This paper attempts to exactly characterize the effect of noise on the computational power of quantum multiprover interactive proof systems. We investigate the quantum two-prover one-round interactive system MIP*[poly, O(1)], where the verifier sends polynomially many bits to the provers and the provers send back a constant number of bits. We show that noise completely destroys the computational advantage given by shared entanglement in this model. Specifically, we show that if the provers are allowed to share arbitrarily many noisy EPR states, where each EPR state is affected by an arbitrarily small constant amount of noise, the resulting complexity class is equivalent to NEXP = MIP. This improves significantly on the previous best-known bound of NEEEXP (nondeterministic triply exponential time) by Qin and Yao '21. We also show that this collapse in power is due to the noise, rather than the O(1) answer size, by showing that allowing for noiseless EPR states gives the class the full power of RE = MIP*[poly, poly]. Along the way, we develop two technical tools of independent interest. First, we give a new, deterministic tester for the positivity of an exponentially large matrix, provided it has a low-degree Fourier decomposition in terms of Pauli matrices. Second, we develop a new invariance principle for smooth matrix functions having bounded third-order Fréchet derivatives or which are Lipschitz continuous.
</description>
<pubDate>Sat, 16 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162662</guid>
<dc:date>2025-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias</title>
<link>https://hdl.handle.net/1721.1/162661</link>
<description>Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias
Perets, Oriel; Stagno, Emanuela; Ben Yehuda, Eyal; McNichol, Megan; Celi, Leo; Rappoport, Nadav; Dorotic, Matilda
Biases inherent in electronic health records (EHRs), a common data source for training medical AI models, may exacerbate health inequities and hinder the adoption of ethical, responsible AI in healthcare. These biases originate from various sources, including implicit clinician biases, data collection and labeling practices, medical devices, and tools used for data processing. Such biases undermine data reliability, influence clinical decisions, and worsen healthcare disparities. When EHR data is used to develop data-driven solutions, biases can further propagate, creating systems that perpetuate inequities. This scoping review categorizes the primary sources of bias in EHRs. We conducted a literature search on PubMed and Web of Science (January 19, 2023) for English-language studies published between 2016 and 2023, following the PRISMA methodology. From 430 initial papers, 27 duplicates were removed, and 403 studies were screened for eligibility. After title, abstract, and full-text reviews, 116 articles were included in the final analysis.
Existing studies often focus on isolated biases in EHRs but lack a comprehensive taxonomy. To address this gap, we propose a systematic classification framework encompassing six key sources of bias: (a) biases from prior clinical trials; (b) data-related biases, such as missing or incomplete information; (c) implicit clinician bias; (d) referral and admission bias; (e) diagnosis or risk disparity biases; and (f) biases in medical devices and algorithms. This taxonomy, outlined in Table 1, provides a foundation for evaluating and addressing these issues.
While machine learning has transformative potential in healthcare, its effectiveness depends on the integrity of its inputs. Current evidence predominantly addresses data-related biases, with less attention to human or device-related biases, which are often anecdotal or underexplored. For example, racial biases in EHRs are well-documented, but gender-related, sexual orientation, and socially induced biases remain less studied. Compounding biases from these diverse sources can significantly impact AI recommendations, clinical decisions, and patient outcomes. Our review underscores the prevalence of data, human, and machine biases in healthcare and their role in amplifying disparities. To mitigate these challenges, we recommend adopting a “bias-in-mind” approach when designing data-driven solutions, along with developing safeguards and generating more empirical evidence on bias impacts. This holistic understanding is essential for ensuring equitable and reliable AI applications in healthcare.
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162661</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing and Categorizing Emerging Cybersecurity Regulations</title>
<link>https://hdl.handle.net/1721.1/162660</link>
<description>Analyzing and Categorizing Emerging Cybersecurity Regulations
Marotta, Angelica; Madnick, Stuart
As cyber-attacks become more frequent, sophisticated, and impactful, governments worldwide are responding by introducing or proposing new cybersecurity regulations. This paper examines over 170 recent regulations and trends in cybersecurity across various regions, including the United States, Europe, and beyond. It identifies 17 key features common to many of these regulations, which we have grouped into 5 categories, analyzes observed patterns, and proposes areas for improvement. This paper's primary objective is to contribute significantly to the cybersecurity compliance domain by helping researchers understand the structure of these regulations and helping organizations assess and mitigate their cyber risk within an increasingly complex and regulated cybersecurity environment. Our findings provide valuable direction both to those trying to navigate the flood of new cybersecurity regulations and to the governments enacting them.
</description>
<pubDate>Fri, 08 Sep 2028 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162660</guid>
<dc:date>2028-09-08T00:00:00Z</dc:date>
</item>
<item>
<title>From Tech Lash to Tech Fash: Strategic Reflections on a Decade of Collective Organizing in Computing</title>
<link>https://hdl.handle.net/1721.1/162659</link>
<description>From Tech Lash to Tech Fash: Strategic Reflections on a Decade of Collective Organizing in Computing
Huber, Linda; Reynolds-Cuéllar, Pedro; DeVrio, Alicia; Raihan, Jensine; Sum, Cella; Dombrowski, Lynn; Zhang, Justine; Becker, Christoph; Irani, Lilly; Krafft, P M; Hughes, Margaret
Computing is a field plagued with presentism, oriented towards the new in ways that limit our design and research practices, as well as our capacity to understand and collectively respond to emerging crises. To improve our sensemaking and strategizing about today’s crises, this workshop explores what Tamara Kneese has deemed the last decade’s shift from “techlash” to “tech fash.” What have we learned from the era of misinformation and bias, of “surveillance capitalism” and tech worker organizing, that can inform our struggle against the increasing power of a techno-fascist oligarchy? We will also look towards previous generations of computing professionals and activists, who likewise sought to address the harms of emerging automated systems and the complicity of computing within violent, imperialist projects. This workshop will create space for participants to explore these questions collectively, bridging past and present moments in an effort to devise strategies moving forward.
</description>
<pubDate>Sat, 30 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162659</guid>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Using experimental data in computationally guided rational design of inorganic materials with machine learning</title>
<link>https://hdl.handle.net/1721.1/162658</link>
<description>Using experimental data in computationally guided rational design of inorganic materials with machine learning
Kulik, Heather J.
While the impact of machine learning (ML) has been felt everywhere, its effect has been most transformative where large, high-quality datasets are available. For promising materials spaces, such as transition metal coordination complexes and metal–organic frameworks, the large chemical diversity has not yet been matched by similarly large datasets, and computational datasets (e.g., from density functional theory) may not be predictive. Extraction of experimental data from the literature represents an alternative approach to the data-driven design of materials. This perspective will describe efforts in (i) extracting experimental data; (ii) associating extracted data with known chemical structures; (iii) leveraging data in ML and screening; (iv) designing materials with enriched stability; and (v) using experimental data to improve high-throughput workflows. I will summarize some of the outstanding challenges and opportunities for data enrichment with high-throughput experimentation and large language models.
</description>
<pubDate>Tue, 08 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162658</guid>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Reduced-Order Modeling for Physical Simulation: From the Classical to the Neural</title>
<link>https://hdl.handle.net/1721.1/162657</link>
<description>Reduced-Order Modeling for Physical Simulation: From the Classical to the Neural
Levin, David IW; Chen, Peter Yichen; Grinspun, Eitan
This workshop aims to explore the evolution of subspace methods in physical simulation, tracing their origins from classical engineering formulations to cutting-edge neural techniques. By gathering leading researchers, students, and practitioners, the session will serve as a platform for cross-disciplinary dialogue, education, and community building around model reduction techniques in graphics and simulation.
SIGGRAPH Frontiers ’25, Vancouver, BC, Canada
</description>
<pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162657</guid>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>Drawing and Sketching: Art, Psychology, and Computer Graphics</title>
<link>https://hdl.handle.net/1721.1/162656</link>
<description>Drawing and Sketching: Art, Psychology, and Computer Graphics
Vinker, Yael; Tang, Mia; Hertzmann, Aaron; Fan, Judith; Agrawala, Maneesh; Chandra, Kartik; Fu, Hongbo; Schaldenbrand, Peter
Sketching is a fundamental form of expression that supports visual thinking, conceptual exploration, and communication across cultures, generations, and disciplines [Fan et al. 2023; Goel 1995; Hertzmann 2021; Tversky 2002; 2011; Tversky et al. 2003]. Whether through quick marks or detailed renderings, it externalizes ideas into tangible visual form, serving as both a creative act and a cognitive tool. For example, designers use sketches to explore new ideas [Goldschmidt 1992; Tversky et al. 2003], scientists employ them to formulate problems [Kaiser 2019; Nasim 2019], and children engage in sketching to learn and express themselves [Fiorella and Kuhlmann 2020; Forbus et al. 2011]. This central role has made drawing and sketching a long-standing topic of interest in computer graphics, computer vision, and machine learning [Bénard and Hertzmann 2018; Berger et al. 2013; Canny 1986; DeCarlo et al. 2003; Ha and Eck 2017; Hertzmann 2003; Judd et al. 2007; Vinker et al. 2022; Winnemöller et al. 2012; Xie and Tu 2017; Xu et al. 2020].
SIGGRAPH Frontiers ’25, Vancouver, BC, Canada
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162656</guid>
</item>
<item>
<title>Towards Interoperability: Pursuing an ontology for data exchange between deliberative democratic platforms</title>
<link>https://hdl.handle.net/1721.1/162655</link>
<description>Towards Interoperability: Pursuing an ontology for data exchange between deliberative democratic platforms
Hughes, Margaret; DeSota, Elianna; Victor, Matthew; Lynn, Stuart; Stormonth-Darling, John; Barry, Liz
In response to the fragmented state of civic engagement tools and the urgent challenges facing democratic systems, this paper introduces a shared, contributor-driven ontology to connect diverse civic tech platforms, emerging from the work of the Interoperable Deliberative Tool cohort at Metagov. By integrating platforms like Voice to Vision, Assemblis, and Decidim, we enable the flow of deliberative data across contexts, supporting more cohesive decision-making. This approach helps bridge gaps between input, analysis, and action, enhancing democratic resilience in crisis moments. Through our work, we demonstrate how interoperability can strengthen civic engagement and provide a foundation for more responsive, collaborative governance.
AAR Adjunct 2025, Aarhus N, Denmark
</description>
<pubDate>Sat, 30 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162655</guid>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</item>
<item>
<title>Calorimetric wire detector for measurement of atomic hydrogen beams</title>
<link>https://hdl.handle.net/1721.1/162654</link>
<description>Calorimetric wire detector for measurement of atomic hydrogen beams
Astaschov, M.; Bhagvati, S.; Böser, S.; Brandsema, M. J.; Cabral, R.; Claessens, C.; de Viveiros, L.; Enomoto, S.; Fenner, D.; Fertl, M.; Formaggio, J. A.; Foust, B. T.; Gaison, J. K.; Harmston, P.; Heeger, K. M.; Hüneborn, M. B.; Huyan, X.; Jones, A. M.; Jones, B. J. P.; Karim, E.
A calorimetric detector for minimally disruptive measurements of atomic hydrogen beams is described. The calorimeter measures heat released by the recombination of hydrogen atoms into molecules on a thin wire. As a demonstration, the angular distribution of a beam with a peak intensity of ≈10¹⁶ atoms/(cm² s) is measured by translating the wire across the beam. The data agree well with an analytic model of the beam from the thermal hydrogen atom source. Using the beam shape model, the relative intensity of the beam can be determined to 5% precision or better at any angle.
</description>
<pubDate>Mon, 26 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162654</guid>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal transport for generating transition states in chemical reactions</title>
<link>https://hdl.handle.net/1721.1/162652</link>
<description>Optimal transport for generating transition states in chemical reactions
Duan, Chenru; Liu, Guan-Horng; Du, Yuanqi; Chen, Tianrong; Zhao, Qiyuan; Jia, Haojun; Gomes, Carla P; Theodorou, Evangelos A; Kulik, Heather J
Transition states (TSs) are transient structures that are key to understanding reaction mechanisms and designing catalysts but challenging to capture in experiments. Many optimization algorithms have been developed to search for TSs computationally. Yet, the cost of these algorithms driven by quantum chemistry methods (usually density functional theory) is still high, posing challenges for their applications in building large reaction networks for reaction exploration. Here we developed React-OT, an optimal transport approach for generating unique TS structures from reactants and products. React-OT generates highly accurate TS structures with a median structural root mean square deviation of 0.053 Å and median barrier height error of 1.06 kcal mol⁻¹, requiring only 0.4 s per reaction. The root mean square deviation and barrier height error are further improved by roughly 25% through pretraining React-OT on a large reaction dataset obtained with a lower level of theory, GFN2-xTB. We envision that the remarkable accuracy and rapid inference of React-OT will be highly useful when integrated with the current high-throughput TS search workflow. This integration will facilitate the exploration of chemical reactions with unknown mechanisms.
</description>
<pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162652</guid>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Blueprints for the Geometric Control of N-Heterocyclic Carbene–Carbodiimide Isomers</title>
<link>https://hdl.handle.net/1721.1/162651</link>
<description>Blueprints for the Geometric Control of N-Heterocyclic Carbene–Carbodiimide Isomers
Day, Craig S; Grabicki, Niklas; Chu, Daniel BK; Keys, Allison; Singhal, Avni; Vennelakanti, Vyshnavi; Kevlishvili, Ilia; Gómez‐Bombarelli, Rafael; Kulik, Heather J; Johnson, Jeremiah
Rational control of the 3D presentation of atoms—stereochemistry—lies at the heart of synthetic organic and materials chemistries. Here, researchers report detailed computational studies on conformational isomerism in N-heterocyclic carbene–carbodiimide (NHC–CDI) zwitterionic adducts. By varying the steric and electronic parameters of the NHC and CDI components, criteria for controlling isomerization thermodynamics and predicting energetically favorable conformations are identified. These criteria are validated experimentally using a novel synthetic approach to NHC–CDIs, which exploits the thermodynamic equilibrium between sterically unencumbered NHC dimers to access NHC–CDI adducts with low barriers to conformational isomerization, including the first example of an (E/E)-NHC–CDI.
</description>
<pubDate>Tue, 20 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162651</guid>
<dc:date>2025-05-20T00:00:00Z</dc:date>
</item>
<item>
<title>CoRE MOF DB: A curated experimental metal-organic framework database with machine-learned properties for integrated material-process screening</title>
<link>https://hdl.handle.net/1721.1/162650</link>
<description>CoRE MOF DB: A curated experimental metal-organic framework database with machine-learned properties for integrated material-process screening
Zhao, Guobin; Brabson, Logan M; Chheda, Saumil; Huang, Ju; Kim, Haewon; Liu, Kunhuan; Mochida, Kenji; Pham, Thang D; Prerna; Terrones, Gianmarco G; Yoon, Sunghyun; Zoubritzky, Lionel; Coudert, François-Xavier; Haranczyk, Maciej; Kulik, Heather J; Moosavi, Seyed Mohamad; Sholl, David S; Siepmann, J Ilja; Snurr, Randall Q; Chung, Yongchul G
We present an updated version of the Computation-Ready, Experimental (CoRE) Metal-Organic Framework (MOF) database, which includes a curated set of computation-ready MOF crystal structures designed for high-throughput computational materials discovery. Data collection and curation procedures were improved from the previous version to enable more frequent updates in the future. Machine-learning-predicted properties, such as stability metrics and heat capacities, are included in the dataset to streamline screening activities. An updated version of MOFid was developed to provide detailed information on metal nodes, organic linkers, and topologies of an MOF structure. DDEC6 partial atomic charges of MOFs were assigned based on a machine-learning model. Gibbs ensemble Monte Carlo simulations were used to classify the hydrophobicity of MOFs. The finalized dataset was subsequently used to perform integrated material-process screening for various carbon-capture conditions using high-fidelity temperature-swing adsorption (TSA) simulations. Our workflow identified multiple MOF candidates that are predicted to outperform CALF-20 for these applications.
</description>
<pubDate>Wed, 04 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162650</guid>
<dc:date>2025-06-04T00:00:00Z</dc:date>
</item>
<item>
<title>Toward Scalable Learning-Based Optical Restoration</title>
<link>https://hdl.handle.net/1721.1/162649</link>
<description>Toward Scalable Learning-Based Optical Restoration
Huang, Siyong; Song, Qingyu; Yu, Kexin; Wang, Zhaoning; Zhong, Zhizhen; Xiang, Qiao; Shu, Jiwu
The increasing scale and dynamic nature of modern optical networks present significant challenges to the scalability and adaptability of fault recovery. Existing state-of-the-art (SOTA) optical restoration methods rely primarily on offline pre-computation for each fault scenario, followed by online traffic reallocation. Their scalability to large network topologies is limited by the reliance on traditional solvers and imprecise modeling of potential faults.
This paper proposes LBOR, an optical restoration system built on multi-agent reinforcement learning (MARL) and integrated with a traffic allocation framework. We introduce a sequential restoration workflow for each failed IP link, employing two agents dedicated to path selection and wavelength assignment, respectively. In addition, we develop a randomized assignment ordering strategy to mitigate premature convergence to local optima and an action masking mechanism to prune the MARL search space. Experiments conducted on a large topology with 70 nodes indicate that LBOR achieves up to a 1000× speedup compared to the SOTA approach, with only a slight reduction in allocation precision.
APNET 2025, Shanghai, China
</description>
<pubDate>Wed, 06 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162649</guid>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Words: An Experimental Study of Signaling in Crowdfunding</title>
<link>https://hdl.handle.net/1721.1/162648</link>
<description>Beyond Words: An Experimental Study of Signaling in Crowdfunding
Dambanemuya, Henry; Choi, Eunseo; Gergle, Darren; Horvát, Emőke-Ágnes
Increasingly, crowdfunding is transforming financing for many people worldwide. Yet we know relatively little about how, why, and when funding outcomes are impacted by signaling between funders. We conduct two studies of N=500 and N=750 participants involved in crowdfunding to investigate the effect of crowd signals, i.e., certain characteristics deduced from the amounts and timing of contributions, on the decision to fund. In our first study, we find that, under a variety of conditions, contributions of heterogeneous amounts arriving at varying time intervals are significantly more likely to be selected than homogeneous contribution amounts and times. The impact of signaling is strongest among participants who are susceptible to social influence. The effect is remarkably general across different project types, fundraising goals, participant interest in the projects, and participants' altruistic attitudes. Our second study using less strict controls indicates that the role of crowd signals in decision-making is typically unrecognized by participants. Our results underscore the fundamental nature of social signaling in crowdfunding. They highlight the importance of designing around these crowd signals and inform user strategies both on the project creator and funder side.
</description>
<pubDate>Sat, 14 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162648</guid>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>Runtime Bounds for a Coevolutionary Algorithm on Classes of Potential Games</title>
<link>https://hdl.handle.net/1721.1/162647</link>
<description>Runtime Bounds for a Coevolutionary Algorithm on Classes of Potential Games
Hevia Fajardo, Mario Alejandro; Toutouh, Jamal; Hemberg, Erik; O'Reilly, Una-May; Lehre, Per Kristian
Coevolutionary algorithms are a family of black-box optimisation algorithms with many applications in game theory. We study a coevolutionary algorithm on an important class of games in game theory: potential games. In these games, a real-valued function defined over the entire strategy space encapsulates the strategic choices of all players collectively. We present the first theoretical analysis of a coevolutionary algorithm on potential games, showing a runtime guarantee that holds for all exact potential games, some weighted and ordinal potential games, and certain non-potential games. Using this result, we show a polynomial runtime on singleton congestion games. Furthermore, we show that there exist games for which coevolutionary algorithms find Nash equilibria exponentially faster than best or better response dynamics, and games for which coevolutionary algorithms find better Nash equilibria as well. Finally, we conduct experimental evaluations showing that our algorithm can outperform widely used algorithms, such as better response on random instances of singleton congestion games, as well as fictitious play, counterfactual regret minimisation (CFR), and external sampling CFR on dynamic routing games.
FOGA ’25, Leiden, Netherlands
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162647</guid>
</item>
<item>
<title>MeshTorrent: A Community-Driven P2P System for AI-Generated 3D Model Creation and Distribution</title>
<link>https://hdl.handle.net/1721.1/162646</link>
<description>MeshTorrent: A Community-Driven P2P System for AI-Generated 3D Model Creation and Distribution
Lewis, Ryan Hardesty
MeshTorrent is a peer-to-peer platform for automated 3D content creation and exchange, inspired by BitTorrent-style file sharing. By merging AI-based text-to-3D generation with swarm-based distribution, MeshTorrent harnesses the combined bandwidth and storage resources of its users, enabling scalable and decentralized sharing of 3D assets. This paper describes the core design of MeshTorrent, including an AI workflow for generating fresh .glb files, metadata management via a distributed hash table, partial previews for quick inspection, and specialized extensions for 2D sprites (SpriteTorrent) and rigged character models (RigTorrent). Preliminary tests show faster content download times than single-host alternatives, reduced server costs, and robust resilience to network churn, advancing an open ecosystem for collaborative 3D model exchange.
SIGGRAPH Labs ’25, Vancouver, BC, Canada
</description>
<pubDate>Sun, 10 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162646</guid>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Graphics4Science: Computer Graphics for Scientific Impacts</title>
<link>https://hdl.handle.net/1721.1/162645</link>
<description>Graphics4Science: Computer Graphics for Scientific Impacts
Chen, Peter Yichen; Guo, Minghao; Pfister, Hanspeter; Lin, Ming; Freeman, William; Huang, Qixing; Shen, Han-Wei; Matusik, Wojciech
Computer graphics, often associated with films, games, and visual effects, has long been a powerful tool for addressing scientific challenges—from its origins in 3D visualization for medical imaging to its role in modern computational modeling and simulation. This course explores the deep and evolving relationship between computer graphics and science, highlighting past achievements, ongoing contributions, and open questions that remain. We show how core methods, such as geometric reasoning and physical modeling, provide inductive biases that help address challenges in both fields, especially in data-scarce settings. To that end, we aim to reframe graphics as a modeling language for science by bridging vocabulary gaps between the two communities. Designed for both newcomers and experts, Graphics4Science invites the graphics community to engage with science, tackle high-impact problems where graphics expertise can make a difference, and contribute to the future of scientific discovery. Additional details are available on the course website: https://graphics4science.github.io.
SIGGRAPH Courses ’25, Vancouver, BC, Canada
</description>
<pubDate>Thu, 14 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162645</guid>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</item>
<item>
<title>Mobile Underwater Backscatter Networking</title>
<link>https://hdl.handle.net/1721.1/162642</link>
<description>Mobile Underwater Backscatter Networking
Wang, Purui; Afzal, Sayed Saad; Adib, Fadel
Underwater backscatter is a promising technology for ultra-low-power underwater networking, but existing systems break down in mobile scenarios. This paper presents EchoRider, the first system to enable reliable underwater backscatter networking under mobility.
EchoRider introduces three key components. First, it incorporates a robust and energy-efficient downlink architecture that uses chirp-modulated transmissions at the reader and a sub-Nyquist chirp decoder on backscatter nodes—bringing the resilience of LoRa-style signaling to underwater backscatter while remaining ultra-low-power. Second, it introduces a NACK-based full-duplex retransmission protocol, enabling efficient, reliable packet delivery. Third, it implements a Doppler-resilient uplink decoding pipeline that includes adaptive equalization, polar coding, and dynamic retraining to combat channel variation.
We built a full EchoRider prototype and evaluated it across over 1,200 real-world mobile experiments. EchoRider improves bit error rate by over 125× compared to a state-of-the-art baseline and maintains underwater goodput of 0.8 kbps at speeds up to 2.91 knots. In contrast, the baseline fails at speeds as low as 0.17 knots. Finally, we demonstrate EchoRider in end-to-end deployments involving mobile drones and sensor nodes, showing its effectiveness in practical underwater networked applications.
SIGCOMM ’25, Coimbra, Portugal
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162642</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>PreTE: Traffic Engineering with Predictive Failures</title>
<link>https://hdl.handle.net/1721.1/162641</link>
<description>PreTE: Traffic Engineering with Predictive Failures
Miao, Congcong; Zhong, Zhizhen; Zhao, Yiren; Gupta, Arpit; Zhang, Ying; Li, Sirui; He, Zekun; Zou, Xianneng; Wang, Jilong
Fiber links in wide-area networks (WANs) are exposed to complicated environments and hence are vulnerable to failures like fiber cuts. The conventional approach of using static probabilistic failures falls short in fiber-cut scenarios because these fiber cuts are rare but disruptive, making it difficult for network operators to balance network utilization and availability in WAN traffic engineering. Our large-scale measurements of per-second optical-layer data reveal that the fiber's failure probability increases by several orders of magnitude when experiencing a rare and ephemeral degradation state. Therefore, we present a novel traffic engineering (TE) system called PreTE to factor in the dynamic fiber cut probabilities directly into TE systems. At the core of the PreTE system, fiber degradation data drives failure prediction, allowing traffic tunnels to be updated proactively, followed by traffic allocation optimization among the updated tunnels. We evaluate PreTE using a production-level WAN testbed and large-scale simulations. The testbed evaluation quantifies PreTE's runtime to demonstrate the feasibility of implementation in large-scale WANs. Our large-scale simulation results show that PreTE can support up to 2× more demand at the same level of availability as compared to existing TE schemes.
SIGCOMM ’25, September 8–11, 2025, Coimbra, Portugal
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162641</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon- and Precedence-Aware Scheduling for Data Processing Clusters</title>
<link>https://hdl.handle.net/1721.1/162640</link>
<description>Carbon- and Precedence-Aware Scheduling for Data Processing Clusters
Lechowicz, Adam; Shenoy, Rohan; Bashir, Noman; Hajiesmaili, Mohammad; Wierman, Adam; Delimitrou, Christina
As large-scale data processing workloads continue to grow, their carbon footprint raises concerns. Prior research on carbon-aware schedulers has focused on shifting computation to align with the availability of low-carbon energy, but these approaches assume that each task can be executed independently. In contrast, data processing jobs have precedence constraints that complicate decisions, since delaying an upstream "bottleneck" task to a low-carbon period also blocks downstream tasks, impacting makespan. In this paper, we show that carbon-aware scheduling for data processing benefits from knowledge of both time-varying carbon and precedence constraints. Our main contribution is PCAPS, a carbon-aware scheduler that builds on state-of-the-art scoring or probability-based techniques; in doing so, it explicitly weighs the structural importance of each task against the time-varying characteristics of carbon intensity. To illustrate gains due to fine-grained task-level scheduling, we also study CAP, a wrapper for any carbon-agnostic scheduler that generalizes the provisioning ideas of PCAPS. Both techniques allow a user-configurable priority between carbon and makespan, and we give basic analytic results to relate the trade-off between these objectives. Our prototype on a 100-node Kubernetes cluster shows that a moderate configuration of PCAPS reduces carbon footprint by up to 32.9% without significantly impacting total efficiency.
SIGCOMM ’25, Coimbra, Portugal
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162640</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training</title>
<link>https://hdl.handle.net/1721.1/162639</link>
<description>MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training
Liao, Xudong; Sun, Yijun; Tian, Han; Wan, Xinchen; Jin, Yilun; Wang, Zilong; Ren, Zhenghang; Huang, Xinyang; Li, Wenxue; Tse, Kin Fai; Zhong, Zhizhen; Liu, Guyue; Zhang, Ying; Ye, Xiaofeng; Zhang, Yiming; Chen, Kai
Mixture-of-Expert (MoE) models outperform conventional models by selectively activating different subnets, named experts, on a per-token basis. This gated computation generates dynamic communications that cannot be determined beforehand, challenging the existing GPU interconnects that remain static during distributed training. In this paper, we advocate for a first-of-its-kind system, called MixNet, that unlocks topology reconfiguration during distributed MoE training. Towards this vision, we first perform a production measurement study and show that the MoE dynamic communication pattern has strong locality, alleviating the need for global reconfiguration. Based on this, we design and implement a regionally reconfigurable high-bandwidth domain that augments existing electrical interconnects using optical circuit switching (OCS), achieving scalability while maintaining rapid adaptability. We build a fully functional MixNet prototype with commodity hardware and a customized collective communication runtime. Our prototype trains state-of-the-art MoE models with in-training topology reconfiguration across 32 A100 GPUs. Large-scale packet-level simulations show that MixNet achieves performance comparable to a non-blocking fat-tree fabric while boosting the networking cost efficiency (e.g., performance per dollar) of four representative MoE models by 1.2×–1.5× and 1.9×–2.3× at 100 Gbps and 400 Gbps link bandwidths, respectively.
SIGCOMM ’25, Coimbra, Portugal
</description>
<pubDate>Wed, 27 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162639</guid>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Political Prediction and the Wisdom of Crowds</title>
<link>https://hdl.handle.net/1721.1/162638</link>
<description>Political Prediction and the Wisdom of Crowds
Sethi, Rajiv; Seager, Julie; Morstatter, Fred; Benjamin, Daniel; Hammell, Anna; Liu, Tianshuo; Patel, Sachi; Subramanian, Ramya
We evaluate the relative forecasting performance of three statistical models and a prediction market for several outcomes decided during the November 2024 elections in the United States—the winner of the presidency, the popular vote, fifteen competitive states in the Electoral College, eleven Senate races, and thirteen House races. We argue that conventional measures of predictive accuracy such as the average daily Brier score reward modeling flaws that result in predictable reversals, as long as such movements are in a direction that is aligned with the eventual outcome. Instead, we adopt a test based on the idea that the strength of a model can be measured by the profitability of a trader who believes its forecasts and bets on the market based on this belief. The results of this test depend on the risk preferences with which the trader is endowed, but we show that within a large parameter range this does not lead to ranking reversals. We find that all models failed to beat the market in the headline contract but some did so convincingly in contracts referencing less visible races.
CI 2025, San Diego, CA, USA
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162638</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Evolutionary and Coevolutionary Multi-Agent Design Choices and Dynamics</title>
<link>https://hdl.handle.net/1721.1/162637</link>
<description>Evolutionary and Coevolutionary Multi-Agent Design Choices and Dynamics
Hemberg, Erik; Moskal, Stephen; O'Reilly, Una-May; Liu; Fuller
We investigate two representation alternatives for the controllers of teams of cyber agents. We combine these controller representations with different evolutionary algorithms, one of which introduces a novel LLM-supported mutation operator. Using a cyber security scenario, we evaluate agent learning when one side is trained to compete against a side that does not evolve and when two sides coevolve with each other. This allows us to quantify the relative merits and tradeoffs of representation and algorithm combinations in terms of team performance. The scenario also allows us to compare the performance impact and dynamics of coevolution versus evolution under different combinations. Across the algorithms and representations, we observe that coevolution reduces the performance highs and lows of both sides while inducing fluctuations on both sides. In contrast, when only one side is optimized, performance peaks are higher and more sustained than when both sides are optimized with coevolution.
GECCO ’25 Companion, July 14–18, 2025, Malaga, Spain
</description>
<pubDate>Mon, 11 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162637</guid>
<dc:date>2025-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>MEDS: Building Models and Tools in a Reproducible Health AI Ecosystem</title>
<link>https://hdl.handle.net/1721.1/162636</link>
<description>MEDS: Building Models and Tools in a Reproducible Health AI Ecosystem
McDermott, Matthew; Xu, Justin; Bergamaschi, Teya; Jeong, Hyewon; Lee, Simon; Oufattole, Nassim; Rockenschaub, Patrick; Steinberg, Ethan; Sun, Jimeng; Water, Robin; Wornow, Michael; Wu, John; Wu, Zhenbang; Stankevičiūtė, Kamilė
Health AI suffers from a systemic reproducibility crisis that irreparably hinders research in this space across academia and industry. To combat this and empower researchers in the health AI space, we propose a comprehensive interactive tutorial introducing the "Medical Event Data Standard" (MEDS) and its growing open-source ecosystem. MEDS simplifies the construction of AI models on longitudinal Electronic Health Record (EHR) datasets, whether public or private, and enables straightforward benchmarking of existing, published models against contributions on local datasets and tasks. Reflecting its growing adoption, MEDS is utilized at over 15 institutions across 8 countries, features 7+ open-source tools, supports 10+ published models, and provides publicly available Extract-Transform-Load (ETL) pipelines for major public EHR datasets. A KDD tutorial offering practical experience with MEDS will significantly enhance reproducibility and comparability in health AI research.
In this tutorial, we will teach attendees how to (1) transform datasets into the MEDS format, (2) pre-process MEDS data for modeling needs, (3) build highly effective, efficient AI models for diverse predictive tasks on their datasets, and (4) contribute their results to MEDS-DEV, a decentralized benchmark enabling robust evaluation against meaningful baselines. Participants will engage in collaborative, minimal-dependency Jupyter notebook exercises, guided through each step by structured instruction and practical coding sessions. Attendees will leave equipped with practical knowledge to build reproducible, state-of-the-art AI models within the MEDS ecosystem.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162636</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>A Guide to Misinformation Detection Data and Evaluation</title>
<link>https://hdl.handle.net/1721.1/162635</link>
<description>A Guide to Misinformation Detection Data and Evaluation
Thibault, Camille; Tian, Jacob-Junqi; Péloquin-Skulski, Gabrielle; Curtis, Taylor; Zhou, James; Laflamme, Florence; Guan, Luke Yuxiang; Rabbany, Reihaneh; Godbout, Jean-François; Pelrine, Kellin
Misinformation is a complex societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of 36 datasets that consist of statements or claims, as well as the 9 datasets that consist of data in purely paragraph form. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as spurious correlations, or examples that are ambiguous or otherwise impossible to assess for veracity. We find the latter issue is particularly severe and affects most datasets in the literature. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. Finally, we propose and highlight Evaluation Quality Assurance (EQA) as a tool to guide the field toward systemic solutions rather than inadvertently propagating issues in evaluation. Overall, this guide aims to provide a roadmap for higher quality data and better grounded evaluations, ultimately improving research in misinformation detection. All datasets and other artifacts are available at misinfo-datasets.complexdatalab.com. The extended paper, including the appendices, can be accessed via arXiv at arxiv.org/abs/2411.05060.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162635</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>CH−π interactions confer orientational flexibility in protein–carbohydrate binding sites</title>
<link>https://hdl.handle.net/1721.1/162626</link>
<description>CH−π interactions confer orientational flexibility in protein–carbohydrate binding sites
Keys, Allison M; Kastner, David W; Kiessling, Laura L; Kulik, Heather J
Protein-carbohydrate binding plays an essential role in biological processes including cellular recognition and immune signaling. However, glycans are hydrophilic with limited hydrophobic surfaces, a challenge for selective recognition by proteins. CH-π stacking interactions are pervasive in protein-carbohydrate binding sites and have emerged as critical drivers of protein-carbohydrate recognition. These interactions are highly favorable and have a broad orientational landscape. However, it is unknown how the orientations of CH-π stacking interactions are influenced by the protein environment; their functional interplay with hydrogen bonds in protein-carbohydrate binding is also unclear. Here, we employ well-tempered metadynamics simulations to obtain binding free energy landscapes for a set of protein-β-D-galactoside complexes with CH-π stacking interactions. Our data show that the favored orientation of a CH-π stacking interaction is controlled by the location of hydrogen bonds in the protein binding site. Complexes with extended carbohydrate ligands that form additional hydrogen bonds have more specific orientational dependencies, while protein variant complexes with fewer hydrogen bonds have broader free energy landscapes with glycan ligands adopting multiple CH-π stacking interaction orientations. We also show that forming multiple CH-π stacking interactions facilitates the dynamics necessary for the translocation of oligosaccharide ligands within a processive enzyme. Our findings underscore the cooperative nature of hydrogen bonds and CH-π stacking interactions, demonstrating that tuning the number and positions of these interactions through evolution or protein engineering can alter ligand recognition or support ligand movement.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162626</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Throughput Discovery of Ferrocene Mechanophores with Enhanced Reactivity and Network Toughening</title>
<link>https://hdl.handle.net/1721.1/162625</link>
<description>High-Throughput Discovery of Ferrocene Mechanophores with Enhanced Reactivity and Network Toughening
Kevlishvili, Ilia; Vakil, Jafer; Kastner, David W; Huang, Xiao; Craig, Stephen L; Kulik, Heather J
Mechanophores are molecules that undergo chemical changes in response to mechanical force, offering unique opportunities in chemistry, materials science, and drug delivery. However, many potential mechanophores remain unexplored. For example, ferrocenes are attractive targets as mechanophores due to their combination of high thermal stability and mechanochemical lability. However, the mechanochemical potential of ferrocene derivatives remains dramatically underexplored despite the synthesis of thousands of structurally diverse complexes. Herein, we report the computational, machine learning guided discovery of synthesizable ferrocene mechanophores. We identify over one hundred potential target ferrocene mechanophores with wide-ranging mechanochemical activity and use data-driven computational screening to identify a select number of promising complexes. We highlight design principles to alter their mechanochemical activation, including regio-controlled transition state stabilization through bulky groups and a change in mechanism through noncovalent ligand–ligand interactions. The computational screening is validated experimentally both at the polymer strand level through sonication experiments and at the network level, where a computationally discovered ferrocene mechanophore cross-linker leads to greater than 4-fold enhancement in material tearing energy. This work establishes a generalizable framework for the high-throughput discovery and rational design of mechanophores and offers insights into structure–activity relationships in mechanically responsive materials.
</description>
<pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162625</guid>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>MOSS: Multi-Objective Optimization for Stable Rule Sets</title>
<link>https://hdl.handle.net/1721.1/162624</link>
<description>MOSS: Multi-Objective Optimization for Stable Rule Sets
Liu, Brian; Mazumder, Rahul
We present MOSS, a multi-objective optimization framework for constructing stable sets of decision rules. MOSS incorporates three important criteria for interpretability (sparsity, accuracy, and stability) into a single multi-objective optimization framework. Importantly, MOSS allows a practitioner to rapidly evaluate the trade-off between accuracy and stability in sparse rule sets in order to select an appropriate model. We develop a specialized cutting plane algorithm in our framework to rapidly compute the Pareto frontier between these two objectives, and our algorithm scales to problem instances beyond the capabilities of commercial optimization solvers. Our experiments show that MOSS outperforms state-of-the-art rule ensembles in terms of both predictive performance and stability.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162624</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>FlanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images</title>
<link>https://hdl.handle.net/1721.1/162623</link>
<description>FlanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images
Da, Longchao; Wang, Rui; Xu, Xiaojian; Bhatia, Parminder; Kass-Hout, Taha; Wei, Hua; Xiao, Cao
Medical imaging is crucial for diagnosing a patient's health condition, and accurate segmentation of these images is essential for isolating regions of interest to ensure precise diagnosis and treatment planning. Existing methods primarily rely on bounding boxes or point-based prompts, while few have explored text-related prompts, despite clinicians often describing their observations and instructions in natural language. To address this gap, we first propose a RAG-based free-form text prompt generator that leverages the domain corpus to generate diverse and realistic descriptions. Then, we introduce FLanS, a novel medical image segmentation model that handles various free-form text prompts, including professional anatomy-informed queries, anatomy-agnostic position-driven queries, and anatomy-agnostic size-driven queries. Additionally, our model incorporates a symmetry-aware canonicalization module to ensure consistent, accurate segmentations across varying scan orientations and reduce confusion between the anatomical position of an organ and its appearance in the scan. FLanS is trained on a large-scale dataset of over 100k medical images from 7 public datasets. Comprehensive experiments demonstrate the model's superior language understanding and segmentation precision, along with a deep comprehension of the relationship between them, outperforming SOTA baselines on both in-domain and out-of-domain datasets.
KDD ’25, August 3–7, 2025, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162623</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>RL4CO: An Extensive Reinforcement Learning for Combinatorial Optimization Benchmark</title>
<link>https://hdl.handle.net/1721.1/162622</link>
<description>RL4CO: An Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
Berto, Federico; Hua, Chuanbo; Park, Junyoung; Luttmann, Laurin; Ma, Yining; Bu, Fanchen; Wang, Jiarui; Ye, Haoran; Kim, Minsu; Choi, Sanghyeok; Zepeda, Nayeli; Hottung, André; Zhou, Jianan; Bi, Jieyi; Hu, Yu; Liu, Fei; Kim, Hyeonah; Son, Jiwoo; Kim, Haeyeon; Angioni, Davide; Kool, Wouter
Combinatorial optimization (CO) is fundamental to several real-world applications, from logistics and scheduling to hardware design and resource allocation. Deep reinforcement learning (RL) has recently shown significant benefits in solving CO problems, reducing reliance on domain expertise and improving computational efficiency. However, the absence of a unified benchmarking framework leads to inconsistent evaluations, limits reproducibility, and increases engineering overhead, raising barriers to adoption for new researchers. To address these challenges, we introduce RL4CO, a unified and extensive benchmark with in-depth library coverage of 27 CO problem environments and 23 state-of-the-art baselines. Built on efficient software libraries and best practices in implementation, RL4CO features modularized implementation and flexible configurations of diverse environments, policy architectures, RL algorithms, and utilities with extensive documentation. RL4CO helps researchers build on existing successes while exploring and developing their own designs, facilitating the entire research process by decoupling science from heavy engineering. We finally provide extensive benchmark studies to inspire new insights and future work. RL4CO has already attracted numerous researchers in the community and is open-sourced at https://github.com/ai4co/rl4co.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162622</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>SPARTA: An Optimization Framework for Differentially Private Sparse Fine-Tuning</title>
<link>https://hdl.handle.net/1721.1/162621</link>
<description>SPARTA: An Optimization Framework for Differentially Private Sparse Fine-Tuning
Makni, Mehdi; Behdin, Kayhan; Afriat, Gabriel; Xu, Zheng; Vassilvitskii, Sergei; Ponomareva, Natalia; Mazumder, Rahul; Hazimeh, Hussein
Differentially private stochastic gradient descent (DP-SGD) is broadly considered to be the gold standard for training and fine-tuning neural networks under differential privacy (DP). With the increasing availability of high-quality pre-trained model checkpoints (e.g., vision and language models), fine-tuning has become a popular strategy. However, despite recent progress in understanding and applying DP-SGD for private transfer learning tasks, significant challenges remain, most notably the performance gap between models fine-tuned with DP-SGD and their non-private counterparts. Sparse fine-tuning on private data has emerged as an alternative to full-model fine-tuning: recent work has shown that privately fine-tuning only a small subset of model weights, keeping the rest fixed, can lead to better performance. In this work, we propose a new approach for sparse fine-tuning of neural networks under DP. Existing work on private sparse fine-tuning often used a fixed choice of trainable weights (e.g., updating only the last layer) or relied on the public model's weights to choose the subset of weights to modify; such choices remain suboptimal. In contrast, we explore an optimization-based approach in which our selection method makes use of private gradient information while using off-the-shelf privacy accounting techniques. Our numerical experiments on several computer vision models and datasets show that our parameter selection method leads to better prediction accuracy compared to full-model private fine-tuning or existing private sparse fine-tuning approaches. Our code is available at: https://github.com/mazumder-lab/SPARTA/tree/main
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162621</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>When Heterophily Meets Heterogeneity: Challenges and a New Large-Scale Graph Benchmark</title>
<link>https://hdl.handle.net/1721.1/162620</link>
<description>When Heterophily Meets Heterogeneity: Challenges and a New Large-Scale Graph Benchmark
Lin, Junhong; Guo, Xiaojie; Zhang, Shuaicheng; Zhu, Yada; Shun, Julian
Graph mining has become crucial in fields such as social science, finance, and cybersecurity. Many large-scale real-world networks exhibit both heterogeneity, where multiple node and edge types exist in the graph, and heterophily, where connected nodes may have dissimilar labels and attributes. However, existing benchmarks primarily focus on either heterophilic homogeneous graphs or homophilic heterogeneous graphs, leaving a significant gap in understanding how models perform on graphs with both heterogeneity and heterophily. To bridge this gap, we introduce H2GB, a large-scale node-classification graph benchmark that brings together the complexities of both the heterophily and heterogeneity properties of real-world graphs. H2GB encompasses 9 real-world datasets spanning 5 diverse domains, 28 baseline models, and a unified benchmarking library with a standardized data loader, evaluator, unified modeling framework, and an extensible framework for reproducibility. We establish a standardized workflow supporting both model selection and development, enabling researchers to easily benchmark graph learning methods. Extensive experiments across 28 baselines reveal that current methods struggle with heterophilic and heterogeneous graphs, underscoring the need for improved approaches. Finally, we present a new variant of the model, H2G-former, developed following our standardized workflow, that excels at this challenging benchmark. Both the benchmark and the framework are publicly available at Github and PyPI, with documentation hosted at https://junhongmit.github.io/H2GB.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162620</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Inflammation awakens dormant cancer cells by modulating the epithelial–mesenchymal phenotypic state</title>
<link>https://hdl.handle.net/1721.1/162603</link>
<description>Inflammation awakens dormant cancer cells by modulating the epithelial–mesenchymal phenotypic state
Zhang, Jingwei; Zhang, Jingwen; Han, Longfei; Wu, Shiyi; Li, Jie; Eaton, Elinor Ng; Yuan, Bingbing; Reinhardt, Ferenc; Li, Hao; Strasser, Patrick C.; Das, Sunny; Donaher, Joana Liu; Khalil, Md Imtiaz; Jiang, Haiping; Deuschel, Alexander; Lin, Danni; Sebastiany, Carolin; Maranga, Mariana; Shubitidze, Salomé; Liu, Xiaofei; Lambert, Arthur W.; Zhang, Yun; Liu, Yana; Sui, Lufei; Elmiligy, Sarah; Pozza, Umberto; Günsay, Rauf; Mishra, Ranjan; Velarde, Jose; Iyer, Sonia; Henry, Whitney S.; Weiskopf, Kipp; Feng, Guihai; Oni, Tobiloba E.; Watnick, Randolph S.; Li, Xin; Weinberg, Robert A
The awakening of dormant disseminated cancer cells appears to be responsible for the clinical relapses of patients whose primary tumors have been successfully cured months and even years earlier. In the present study, we demonstrate that dormant breast cancer cells lodged in the lungs reside in a highly mesenchymal, nonproliferative phenotypic state. The awakening of these cells is not triggered by a cancer cell-autonomous process. Instead, lung inflammation induced by the chemotherapeutic agent bleomycin effectively awakens dormant cancer cells, providing useful models for studying metastatic awakening. Mechanistically, the awakened cells shift from a highly mesenchymal to a quasi-mesenchymal phenotypic state in which they acquire tumorigenicity and proliferative ability. Once awakened, these cells can stably reside in this quasi-mesenchymal state and maintain their tumor-initiating ability, doing so without ongoing heterotypic signaling from the lung microenvironment. Epidermal growth factor receptor ligands released by the cells of the injured tissue microenvironment, including notably M2 type macrophages, promote dormant cancer cells to move toward this quasi-mesenchymal state, a transition that is critical for the awakening process. An understanding of the mechanisms of metastatic awakening may lead in the future to treatment strategies designed to prevent such awakening and resulting metastatic relapse.
</description>
<pubDate>Wed, 03 Sep 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162603</guid>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</item>
<item>
<title>Causality - Exploiting Multi-Modal Data</title>
<link>https://hdl.handle.net/1721.1/162599</link>
<description>Causality - Exploiting Multi-Modal Data
Uhler, Caroline
Massive data collection holds the promise of a better understanding of complex phenomena and ultimately, of better decisions. Representation learning has become a key driver of deep learning applications, since it allows learning latent spaces that capture important properties of the data without requiring any supervised annotations. While representation learning has been hugely successful in predictive tasks, it can fail miserably in causal tasks including predicting the effect of an intervention. This calls for a marriage between representation learning and causal inference. An exciting opportunity in this regard stems from the growing availability of multi-modal and interventional data (in medicine, advertisement, education, etc.). However, these datasets are still minuscule compared to the action spaces of interest in these applications (e.g. interventions can take on continuous values like the dose of a drug or can be combinatorial as in combinatorial drug therapies). In this talk, we will present a statistical and computational framework for causal representation learning from multi-modal data and its application towards optimal intervention design.
KDD '25, August 3–7, 2025, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162599</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>A Survey on Trustworthy LLM Agents: Threats and Countermeasures</title>
<link>https://hdl.handle.net/1721.1/162598</link>
<description>A Survey on Trustworthy LLM Agents: Threats and Countermeasures
Yu, Miao; Meng, Fanci; Zhou, Xinyun; Wang, Shilong; Mao, Junyuan; Pan, Linsey; Chen, Tianlong; Wang, Kun; Li, Xinfeng; Zhang, Yongfeng; An, Bo; Wen, Qingsong
With the rapid evolution of Large Language Models (LLMs), LLM-based agents and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems. This evolution stems from empowering LLMs with additional modules such as memory, tools, environment, and even other agents. However, this advancement has also introduced more complex issues of trustworthiness, which previous research focusing solely on LLMs could not cover. In this survey, we propose the TrustAgent framework, a comprehensive study on the trustworthiness of agents, characterized by modular taxonomy, multi-dimensional connotations, and technical implementation. By thoroughly investigating and summarizing newly emerged attacks, defenses, and evaluation methods for agents and MAS, we extend the concept of Trustworthy LLM to the emerging paradigm of Trustworthy Agent. In TrustAgent, we begin by deconstructing and introducing various components of the Agent and MAS. Then, we categorize their trustworthiness into intrinsic (brain, memory, and tool) and extrinsic (user, agent, and environment) aspects. Subsequently, we delineate the multifaceted meanings of trustworthiness and elaborate on the implementation techniques of existing research related to these internal and external modules. Finally, we present our insights and outlook on this domain, aiming to provide guidance for future endeavors. For easy reference, we categorize all the studies mentioned in this survey according to our taxonomy, available at: https://github.com/Ymm-cll/TrustAgent.
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 03 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162598</guid>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</item>
<item>
<title>Hopps: Leveraging Sparsity to Accelerate Automata Processing</title>
<link>https://hdl.handle.net/1721.1/162597</link>
<description>Hopps: Leveraging Sparsity to Accelerate Automata Processing
Du, Xingran; Emer, Joel; Sanchez, Daniel
Automata processing (AP) is a key kernel in data analytics and scientific computing. AP workloads process a stream of symbols with many automata (FSMs) in parallel, e.g., pattern-matching network traffic against many malicious strings.&#13;
The need for high-performance AP has sparked the design of specialized accelerators. But prior AP accelerators are inefficient: AP workloads have substantial sparsity, but accelerators exploit no or limited sparsity. Specifically, each AP workload can be expressed as the concurrent traversal of all automata, which are encoded as graphs. But state-of-the-art accelerators store these graphs uncompressed, using bitsets. This allows the use of specialized memory crossbars that provide high parallelism and efficiency when graphs are dense. But many graphs are highly sparse, making crossbar-based accelerators inefficient.&#13;
We present Hopps, the first automata processing accelerator that exploits sparse data representations. Hopps combines two types of processing units: one represents data uncompressed, which achieves high throughput but is space-inefficient, while the other uses a compressed-sparse representation, which achieves high space efficiency but lower and more variable throughput. To use Hopps well, we present a novel automata mapping algorithm that maps most work to high-throughput units, while keeping a large fraction of state in space-efficient units. Hopps's hybrid design relaxes several constraints in crossbar-based designs, allowing for more efficient high-throughput units (e.g., by using a large number of smaller crossbars). Thus, by making the uncommon case cheap, Hopps makes the common case even faster.&#13;
We evaluate Hopps on AutomataZoo benchmarks. Hopps outperforms prior state-of-the-art accelerators Impala and SpAP by gmean 2.5x and 2.2x when using equal area.
ASPLOS ’25, Rotterdam, Netherlands
</description>
<pubDate>Wed, 06 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162597</guid>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning</title>
<link>https://hdl.handle.net/1721.1/162596</link>
<description>Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning
Chia, Nai-Hui; Gilyen, Andras Pal; Li, Tongyang; Lin, Han-Hsuan; Tang, Ewin; Wang, Chunhao
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang’s breakthrough quantum-inspired algorithm for recommendation systems [STOC’19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC’19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all prior results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ2-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
</description>
<pubDate>Thu, 27 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162596</guid>
<dc:date>2022-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction Is Necessary for Distributed Learning with Privacy or Communication Constraints</title>
<link>https://hdl.handle.net/1721.1/162595</link>
<description>Interaction Is Necessary for Distributed Learning with Privacy or Communication Constraints
Dagan, Yuval; Feldman, Vitaly
Local differential privacy (LDP) is a model where users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model, the protocol consists of a single round in which the server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest.&#13;
Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction this lower bound implies an exponential lower bound for stochastic convex optimization and specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed by Smith, Thakurta, and Upadhyay (IEEE Symposium on Security and Privacy 2017) and Daniely and Feldman (NeurIPS 2019). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large margin hyperplane. These lower bounds also hold in the model where communication from each user is limited and follow from a lower bound on learning using non-adaptive statistical queries.
STOC ’20, June 22–26, 2020, Chicago, IL, USA
</description>
<pubDate>Mon, 22 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162595</guid>
<dc:date>2020-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception</title>
<link>https://hdl.handle.net/1721.1/162594</link>
<description>Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Feick, Martin; Tang, Xuxin; Garcia-Martin, Raul; Luchianov, Alexandru; Huang, Roderick; Xiao, Chang; Siu, Alexa; Dogan, Mustafa Doga
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embedding using only off-the-shelf IR inks and a camera. Imprinto was established through a psychophysical experiment, studying how much IR ink can be used while remaining invisible to users regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162594</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Large Language Models in Qualitative Research: Uses, Tensions, and Intentions</title>
<link>https://hdl.handle.net/1721.1/162590</link>
<description>Large Language Models in Qualitative Research: Uses, Tensions, and Intentions
Schroeder, Hope; Randazzo, Casey; Mimno, David; Schoenebeck, Sarita; Le Quéré, Marianne Aubin
Qualitative researchers use tools to collect, sort, and analyze their data. Should qualitative researchers use large language models (LLMs) as part of their practice? LLMs could augment qualitative research, but it is unclear if their use is appropriate, ethical, or aligned with qualitative researchers’ goals and values. We interviewed twenty qualitative researchers to investigate these tensions. Many participants see LLMs as promising interlocutors with attractive use cases across the stages of research, but wrestle with their performance and appropriateness. Participants surface concerns regarding the use of LLMs while protecting participant interests, and call attention to an urgent lack of norms and tooling to guide the ethical use of LLMs in research. We document the rapid and broad adoption of LLMs across surfaces, which can interfere with the intentional use vital to qualitative research. We use the tensions surfaced by our participants to outline recommendations for researchers considering using LLMs in qualitative research and design principles for LLM-assisted qualitative research tools.
CHI ’25, Yokohama, Japan
</description>
<pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162590</guid>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</item>
<item>
<title>Slow but Steady: Progress Toward Accessibility-Focused Initiatives in Computer Science Education</title>
<link>https://hdl.handle.net/1721.1/162589</link>
<description>Slow but Steady: Progress Toward Accessibility-Focused Initiatives in Computer Science Education
Jimenez, Yerika; Daily, Shaundra; Washington, A. Nicki; Sadler, Cecil
Accessibility remains insufficiently integrated in computer science (CS) education, despite its recognized importance. This paper examines how the 3C Fellows Program, a two-year professional development program, facilitated and supported the incorporation of identity-inclusive topics, namely disability, into the postsecondary CS education space. Through analysis of participant interviews and deliverable documentation, findings reveal that through the program, participants deepened their understanding of how disability impacts and is impacted by computing, leading to the design and implementation of five unique accessibility-focused educational initiatives. Results demonstrate that professional development can effectively increase accessibility-focused content in CS education.
RESPECT 2025, Newark, NJ, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162589</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Making Space: Dis/ability and the Scratch Online Community</title>
<link>https://hdl.handle.net/1721.1/162588</link>
<description>Making Space: Dis/ability and the Scratch Online Community
Sadler, Cecil; Trapp, Jaleesa
Dis/abled youth often face barriers to participation in computational making spaces. This paper examines how youth engage with the Scratch online community to share projects and discussions around dis/ability, creating meaningful connections through creative self-expression. Through counter-storytelling examples, we demonstrate how young people leverage Scratch not only as a programming platform but as a space to build community and celebrate dis/ability identity. Our findings uplift the ways in which young people engage in these spaces to highlight how creative computing environments foster inclusion and connection, dispelling deficit-based narratives in computer science education.
RESPECT 2025, July 14–16, 2025, Newark, NJ, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162588</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Redefining Data Science: Where Transformative Youth Organizing Meets Arts-Based Abolitionist Education</title>
<link>https://hdl.handle.net/1721.1/162587</link>
<description>Redefining Data Science: Where Transformative Youth Organizing Meets Arts-Based Abolitionist Education
Walker, Raechel; Cruse, Brady; Cora, Aisha; Breazeal, Cynthia
Data science courses often exclude engagement with minoritized groups, discouraging these students from persuing this field. Our Data Activism Program for African American students integrated arts-based abolitionist education and transformative youth organizing. Students collaborated with four community organizations, conducting interviews and surveys to engage with their community and highlight racial disparities in environmental injustice. Post-course surveys and interviews showed an increase in students' ability to apply transformative youth organizing to data science, demonstrating real-world impact. They found the program accessible and meaningful, transforming data science into a tool for self-expression, critical analysis, and activism rather than just an academic subject.
RESPECT 2025, Newark, NJ, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162587</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Liberatory Computing: Empowering African American Students Through Data Activism</title>
<link>https://hdl.handle.net/1721.1/162586</link>
<description>Liberatory Computing: Empowering African American Students Through Data Activism
Walker, Raechel
Computing curricula often inadvertently reinforce a harmful, singular narrative about African American communities, focusing solely on stories that emphasize crime prediction and policing [4, 8, 9]. This reinforces the harmful stereotype that African American communities are primarily sites of criminal activity rather than centers of innovation, creativity, and resilience [1, 5, 7]. In contrast, the framework I developed, "liberatory computing", offers a guideline that can be integrated into computing curricula precisely to counter these clichés [13]. Composed of Dr. Aaliyah El-Amin's five pillars of liberation (a sound racial identity, critical consciousness, collective obligation, a liberation-centered academic identity, and activism skills), liberatory computing empowers students to challenge and mitigate systemic oppression through computing [2]. My research applies this framework as a way to empower African American students to address embedded racism through data activism, in which I created two Data Activism Programs [10]. The first taught students how to use data science to support the minoritized communities of the participants, while the second incorporated collaboration with community organizers, increasing the inclusion of desire-based research [12].&#13;
My first Data Activism program engaged 12 high school students of color; the second included 24 students of African descent who partnered with Greater Boston community organizations on projects involving data, geospatial, and qualitative analysis, as well as artistic expression. Pre- and post-surveys showed increased awareness of data science's role in addressing racism and enhanced advocacy skills [12]. Interviews revealed that working to challenge systemic oppression inspired students to continue integrating data activism into their futures.
RESPECT 2025, Newark, NJ, USA
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162586</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Agentic AI for Science Hypothesis Generation, Comprehension, Quantification, and Validation</title>
<link>https://hdl.handle.net/1721.1/162585</link>
<description>Towards Agentic AI for Science Hypothesis Generation, Comprehension, Quantification, and Validation
Buehler, Markus
AI is revolutionizing scientific discovery by connecting seemingly unrelated fields – from mechanics to biology, and science to art. However, how can we build AI models that don’t merely retrieve information but make new discoveries, going beyond interpolation to extrapolate and reason over never-before-seen scenarios and concepts? In this talk we describe how a new generation of physics-aware AI is breaking traditional boundaries through:
• Innovative graph-based generative AI combining physics and data-driven modeling
• Biologically inspired neural structures that adapt dynamically
• Multi-agent systems that mirror natural systems
Through practical case studies, I will present how this technology transforms materials science across scales – from silk and collagen to biomineralized materials – with direct applications in medicine, food systems, and agriculture. The versatility in agent development, which allows for expertise in diverse domains including knowledge retrieval, protein structure analysis, physics-based simulations, and results analysis, is also presented. The dynamic collaboration between agents, empowered by LLMs that can reason over sequences, data, images, and text, provides a versatile approach to tackling protein design and analysis problems, as demonstrated through diverse examples in this study.
WWW Companion '25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162585</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>TIME 2025: 1st International Workshop on Transformative Insights in Multi-faceted Evaluation</title>
<link>https://hdl.handle.net/1721.1/162584</link>
<description>TIME 2025: 1st International Workshop on Transformative Insights in Multi-faceted Evaluation
Wang, Lei; Hossain, Md Zakir; Islam, Syed; Gedeon, Tom; Alghowinem, Sharifa; Yu, Isabella; Bono, Serena; Zhu, Xuanying; Nguyen, Gennie; Haldar, Nur Al Hasan; Jalali, Seyed Mohammad Jafar; Razzaque, Md Abdur; Razzak, Imran; Islam, Md Rafiqul; Uddin, Shahadat; Janjua, Naeem; Krishna, Aneesh; Ashraf, Manzur
Our workshop brings together domain experts and research students to share insights, practical guidance, and evaluations on key topics, including social network analysis, graph algorithms, web mining, semantics and knowledge, security, privacy, fairness, and ethics on the web. We invite survey, evaluation, or review papers that critically analyze models and datasets from diverse perspectives. These papers serve as essential resources by (i) providing quick reference guides for researchers and practitioners, (ii) enhancing accessibility for newcomers, and (iii) distilling key insights into actionable knowledge. Complementing these contributions, invited talks from experts and industry leaders will offer practical perspectives, fostering cross-domain collaboration in web technologies. Through thought-provoking discussions and networking opportunities, the workshop bridges research and real-world applications, setting a new standard for interdisciplinary exchange in the field.
WWW Companion ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162584</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigating Spatial Disparity in Urban Prediction Using Residual-Aware Spatiotemporal Graph Neural Networks: A Chicago Case Study</title>
<link>https://hdl.handle.net/1721.1/162583</link>
<description>Mitigating Spatial Disparity in Urban Prediction Using Residual-Aware Spatiotemporal Graph Neural Networks: A Chicago Case Study
Zhuang, Dingyi; Xu, Hanyong; Guo, Xiaotong; Zheng, Yunhan; Wang, Shenhao; Zhao, Jinhua
Urban prediction tasks, such as forecasting traffic flow, temperature, and crime rates, are crucial for efficient urban planning and management. However, existing Spatiotemporal Graph Neural Networks (ST-GNNs) often rely solely on accuracy, overlooking spatial and demographic disparities in their predictions. This oversight can lead to imbalanced resource allocation and exacerbate existing inequities in urban areas. This study introduces a Residual-Aware Attention (RAA) Block and an equality-enhancing loss function to address these disparities. By adapting the adjacency matrix during training and incorporating spatial disparity metrics, our approach aims to reduce local segregation of residuals and errors. We applied our methodology to urban prediction tasks in Chicago, utilizing travel demand datasets as an example. Our model achieved a significant 48% improvement in fairness metrics with only a 9% increase in error metrics. Spatial analysis of residual distributions revealed that models with RAA Blocks produced more equitable prediction results, particularly by reducing errors clustered in central regions, supporting more balanced and equitable urban planning and policy-making.
WWW Companion ’25, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162583</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Commonsense AI in the History of the Web</title>
<link>https://hdl.handle.net/1721.1/162582</link>
<description>Commonsense AI in the History of the Web
Kejriwal, Mayank; McGuinness, Deborah; Lieberman, Henry
Machine common sense (MCS)-the challenge of enabling computers to grasp everyday human knowledge-has been a grand challenge in Artificial Intelligence (AI) since the 1950s. While recent advances in large language models have led to impressive progress, there is still no consensus on how much common sense today's AI actually possesses. In this brief review, we revisit the historical development of MCS in the context of the Web, examining how the Web's evolution-from early knowledge representation efforts to knowledge graphs, the Semantic Web, and crowdsourcing-has shaped MCS research. We argue that key breakthroughs in Web technologies were instrumental in addressing longstanding challenges of scale and coverage in commonsense reasoning. At the same time, MCS research has influenced the development of core Web applications, including intelligent agents, plausibility-based reasoning, and robust evaluation of black-box AI systems.
WWW Companion ’25, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162582</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence for Complex Network: Potential, Methodology and Application</title>
<link>https://hdl.handle.net/1721.1/162581</link>
<description>Artificial Intelligence for Complex Network: Potential, Methodology and Application
Ding, Jingtao; Zheng, Yu; Wang, Huandong; Cannistraci, Carlo Vittorio; Gao, Jianxi; Li, Yong; Shi, Chuan
This tutorial will explore the fascinating domain of empirical network modeling through artificial intelligence (AI) techniques, with applications across social media, web systems, and urban environments. Participants will gain valuable insights into incorporating advanced AI methods—such as graph machine learning, deep reinforcement learning, and generative models—within complex network science. The goal is to provide a comprehensive understanding of how these models can effectively represent, predict, and control empirical networked systems with heterogeneous structures and dynamic processes. The tutorial will begin by introducing essential background knowledge, outlining motivations and challenges, exploring recent methodological advances, and highlighting key applications.
WWW Companion ’25, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162581</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Wikipedia Contributions in the Wake of ChatGPT</title>
<link>https://hdl.handle.net/1721.1/162580</link>
<description>Wikipedia Contributions in the Wake of ChatGPT
Lyu, Liang; Siderius, James; Li, Hannah; Acemoglu, Daron; Huttenlocher, Daniel; Ozdaglar, Asuman
How has Wikipedia activity changed for articles with content similar to ChatGPT following its introduction? We estimate the impact using differences-in-differences models, with dissimilar Wikipedia articles as a baseline for comparison, to examine how changes in voluntary knowledge contributions and information-seeking behavior differ by article content. Our analysis reveals that newly created, popular articles whose content overlaps with ChatGPT 3.5 saw a greater decline in editing and viewership after the November 2022 launch of ChatGPT than dissimilar articles did. These findings indicate heterogeneous substitution effects, where users selectively engage less with existing platforms when AI provides comparable content. This points to potential uneven impacts on the future of human-driven online knowledge contributions.
WWW Companion ’25, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162580</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Constant-Rate Entanglement Distillation for Fast Quantum Interconnects</title>
<link>https://hdl.handle.net/1721.1/162579</link>
<description>Constant-Rate Entanglement Distillation for Fast Quantum Interconnects
Pattison, Christopher; Baranes, Gefen; Bonilla Ataides, Juan Pablo; Lukin, Mikhail D.; Zhou, Hengyun
Distributed quantum computing allows the modular construction of large-scale quantum computers and enables new protocols for blind quantum computation. However, such applications in the large-scale, fault-tolerant regime place stringent demands on the fidelity and rate of entanglement generation, which are not met by existing methods for quantum interconnects.&#13;
In this work, we develop constant-rate entanglement distillation methods to address this bottleneck in the setting of noisy local operations. By using a sequence of two-way entanglement distillation protocols based on quantum error detecting codes with increasing rate, and combining with standard fault tolerance techniques, we achieve constant-rate entanglement distillation. We show that the scheme achieves a constant rate in expectation, and further numerically optimize to achieve low practical overhead under memory constraints. We find that compared to existing quantum interconnect schemes, our methods reduce the communication overhead by more than 10× in relevant regimes, leading to a direct speed-up in the execution of distributed quantum algorithms.
ISCA ’25, Tokyo, Japan
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162579</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Advancing the Science of Teaching with Tutoring Data: A Collaborative Workshop with the National Tutoring Observatory</title>
<link>https://hdl.handle.net/1721.1/162578</link>
<description>Advancing the Science of Teaching with Tutoring Data: A Collaborative Workshop with the National Tutoring Observatory
Thomas, Danielle R.; Demszky, Dorottya; Koedinger, Kenneth R.; Marland, Joshua; Pietrzak, Doug; Reich, Justin; Slama, Rachel; Toutziaridi, Amalia; Kizilcec, René
Effective teaching is among the most powerful influences on student learning, but scientific progress in understanding effective teaching moves has been held back by insufficient data on teaching. Despite extensive research efforts, progress is hindered by persistent challenges related to data de-identification and preprocessing, annotation and segmentation, multimodal analysis, and predictive and causal modeling of student outcomes. Addressing these barriers requires a concerted, interdisciplinary approach. The National Tutoring Observatory (NTO) is a first-of-its-kind research infrastructure designed to unite researchers, developers, tutoring providers, and educational organizations in tackling common barriers to uncovering the dynamics of effective tutoring moves. The NTO is spearheading the creation of the Million Tutor Moves dataset, the largest open-access collection of tutoring interactions, leveraging artificial intelligence to unlock insights that accelerate the science of teaching at scale. This workshop aims to bring together the Learning at Scale community to share progress, identify common challenges, and explore collaborative solutions. The agenda will feature presentations of accepted papers, interactive demos, and a moderated panel bringing together researchers, developers, and tutoring providers. Together, these sessions advance a shared vision for uncovering the fundamental principles of impactful tutoring and teaching through the power of collaborative research and data-driven discovery.
L@S ’25, Palermo, Italy
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162578</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors</title>
<link>https://hdl.handle.net/1721.1/162577</link>
<description>How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors
Ahmad, Mak; Ravi, Prerna; Karger, David; Facciotti, Marc
Providing personalized, detailed feedback at scale in large undergraduate STEM courses remains a persistent challenge. We present an empirically evaluated practice exam system that integrates AI-generated feedback with targeted textbook references, deployed in a large introductory biology course. Our system specifically aims to encourage metacognitive behavior by asking students to explain their answers and declare their confidence. It uses OpenAI’s GPT-4o to generate personalized feedback based on this information, while directing them to relevant textbook sections. Through detailed interaction logs from consenting participants across three midterms (541, 342, and 413 students respectively), totaling 28,313 question-student interactions across 146 learning objectives, along with 279 post-exam surveys and 23 semi-structured interviews, we examined the system’s impact on learning outcomes and student engagement. Analysis showed that across all midterms, the different feedback types showed no statistically significant differences in performance, though there were some trends suggesting potential benefits worth further investigation. The system’s most substantial impact emerged through its required confidence ratings and explanations, which students reported transferring to their actual exam strategies. Approximately 40% of students engaged with textbook references when prompted by feedback—significantly higher than traditional reading compliance rates. Survey data revealed high student satisfaction (M=4.1/5), with 82.1% reporting increased confidence on midterm topics they had practiced, and 73.4% indicating they could recall and apply specific concepts from practice sessions. Our findings demonstrate how thoughtfully designed AI-enhanced systems can scale formative assessment while promoting sustainable study practices and self-regulated learning behaviors, suggesting that embedding structured reflection requirements may be more impactful than sophisticated feedback mechanisms.
L@S ’25, Palermo, Italy
</description>
<pubDate>Thu, 17 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162577</guid>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging the Scientific Knowledge Gap and Reproducibility: A Survey of Provenance, Assertion and Evidence Ontologies</title>
<link>https://hdl.handle.net/1721.1/162576</link>
<description>Bridging the Scientific Knowledge Gap and Reproducibility: A Survey of Provenance, Assertion and Evidence Ontologies
Chhetri, Tek Raj; Halchenko, Yaroslav; Jarecka, Dorota; Trivedi, Puja; Ghosh, Satrajit; Ray, Patrick; Ng, Lydia
The rapid growth of scientific publications and evolving experimental paradigms create significant challenges in staying up-to-date with current advances. Assertions are often unstructured and have limited provenance, which hinders reproducibility. Ontologies and knowledge graphs (KGs) offer structured solutions by capturing assertions, evidence, and provenance to support reproducibility. This paper reviews 23 ontologies (13 focused on assertions and evidence, 10 on provenance), providing an overview of the current landscape while highlighting key challenges and opportunities for improvement.
WWW Companion ’25, Sydney, NSW, Australia
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162576</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Tracing the stepwise Darwinian evolution of a plant halogenase.</title>
<link>https://hdl.handle.net/1721.1/162575</link>
<description>Tracing the stepwise Darwinian evolution of a plant halogenase.
Kim, Colin Y; Kastner, David W; Mitchell, Andrew J; Gutierrez, Michael A; Yao, Jocelyn S; Neumann, Edwin N; Kulik, Heather J; Weng, Jing-Ke
Biohalogenation is rare in plant metabolism, with the Menispermaceae's chloroalkaloid acutumine being an exception. This involves a specialized dechloroacutumine halogenase (DAH) from the iron- and 2-oxoglutarate-dependent dioxygenase (2ODD) family. While DAH is presumed to have evolved from an ancestral 2ODD, how enzyme specialization arises through Darwinian processes remains a fundamental question in understanding metabolic evolution. Here, we investigate the evolutionary history of DAH using the chromosomal-level genome of &lt;i&gt;Menispermum canadense&lt;/i&gt;. Phylogenomic dating and synteny analyses reveal DAH evolution through tandem duplication of an ancestral flavonol synthase (FLS) gene, followed by neofunctionalization and gene loss events. Structural modeling, molecular dynamics, and site-directed mutagenesis identify mutations enabling the catalytic switch from FLS to DAH. This required traversing a complex evolutionary landscape with deep fitness valleys separating intermediate states captured in the &lt;i&gt;M. canadense&lt;/i&gt; genome. Our findings illustrate how enzymatic functions evolve through lineage-specific pathways, reshaping active sites and enabling catalytic mechanism-switching mutations.
</description>
<pubDate>Wed, 13 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162575</guid>
<dc:date>2025-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Bandgap Engineering of a 2D Organic–Inorganic Chalcogenide Semiconductor via Ligand Modification</title>
<link>https://hdl.handle.net/1721.1/162574</link>
<description>Systematic Bandgap Engineering of a 2D Organic–Inorganic Chalcogenide Semiconductor via Ligand Modification
Sakurada, Tomoaki; Paritmongkol, Watcharaphol; Cho, Yeongsu; Lee, Woo Seok; Chatsiri, Petcharaphorn; Oppenheim, Julius J; Wan, Ruomeng; Su, Annlin; Samulewicz, Nicholas; Wannakan, Khemika; Müller, Peter; Dincă, Mircea; Kulik, Heather J; Tisdale, William A
Hybrid organic–inorganic semiconductors present new opportunities for optoelectronic materials design not available in all-organic or all-inorganic materials. One example is silver phenylselenide (AgSePh) – or “mithrene” – a blue-emitting 2D organic–inorganic semiconductor exhibiting strong optical and electronic anisotropy. Here, we show that the bandgap of mithrene can be systematically tuned by introducing electron-donating and electron-withdrawing groups to the phenyl ligands. We synthesized nine mithrene variants, eight of which formed 2D van der Waals crystals analogous to those of AgSePh. Density functional theory calculations reveal that these 2D mithrene variants are direct-gap or nearly direct gap semiconductors. Furthermore, we identify correlations between the optical gap and three experimental observables – the Hammett constant, 77Se chemical shift, and selenium partial charge – offering predictive power for bandgap tuning. These findings highlight new opportunities for applying the tools of chemical synthesis to semiconductor materials design.
</description>
<pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162574</guid>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</item>
<item>
<title>The Open Reaction Database</title>
<link>https://hdl.handle.net/1721.1/162573</link>
<description>The Open Reaction Database
Kearnes, Steven M; Maser, Michael R; Wleklinski, Michael; Kast, Anton; Doyle, Abigail G; Dreher, Spencer D; Hawkins, Joel M; Jensen, Klavs F; Coley, Connor W
Chemical reaction data in journal articles, patents, and even electronic laboratory notebooks are currently stored in various formats, often unstructured, which presents a significant barrier to downstream applications, including the training of machine-learning models. We present the Open Reaction Database (ORD), an open-access schema and infrastructure for structuring and sharing organic reaction data, including a centralized data repository. The ORD schema supports conventional and emerging technologies, from benchtop reactions to automated high-throughput experiments and flow chemistry. The data, schema, supporting code, and web-based user interfaces are all publicly available on GitHub. Our vision is that a consistent data representation and infrastructure to support data sharing will enable downstream applications that will greatly improve the state of the art with respect to computer-aided synthesis planning, reaction prediction, and other predictive chemistry tasks.
</description>
<pubDate>Tue, 02 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162573</guid>
<dc:date>2021-11-02T00:00:00Z</dc:date>
</item>
<item>
<title>Automation and Microfluidics for the Efficient, Fast, and Focused Reaction Development of Asymmetric Hydrogenation Catalysis</title>
<link>https://hdl.handle.net/1721.1/162572</link>
<description>Automation and Microfluidics for the Efficient, Fast, and Focused Reaction Development of Asymmetric Hydrogenation Catalysis
van Putten, Robbert; Eyke, Natalie S; Baumgartner, Lorenz M; Schultz, Victor L; Filonenko, Georgy A; Jensen, Klavs F; Pidko, Evgeny A
Automation and microfluidic tools potentially enable efficient, fast, and focused reaction development of complex chemistries, while minimizing resource- and material consumption. The introduction of automation-assisted workflows will contribute to the more sustainable development and scale-up of new and improved catalytic technologies. Herein, the application of automation and microfluidics to the development of a complex asymmetric hydrogenation reaction is described. Screening and optimization experiments were performed using an automated microfluidic platform, which enabled a drastic reduction in the material consumption compared to conventional laboratory practices. A suitable catalytic system was identified from a library of RuII-diamino precatalysts. In situ precatalyst activation was studied with 1H/31P nuclear magnetic resonance (NMR), and the reaction was scaled up to multigram quantities in a batch autoclave. These reactions were monitored using an automated liquid-phase sampling system. Ultimately, in less than a week of total experimental time, multigram quantities of the target enantiopure alcohol product were provided by this automation-assisted approach.
</description>
<pubDate>Tue, 26 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162572</guid>
<dc:date>2022-04-26T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Optimization of Computer-Proposed Multistep Synthetic Routes on an Automated Robotic Flow Platform</title>
<link>https://hdl.handle.net/1721.1/162571</link>
<description>Bayesian Optimization of Computer-Proposed Multistep Synthetic Routes on an Automated Robotic Flow Platform
Nambiar, Anirudh MK; Breen, Christopher P; Hart, Travis; Kulesza, Timothy; Jamison, Timothy F; Jensen, Klavs F
Computer-aided synthesis planning (CASP) tools can propose retrosynthetic pathways and forward reaction conditions for the synthesis of organic compounds, but the limited availability of context-specific data currently necessitates experimental development to fully specify process details. We plan and optimize a CASP-proposed and human-refined multistep synthesis route toward an exemplary small molecule, sonidegib, on a modular, robotic flow synthesis platform with integrated process analytical technology (PAT) for data-rich experimentation. Human insights address catalyst deactivation and improve yield by strategic choices of order of addition. Multi-objective Bayesian optimization identifies optimal values for categorical and continuous process variables in the multistep route involving 3 reactions (including heterogeneous hydrogenation) and 1 separation. The platform's modularity, robotic reconfigurability, and flexibility for convergent synthesis are shown to be essential for allowing variation of downstream residence time in multistep flow processes and controlling the order of addition to minimize undesired reactivity. Overall, the work demonstrates how automation, machine learning, and robotics enhance manual experimentation through assistance with idea generation, experimental design, execution, and optimization.
</description>
<pubDate>Fri, 10 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162571</guid>
<dc:date>2022-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>REEV SENSE IMUs for Gait Analysis in Stroke: A Clinical Study on Lower Limb Kinematics</title>
<link>https://hdl.handle.net/1721.1/162570</link>
<description>REEV SENSE IMUs for Gait Analysis in Stroke: A Clinical Study on Lower Limb Kinematics
Marsan, Thibault; Clauzade, Sacha; Zhang, Xiang; Grandin, Nicolas; Urman, Tatiana; Linton, Evan; Elsayed-Aly, Ingy; Ricciardi, Catherine E.; Temporelli, Robin
Human gait analysis is essential for clinical evaluation and rehabilitation monitoring, particularly in post-stroke individuals, where joint kinematics provide valuable insights into motor recovery. While optical motion capture (OMC) is the gold standard, its high cost and restricted use in laboratory settings limit its accessibility. This study aimed to evaluate the accuracy of REEV SENSE, a novel magnetometer-free inertial measurement unit (IMU), in capturing knee and ankle joint angles during overground walking in post-stroke individuals using assistive devices. Twenty participants with chronic stroke walked along a 10-m walkway with their usual assistive device (cane or walker), while joint kinematics were simultaneously recorded using OMC and IMUs. Agreement between the systems was assessed using the mean absolute error, root mean square error, 95% confidence intervals, and Pearson’s correlation coefficient. Knee angles measured with the IMUs showed a strong correlation with the OMC (r &gt; 0.9) and low errors (MAE &lt; 5°), consistent with clinical acceptability. Ankle angle accuracy was lower for participants using walkers, while knee measurements remained stable regardless of the assistive device. These findings demonstrate that REEV SENSE IMUs provide clinically relevant kinematic data and support their use as a practical wearable tool for gait analysis in real-world or remote clinical settings.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162570</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Techno-Economic Analysis of Decarbonized Backup Power Systems Using Scenario-Based Stochastic Optimization</title>
<link>https://hdl.handle.net/1721.1/162569</link>
<description>Techno-Economic Analysis of Decarbonized Backup Power Systems Using Scenario-Based Stochastic Optimization
Schweiger, Jonas; Macdonald, Ruaridh
1 MIT Energy Initiative, Massachusetts Institute of Technology, 50 Ames St., Cambridge, MA 02142, USA
2 College of Management of Technology, École Polytechnique Fédérale de Lausanne, Station 5, CH-1015 Lausanne, Switzerland
Energies 2025, 18(16), 4388; https://doi.org/10.3390/en18164388
Submission received: 14 July 2025 / Revised: 4 August 2025 / Accepted: 14 August 2025 / Published: 18 August 2025
In the context of growing concerns about power disruptions, grid reliability and the need for decarbonization, this study evaluates a broad range of clean backup power systems (BPSs) to replace traditional emergency diesel generators. A scenario-based stochastic optimization framework using actual load profiles and outage probabilities is proposed to assess the most promising options from a pool of 27 technologies. This framework allows a comparison of the cost effectiveness and environmental impact of individual technologies and hybrid BPSs across various scenarios. The results highlight the trade-off between total annual system cost and emissions. Significant emission reductions can be achieved at moderate cost increases but deep decarbonization levels incur higher costs. Primary and secondary batteries are included in optimal clean fuel-based systems across all decarbonization levels, combining cost-effective power delivery and long-term storage benefits. The findings highlight the often-overlooked importance of fuel replacement on both emissions and costs. Among the assessed technologies, ammonia generators and hydrogen fuel cells combined with secondary iron–air batteries emerge as cost-effective solutions for achieving decarbonization goals. To ensure a broad range of applicability, the study outlines the impact of emergency fuel purchases, varying demand patterns and demand response options on the optimal BPS. The research findings are valuable for optimizing the design of clean BPSs to economically meet the needs of many facility types and decarbonization targets.
</description>
<pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162569</guid>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics</title>
<link>https://hdl.handle.net/1721.1/162568</link>
<description>Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics
Koukoutsis, Efstratios; Vahala, George; Soe, Min; Hizanidis, Kyriakos; Vahala, Linda; Ram, Abhay K.
Simulating nonlinear classical dynamics on a quantum computer is an inherently challenging task due to the linear operator formulation of quantum mechanics. In this work, we provide a systematic approach to alleviate this difficulty by developing an explicit quantum algorithm that implements the time evolution of a second-order time-discretized version of the Lorenz model. The Lorenz model is a celebrated system of nonlinear ordinary differential equations that has been extensively studied in the contexts of climate science, fluid dynamics, and chaos theory. Our algorithm possesses a recursive structure and requires only a linear number of copies of the initial state with respect to the number of integration time-steps. This provides a significant improvement over previous approaches, while preserving the characteristic quantum speed-up in terms of the dimensionality of the underlying differential equations system, which similar time-marching quantum algorithms have previously demonstrated. Notably, by classically implementing the proposed algorithm, we showcase that it accurately captures the structural characteristics of the Lorenz system, reproducing both regular attractors (limit cycles) and the chaotic attractor within the chosen parameter regime.
</description>
<pubDate>Sun, 17 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162568</guid>
<dc:date>2025-08-17T00:00:00Z</dc:date>
</item>
<item>
<title>A Formal Definition of Scale-Dependent Complexity and the Multi-Scale Law of Requisite Variety</title>
<link>https://hdl.handle.net/1721.1/162567</link>
<description>A Formal Definition of Scale-Dependent Complexity and the Multi-Scale Law of Requisite Variety
Siegenfeld, Alexander F.; Bar-Yam, Yaneer
Ashby’s law of requisite variety allows a comparison of systems with their environments, providing a necessary (but not sufficient) condition for system efficacy: A system must possess at least as much complexity as any set of environmental behaviors that require distinct responses from the system. However, to account for the dependence of a system’s complexity on the level of detail—or scale—of its description, a multi-scale generalization of Ashby’s law is needed. We define a class of complexity profiles (complexity as a function of scale) that is the first, to our knowledge, to exhibit a multi-scale law of requisite variety. This formalism provides a characterization of multi-scale complexity and generalizes the law of requisite variety’s single constraint on system behaviors to a class of multi-scale constraints. We show that these complexity profiles satisfy a sum rule, which reflects a tradeoff between smaller- and larger-scale degrees of freedom, and we extend our results to subdivided systems and systems with a continuum of components.
</description>
<pubDate>Wed, 06 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162567</guid>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</item>
<item>
<title>Can a Global Climate Model Reproduce a Tornado Outbreak Atmospheric Pattern? Methodology and a Case Study</title>
<link>https://hdl.handle.net/1721.1/162544</link>
<description>Can a Global Climate Model Reproduce a Tornado Outbreak Atmospheric Pattern? Methodology and a Case Study
Ćwik, Paulina; McPherson, Renee A.; Li, Funing; Furtado, Jason C.
Tornado outbreaks can cause substantial damage, injuries, and fatalities, highlighting the need to understand their characteristics for assessing present and future risks. However, global climate models (GCMs) lack the resolution to explicitly simulate tornado outbreaks. As an alternative, researchers examine large-scale atmospheric ingredients that approximate tornado-conducive environments. Building on this approach, we tested whether patterns of covariability between WMAXSHEAR and 500-hPa geopotential height anomalies, previously identified in ERA5 reanalysis, could approximate major U.S. May tornado outbreaks in a GCM. We developed a proxy-based methodology by systematically testing pairs of thresholds for both variables to identify the combination that best reproduced the leading pattern selected for analysis. These thresholds were then applied to simulations from the high-resolution MPI-ESM1.2-HR model to assess its ability to reproduce the original pattern. Results show that the model closely mirrored the observed tornado outbreak pattern, as indicated by a low normalized root mean square error, high spatial correlation, and similar distributions. This study demonstrates a replicable approach for approximating tornado outbreak patterns, applied here to the leading pattern, within a GCM, providing a foundation for future research on how such environments might evolve in a warming climate.
</description>
<pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162544</guid>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Suitability of Perfusion-Based PD Probes for Use in Altered Gravity Environments</title>
<link>https://hdl.handle.net/1721.1/162500</link>
<description>Evaluating the Suitability of Perfusion-Based PD Probes for Use in Altered Gravity Environments
MacRobbie, Madelyn; Chen, Vanessa Z.; Paige, Cody; Otuya, David; Stankovic, Aleksandra; Tearney, Guillermo
Measurable changes in electrophysiology have been documented in spaceflight, creating a pathway for disease genesis and progression in astronauts. These electrophysiology changes can be measured using potential difference (PD). A probe to measure PD was developed and is used clinically on Earth; this probe relies on fluid perfusion to establish an electrical connection to make PD measurements. The changes to fluid behavior in microgravity and partial gravity (including lunar and Martian gravity) drive the need to test this probe in a spaceflight environment. Here, we test the PD probe in a novel nasal cavity phantom in parabolic flight, simulating microgravity, lunar gravity, Martian gravity, and hypergravity conditions across 37 parabolas. The results are evaluated across gravity conditions using the Wilcoxon Rank Sum test. We record no statistically significant difference in probe PD measurements in 1 g, microgravity, lunar gravity, and hypergravity (approximately 1.8 g) conditions, reaching a NASA Technology Readiness Level 6. Martian gravity findings are inconclusive. Perfusion-based PD probes are therefore successfully demonstrated for use in spaceflight operation in microgravity, lunar gravity, and hypergravity environments; this establishes a foundation for moving towards the in-space testing of perfusion-based probes in astronauts.
</description>
<pubDate>Thu, 24 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162500</guid>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Topology optimization of compliant mechanisms using augmented IFEM with adaptive mesh refinement and level set method</title>
<link>https://hdl.handle.net/1721.1/162499</link>
<description>Topology optimization of compliant mechanisms using augmented IFEM with adaptive mesh refinement and level set method
Soltani, Zahra; Frecker, Mary
This paper presents an effective topology optimization framework for the design of compliant mechanisms, integrating the immersed finite element method with adaptive mesh refinement and a radial basis function (RBF)-interpolated level set method. The proposed approach addresses the challenges of representing complex material boundaries and enhancing resolution in critical interface regions, which are common in the optimization of compliant mechanisms. By leveraging the global support properties of RBFs, the method efficiently captures global changes in response to local adjustments in the level set function, resulting in a fast convergence to optimal designs. Parameterizing the level set function with global interpolation radial basis functions enables smooth variations of the function across the entire design domain during iterations. This capability holds significant importance, particularly in the context of topology optimization of compliant mechanisms, where intricate geometries with complex shapes and features may arise. The effectiveness of the proposed method is demonstrated through numerical examples, showcasing its ability to produce the optimum design starting from various initial configurations.
</description>
<pubDate>Mon, 21 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162499</guid>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Detection efficiency of fish tracking by autonomous sailboat while underway</title>
<link>https://hdl.handle.net/1721.1/162498</link>
<description>Detection efficiency of fish tracking by autonomous sailboat while underway
Hung, Ching-Tang; Sacarny, Michael J.; Zarrella-Smith, Katrina A.; Jordaan, Adrian; Benjamin, Michael R.; Triantafallou, Michael S.; Chen, Chi-Fang
Background Acoustic telemetry is a fundamental tool for studying aquatic organisms and offers powerful insights into their behavior across habitats. Researchers can choose from numerous deployment methods to suit specific species and habitats. Yet, shallow waters are particularly challenging in part because the tools available are reduced; most mobile tracking platforms cannot be used due to depth requirements. Furthermore, surface vehicle detection efficiency is limited by noise interference from the surface and the vehicle itself, rendering it an underutilized tool. Therefore, this work improved upon the design of sensor placement and the resulting acoustic detection efficiency of a mobile, near-surface receiver. Results An autonomous sailboat was outfitted with custom software, a common acoustic receiver, and a high-performance hydrophone to survey an area of Boston Harbor, USA for acoustically-tagged winter flounder. To enhance detection, the design incorporated a high-transmissivity, flooded cowling. Increased wind- and wave-induced bubble plumes decreased mobile receiver efficiency as compared to stationary receivers from a concurrent study, and the resulting efficiency was quantified over a range of wind speeds. The mobile receiver detected 10.6% of the known tagged population of winter flounder in less than two days, similar to 11.6% detected collectively by multiple stationary receivers in the same area over the same period. In addition, a probabilistic model using the hydrophone data was developed to estimate and map fish positions within the surveyed habitat while incorporating uncertainty. Conclusions The utility of the autonomous sailboat to track fish without the need to limit mobility or attach the receiver by trailing it at depth is demonstrated here. 
The addition of the sensor cowling minimized drag, shielded the sensors from turbulence, and reduced noise caused by vessel movement, and the hydrophone enabled continuous monitoring of detection efficiency. The fish distribution model has the potential to yield greater accuracy in fish positions as compared to standard receiver-based inferences. Limitations still exist depending on sea state among other factors, as high winds greatly impaired detection efficiency and could impact the distribution estimation. Overall, these results provide essential design and analytical guidance for enhancing acoustic telemetry via surface platforms, providing further potential for broader adoption and innovation in the mobile tracking of aquatic organisms.
</description>
<pubDate>Sat, 14 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162498</guid>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>Networks and interfaces as catalysts for polymer materials innovation</title>
<link>https://hdl.handle.net/1721.1/162495</link>
<description>Networks and interfaces as catalysts for polymer materials innovation
Deagen, Michael E; Walsh, Dylan J; Audus, Debra J; Kroenlein, Kenneth; de Pablo, Juan J; Aou, Kaoru; Chard, Kyle; Jensen, Klavs F; Olsen, Bradley D
Autonomous experimental systems offer a compelling glimpse into a future where closed-loop, iterative cycles—performed by machines and guided by artificial intelligence (AI) and machine learning (ML)—play a foundational role in materials research and development. This perspective draws attention to the roles of networks and interfaces—of and between humans and machines—for the purpose of generating knowledge and accelerating innovation. Polymers, a class of materials with massive global impact, present a unique opportunity for the application of informatics and automation to pressing societal challenges. To develop these networks and interfaces in polymer science, the Community Resource for Innovation in Polymer Technology (CRIPT)—a polymer data ecosystem based on a novel polymer data model, representation, search, and visualization technologies—is introduced. The ongoing co-design efforts engage stakeholders in industry, academia, and government to uncover rapidly actionable, high-impact opportunities to build networks, bridge interfaces, and catalyze innovation in polymer technology.
</description>
<pubDate>Wed, 16 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162495</guid>
<dc:date>2022-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Machine-Learning-Guided Discovery of Electrochemical Reactions</title>
<link>https://hdl.handle.net/1721.1/162494</link>
<description>Machine-Learning-Guided Discovery of Electrochemical Reactions
Zahrt, Andrew F; Mo, Yiming; Nandiwale, Kakasaheb Y; Shprints, Ron; Heid, Esther; Jensen, Klavs F
The molecular structures synthesizable by organic chemists dictate the molecular functions they can create. The invention and development of chemical reactions are thus critical for chemists to access new and desirable functional molecules in all disciplines of organic chemistry. This work seeks to expedite the exploration of emerging areas of organic chemistry by devising a machine-learning-guided workflow for reaction discovery. Specifically, this study uses machine learning to predict competent electrochemical reactions. To this end, we first develop a molecular representation that enables the production of general models with limited training data. Next, we employ automated experimentation to test a large number of electrochemical reactions. These reactions are categorized as competent or incompetent mixtures, and a classification model was trained to predict reaction competency. This model is used to screen 38,865 potential reactions in silico, and the predictions are used to identify a number of reactions of synthetic or mechanistic interest, 80% of which are found to be competent. Additionally, we provide the predictions for the 38,865-member set in the hope of accelerating the development of this field. We envision that adopting a workflow such as this could enable the rapid development of many fields of chemistry.
</description>
<pubDate>Wed, 14 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162494</guid>
<dc:date>2022-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Heterogeneous photochemical reaction enabled by an ultrasonic microreactor</title>
<link>https://hdl.handle.net/1721.1/162493</link>
<description>Heterogeneous photochemical reaction enabled by an ultrasonic microreactor
Udepurkar, Aniket P; Nandiwale, Kakasaheb Y; Jensen, Klavs F; Kuhn, Simon
The presence of solids as starting reagents/reactants or products in flow photochemical reactions can lead to reactor clogging and yield reduction from side reactions. We address this limitation with a new ultrasonic microreactor for continuous solid-laden photochemical reactions. The ultrasonic photochemical microreactor is characterized by the liquid and solid residence time distribution (RTD) and the absorbed photon flux in the reactor via chemical actinometry. The solid-handling capability of the ultrasonic photochemical microreactor is demonstrated with a silyl radical-mediated metallaphotoredox cross-electrophile coupling with a solid base as a reagent.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162493</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating autonomy into automated research platforms</title>
<link>https://hdl.handle.net/1721.1/162492</link>
<description>Integrating autonomy into automated research platforms
Canty, Richard B; Koscher, Brent A; McDonald, Matthew A; Jensen, Klavs F
Integrating automation and autonomy into self-driving laboratories promises more efficient and reproducible experimentation while freeing scientists to focus on intellectual challenges. In the rapid advances being made towards self-driving laboratories, automation and autonomy techniques are often convoluted due to similarities between them and ambiguous language, leaving the trade-offs between them overlooked. In this perspective, we address differences between making a process occur without human intervention (automation) and providing agency and flexibility in action (autonomy). We describe the challenges of autonomy in terms of (1) orchestration, how tasks are organized and coordinated; (2) facilitation, how devices are connected and brought under automated control; and (3) scripting languages, how workflows are encoded into digital representations. Autonomous systems require advanced control architectures to handle a reactive, evolving workflow, involving control abstractions and scheduling beyond what current automation approaches provide. The specification of an autonomous system requires goal-oriented commands and context awareness, whereas automation needs exact, unambiguous instructions for reproducibility and efficiency. We contend that this contrast in design creates a need for improved standards in automation and a set of guiding principles to facilitate the development of autonomy-enabling technologies.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162492</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel multi-droplet platform for reaction kinetics and optimization</title>
<link>https://hdl.handle.net/1721.1/162491</link>
<description>Parallel multi-droplet platform for reaction kinetics and optimization
Eyke, Natalie S; Schneider, Timo N; Jin, Brooke; Hart, Travis; Monfette, Sebastien; Hawkins, Joel M; Morse, Peter D; Howard, Roger M; Pfisterer, David M; Nandiwale, Kakasaheb Y; Jensen, Klavs F
We present an automated droplet reactor platform possessing parallel reactor channels and a scheduling algorithm that orchestrates all of the parallel hardware operations and ensures droplet integrity as well as overall efficiency. We design and incorporate all of the necessary hardware and software to enable the platform to be used to study both thermal and photochemical reactions. We incorporate a Bayesian optimization algorithm into the control software to enable reaction optimization over both categorical and continuous variables. We demonstrate the capabilities of both the preliminary single-channel and parallelized versions of the platform using a series of model thermal and photochemical reactions. We conduct a series of reaction optimization campaigns and demonstrate rapid acquisition of the data necessary to determine reaction kinetics. The platform is flexible in terms of use case: it can be used either to investigate reaction kinetics or to perform reaction optimization over a wide range of chemical domains.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162491</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modest functional diversity decline and pronounced composition shifts of microbial communities in a mixed waste-contaminated aquifer</title>
<link>https://hdl.handle.net/1721.1/162490</link>
<description>Modest functional diversity decline and pronounced composition shifts of microbial communities in a mixed waste-contaminated aquifer
Fan, Yupeng; Wang, Dongyu; Yang, Joy X.; Ning, Daliang; He, Zhili; Zhang, Ping; Rocha, Andrea M.; Xiao, Naijia; Michael, Jonathan P.; Walker, Katie F.; Joyner, Dominique C.; Pan, Chongle; Adams, Michael W. W.; Fields, Matthew W.; Alm, Eric J.; Stahl, David A.
Background Microbial taxonomic diversity declines with increased environmental stress. Yet, few studies have explored whether phylogenetic and functional diversities track taxonomic diversity along the stress gradient. Here, we investigated microbial communities within an aquifer in Oak Ridge, Tennessee, USA, which is characterized by a broad spectrum of stressors, including extremely high levels of nitrate, heavy metals like cadmium and chromium, radionuclides such as uranium, and extremely low pH (&lt; 3). Results Both taxonomic and phylogenetic α-diversities were reduced in the most impacted wells, while the decline in functional α-diversity was modest and statistically insignificant, indicating a more robust buffering capacity to environmental stress. Differences in functional gene composition (i.e., functional β-diversity) were pronounced in highly contaminated wells, while convergent functional gene composition was observed in uncontaminated wells. The relative abundances of most carbon degradation genes were decreased in contaminated wells, but genes associated with denitrification, adenylylsulfate reduction, and sulfite reduction were increased. Compared to taxonomic and phylogenetic compositions, environmental variables played a more significant role in shaping functional gene composition, suggesting that niche selection could be more closely related to microbial functionality than taxonomy. Conclusions Overall, we demonstrated that despite a reduced taxonomic α-diversity, microbial communities under stress maintained functionality underpinned by environmental selection.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162490</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Designing transit routes based on vehicle routing behavior determined through location-based services data</title>
<link>https://hdl.handle.net/1721.1/162489</link>
<description>Designing transit routes based on vehicle routing behavior determined through location-based services data
Tang, Yuhan; Alhadlaq, Abdullah; Bagabaldo, Alben R.; Gonzalez, Marta C.
The disparity between transit agency travel predictions and the unpredictable nature of real-world travel behavior contributes to inefficiencies within the transit system. To address this challenge, we propose a bottom-up transit planning approach that leverages extensive Location-Based Services (LBS) data and General Transit Feed Specification (GTFS) data for Dallas, Texas. The LBS dataset used in this study comprises approximately 12.43 billion records from 6.5 million users. This rich dataset is combined with GTFS data to analyze vehicle routing behavior and identify transit supply gaps. Hidden Markov Model (HMM)-based map matching aligns the LBS trajectories with a road network extracted from OpenStreetMap, allowing us to compare user demand against bus service frequency based on GTFS. To design transit improvements, we first apply k-means clustering based on Euclidean distances to group underserved road segments, and then refine these groups using a shortest-path-based clustering algorithm. This second step explicitly incorporates the actual connectivity of the road network, ensuring that proposed transit routes follow realistic travel paths. Our evaluation indicates that the proposed transit routes, whether via route extensions or new bus lines, can substantially serve the underserved areas and have the potential to significantly reduce Vehicle Miles Traveled (VMT).
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162489</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Application of the digital annealer unit in optimizing chemical reaction conditions for enhanced production yields</title>
<link>https://hdl.handle.net/1721.1/162488</link>
<description>Application of the digital annealer unit in optimizing chemical reaction conditions for enhanced production yields
Li, Shih-Cheng; Wang, Pei-Hua; Su, Jheng-Wei; Chiang, Wei-Yin; Yeh, Tzu-Lan; Zhavoronkov, Alex; Huang, Shih-Hsien; Lin, Yen-Chu; Ou, Chia-Ho; Chen, Chih-Yu
Finding optimal reaction conditions is crucial for chemical synthesis in the pharmaceutical and chemical industries. However, due to the vast chemical space, conducting experiments for all the possible combinations is impractical. Thus, quantitative structure–activity relationship (QSAR) models have been widely used to predict product yields, but evaluating all combinations is still computationally intensive. In this work, we demonstrate that a Digital Annealer Unit (DAU) can tackle these large-scale optimization problems more efficiently. Two types of models are developed and tested on high-throughput experimentation (HTE) and Reaxys datasets. Our results suggest that the performance of models is comparable to classical machine learning (ML) methods (i.e., Random Forest and Multilayer Perceptron (MLP)), while the inference time of our models requires only seconds with a DAU. In active learning and autonomous reaction condition design, our model shows improvement for reaction yield prediction by incorporating new data, meaning that it can potentially be used in iterative processes. Our method can also accelerate the screening of billions of reaction conditions, achieving speeds millions of times faster than traditional computing units in identifying superior conditions.
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162488</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Taper: Creative Constraints and Minimalist Design in a Computational Poetry Publication</title>
<link>https://hdl.handle.net/1721.1/162487</link>
<description>Taper: Creative Constraints and Minimalist Design in a Computational Poetry Publication
Chang, Angela; Montfort, Nick
In an era defined by rapid technological evolution, digital publications are not only effective means of distribution; they also advance creativity, collaboration, and cultural impact.&#13;
This paper explores the seven-year journey of Taper, a magazine for computational poetry, broadly defined, that invites computational creativity and uses a minimal design. By embracing deliberate constraints, including a restriction on program/poem size and different themes for different issues, Taper fosters innovation through remix culture, experimentation, and collaboration. These approaches nurture a dynamic community of practice at the intersection of literary art and programming while advancing grassroots strategies for sustainable growth and long-term viability. Reflecting on Taper’s evolution, this paper illustrates how minimalist design principles and computational frameworks can amplify creative expression, strengthen community engagement, and cultivate ecosystems capable of addressing pressing societal challenges. These findings demonstrate how a collectively-edited project can spur artistic innovation and “creativity for change,” enabling lasting impact in a shifting creative landscape.
C&amp;C ’25, Virtual, United Kingdom
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162487</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>CellMemory: hierarchical interpretation of out-of-distribution cells using bottlenecked transformer</title>
<link>https://hdl.handle.net/1721.1/162486</link>
<description>CellMemory: hierarchical interpretation of out-of-distribution cells using bottlenecked transformer
Wang, Qifei; Zhu, He; Hu, Yiwen; Chen, Yanjie; Wang, Yuwei; Li, Guochao; Li, Yun; Chen, Jinfeng; Zhang, Xuegong; Zou, James; Kellis, Manolis; Li, Yue; Liu, Dianbo; Jiang, Lan
Machine learning methods, especially Transformer architectures, have been widely employed in single-cell omics studies. However, interpretability and accurate representation of out-of-distribution (OOD) cells remain challenging. Inspired by the global workspace theory in cognitive neuroscience, we introduce CellMemory, a bottlenecked Transformer with improved generalizability designed for the hierarchical interpretation of OOD cells. Without pre-training, CellMemory outperforms existing single-cell foundation models and accurately deciphers spatial transcriptomics at high resolution. Leveraging its robust representations, we further elucidate malignant cells and their founder cells across patients, providing reliable characterizations of the cellular changes caused by the disease.
</description>
<pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162486</guid>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>A Spectrum of Inclusion: Climate and Environmental Justice Teaching in a Technical Context</title>
<link>https://hdl.handle.net/1721.1/162485</link>
<description>A Spectrum of Inclusion: Climate and Environmental Justice Teaching in a Technical Context
Rabe, Christopher; Schlegel, Madeline; Reich, Julia; Mclean, Ashanti; Girand, Olivia; Webster, Claire; Araujo-Elorza, Sabrina
Although a substantial body of research has examined how sustainability is integrated into higher education curricula, relatively few studies have explored how instructors incorporate climate and environmental justice (CEJ) content and pedagogies across disciplines. Emerging scholarship has begun to address how interdisciplinary environmental and sustainability (IES) programs and institutional sustainability requirements include environmental justice (EJ) course content. Yet when CEJ knowledge is not broadly embedded across disciplines by IES faculty or institutional leaders, the resulting curricular gaps often exclude the content, practices, and lived experiences of those most affected by global environmental and climate challenges. This exclusion, especially common in STEM-focused areas, can have significant consequences for underrepresented students. This study investigates these dynamics within a STEM institutional context, examining how faculty and instructors incorporated CEJ content and pedagogy across 11 courses at a four-year, private, technical institution. Using a case-study approach and multiple data sources, the study identifies four clusters within a spectrum of CEJ inclusion, each characterized by distinct experiential pedagogies such as community engagement, transdisciplinary methods, and the use of diverse epistemologies. The findings also reveal that CEJ content was most often integrated into STEM courses through collaboration with the social sciences and humanities. Based on these results, the authors offer recommendations for instructors and campus leaders, including improving CEJ visibility in course catalogs, strengthening CEJ integration within STEM and computer science, expanding initiatives and training focused on community engagement, and formally recognizing the contributions of community partners through instructional titles.
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162485</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>SlideCNA: spatial copy number alteration detection from Slide-seq-like spatial transcriptomics data</title>
<link>https://hdl.handle.net/1721.1/162484</link>
<description>SlideCNA: spatial copy number alteration detection from Slide-seq-like spatial transcriptomics data
Zhang, Diane; Segerstolpe, Åsa; Slyper, Michal; Waldman, Julia; Murray, Evan; Strasser, Robert; Watter, Jan; Cohen, Ofir; Ashenberg, Orr; Abravanel, Daniel; Jané-Valbuena, Judit; Mages, Simon; Lako, Ana
Solid tumors are spatially heterogeneous in their genetic, molecular, and cellular composition, but recent spatial profiling studies have mostly charted genetic and RNA variation in tumors separately. To leverage the potential of RNA to identify copy number alterations (CNAs), we develop SlideCNA, a computational tool to extract CNA signals from sparse spatial transcriptomics data at near-single-cell resolution. SlideCNA uses expression-aware spatial binning to overcome sparsity limitations while maintaining spatial signal to recover CNA patterns. We test SlideCNA on simulated and real Slide-seq data of (metastatic) breast cancer and demonstrate its potential for spatial subclone detection.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162484</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>InSituCor: exploring spatially correlated genes conditional on the cell type landscape</title>
<link>https://hdl.handle.net/1721.1/162483</link>
<description>InSituCor: exploring spatially correlated genes conditional on the cell type landscape
Danaher, Patrick; McGuire, Dan; Wu, Lidan; Patrick, Michael; Kroeppler, David; Zhai, Haiyan; Olgun, Deniz G.; Gong, Dennis; Cao, Jingyi; Hwang, William L.; Schmid, Joachim; Beechem, Joseph M.
In spatial transcriptomics data, spatially correlated genes promise to reveal high-interest phenomena like cell–cell interactions and latent variables. But in practice, most spatial correlations arise from the spatial arrangement of cell types, obscuring the more interesting relationships we hope to discover. We introduce InSituCor, a toolkit for discovering modules of spatially correlated genes. InSituCor returns only correlations not explainable by already-known factors like the cell type landscape; this spares precious analyst effort. InSituCor supports both unbiased discovery of whole-dataset correlations and knowledge-driven exploration of genes of interest. As a special case, it evaluates ligand-receptor pairs for spatial co-regulation.
</description>
<pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162483</guid>
<dc:date>2025-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Open-Source Chromatographic Data Analysis for Reaction Optimization and Screening</title>
<link>https://hdl.handle.net/1721.1/162482</link>
<description>Open-Source Chromatographic Data Analysis for Reaction Optimization and Screening
Haas, Christian P; Lübbesmeyer, Maximilian; Jin, Edward H; McDonald, Matthew A; Koscher, Brent A; Guimond, Nicolas; Di Rocco, Laura; Kayser, Henning; Leweke, Samuel; Niedenführ, Sebastian; Nicholls, Rachel; Greeves, Emily; Barber, David M; Hillenbrand, Julius; Volpin, Giulio; Jensen, Klavs F
Automation and digitalization solutions in the field of small molecule synthesis face new challenges for chemical reaction analysis, particularly in high-performance liquid chromatography (HPLC). Chromatographic data remains locked in vendors' hardware and software components, limiting its potential in automated workflows and data science applications. In this work, we present an open-source Python project called MOCCA for the analysis of HPLC-DAD (photodiode array detector) raw data. MOCCA provides a comprehensive set of data analysis features, including an automated routine for deconvolution of known signals, even when they overlap with signals of unexpected impurities or side products. We highlight the broad applicability of MOCCA in four studies: (i) a simulation study to validate MOCCA's data analysis features; (ii) a reaction kinetics study on a Knoevenagel condensation reaction demonstrating MOCCA's peak deconvolution feature; (iii) a closed-loop optimization study for the alkylation of 2-pyridone without human control during data analysis; (iv) a well plate screening of categorical reaction parameters for a novel palladium-catalyzed cyanation of aryl halides employing &lt;i&gt;O&lt;/i&gt;-protected cyanohydrins. By publishing MOCCA as a Python package with this work, we envision an open-source community project for chromatographic data analysis with the potential to further advance its scope and capabilities.
</description>
<pubDate>Thu, 09 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162482</guid>
<dc:date>2023-02-09T00:00:00Z</dc:date>
</item>
<item>
<title>Community Resource for Innovation in Polymer Technology (CRIPT): A Scalable Polymer Material Data Structure</title>
<link>https://hdl.handle.net/1721.1/162481</link>
<description>Community Resource for Innovation in Polymer Technology (CRIPT): A Scalable Polymer Material Data Structure
Walsh, Dylan J; Zou, Weizhong; Schneider, Ludwig; Mello, Reid; Deagen, Michael E; Mysona, Joshua; Lin, Tzyy-Shyang; de Pablo, Juan J; Jensen, Klavs F; Audus, Debra J; Olsen, Bradley D
Polymeric materials are integral components of nearly every aspect of modern life. However, developing cheminformatic solutions for polymers has been difficult since they are large stochastic molecules with hierarchical structures spanning multiple length scales. Here we present the design for a general material data model that underpins the Community Resource for Innovation in Polymer Technology (CRIPT) data ecosystem.
</description>
<pubDate>Mon, 20 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162481</guid>
<dc:date>2023-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Automated processing of chromatograms: a comprehensive python package with a GUI for intelligent peak identification and deconvolution in chemical reaction analysis</title>
<link>https://hdl.handle.net/1721.1/162480</link>
<description>Automated processing of chromatograms: a comprehensive python package with a GUI for intelligent peak identification and deconvolution in chemical reaction analysis
Obořil, Jan; Haas, Christian P; Lübbesmeyer, Maximilian; Nicholls, Rachel; Gressling, Thorsten; Jensen, Klavs F; Volpin, Giulio; Hillenbrand, Julius
Reaction screening and high-throughput experimentation (HTE) coupled with liquid chromatography (HPLC and UHPLC) are becoming more important than ever in synthetic chemistry. With a growing number of experiments, it is increasingly difficult to ensure correct peak identification and integration, especially due to unknown side components which often overlap with the peaks of interest. We developed an improved version of the MOCCA Python package with a web-based graphical user interface (GUI) for automated processing of chromatograms, including baseline correction, intelligent peak picking, peak purity checks, deconvolution of overlapping peaks, and compound tracking. The individual automatic processing steps have been improved compared to the previous version of MOCCA to make the software more dependable and versatile. The algorithm accuracy was benchmarked using three datasets and compared to the previous MOCCA implementation and published results. The processing is fully automated with the possibility to include calibration and internal standards. The software supports chromatograms with photo-diode array detector (DAD) data from most commercial HPLC systems, and the Python package and GUI implementation are open-source to allow addition of new features and further development.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162480</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A father’s crusade in rare disease drug development: a case study of Elpida therapeutics and Melpida</title>
<link>https://hdl.handle.net/1721.1/162479</link>
<description>A father’s crusade in rare disease drug development: a case study of Elpida therapeutics and Melpida
Portero, Deanna; Xu, Qingyang; Hussain, Aaliya; Lo, Andrew W.
Therapeutic development for rare diseases is difficult for pharmaceutical companies due to significant scientific challenges, extensive costs, and low financial returns. It is increasingly common for caregivers and patient advocacy groups to partner with biomedical professionals to finance and develop treatments for rare diseases. This case study illustrates the story of Terry Pirovolakis, a father who partnered with biomedical professionals to develop the novel gene therapy, Melpida, within 36 months of the diagnosis of his infant son. We identify the factors that led to the success of Melpida and analyze the business model of Elpida Therapeutics, a social purpose corporation founded by Pirovolakis to reproduce the success of Melpida for other rare diseases. We conclude with four lessons from Melpida to inform caregivers like Pirovolakis on developing novel gene therapies to save their loved ones.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162479</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Novel presentation and pathophysiology of heavy parasitic burdens in Weddell seals (Leptonychotes weddellii) during sedation</title>
<link>https://hdl.handle.net/1721.1/162478</link>
<description>Novel presentation and pathophysiology of heavy parasitic burdens in Weddell seals (Leptonychotes weddellii) during sedation
Shero, Michelle R.; Burek-Huntington, Kathy; McCorkell, Robert; Nadler, Steven A.; Rzucidlo, Caroline L.; Klink, Amy C.; Hindle, Allyson G.; Burns, Jennifer M.; Johnson, Shawn
Background Marine mammals act as sentinel species, with top predators’ overall health reflecting their ecosystem, integrated across multiple trophic levels. Yet apparently healthy wild animals may have significant subclinical pathology that goes undetected due to unknown medical histories. Marine mammals, particularly phocid seals, often suffer from heavy parasite burdens. While there are documented cases of severe respiratory infections resulting in complications during sedation, there have been no reports of gastrointestinal parasites contributing to poor outcomes during examinations requiring sedation or anesthesia. This report describes two unique presentations of high intestinal parasite loads that purportedly predisposed Weddell seals (Leptonychotes weddellii) to complications under sedation, and characterizes the underlying pathology. Case presentation Two adult female Weddell seals exhibited prolonged apnea and vomiting while under intravenous sedation, which led to aspiration and mortality despite resuscitation attempts. Post-mortem examination revealed a severe Diphyllobothrium tapeworm impaction in the duodenum, with the parasitic mass causing a partial or complete obstruction. In both cases, the stomach was remarkably distended, suggesting the parasitic mass slowed gastric emptying. Both animals’ stomachs contained a high parasite burden, with roundworms embedded in the mucosa. Histological analysis identified underlying pathological conditions that were likely parasite-related, including chronic pneumonia associated with lungworm infestations; reactive, depleted, and fibrosed lymph nodes; granulomatous lymphadenitis; and hepatitis. Further examination in one of the animals revealed severe gastritis and necrotizing duodenitis at the site of the cestode infection. Conclusions To our knowledge, this is the first description of a significant gastrointestinal parasitic impaction being linked to acute distress during sedation in a marine mammal.
We provide an in-situ depiction of the severe cestode infection. It is noteworthy that both animals in this case study exhibited histopathology consistent with chronic inflammation across multiple organ systems. Whether animals were sufficiently immunocompromised that rapid parasite growth became unchecked, or whether the parasite infestation led to dysfunction in other organs remains unresolved. We discuss the potential for premedication with prokinetic agents that increase esophageal sphincter tone to mitigate complications in future late-summer Weddell seal handlings.
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162478</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>The story of pain in people with dementia: a rationale for digital measures</title>
<link>https://hdl.handle.net/1721.1/162477</link>
<description>The story of pain in people with dementia: a rationale for digital measures
Patrascu, Monica; Berge, Line I.; Vahia, Ipsit V.; Marty, Brice; Achterberg, Wilco P.; Allore, Heather; Fletcher, Richard R.; Husebo, Bettina S.
Background The increasingly older world population presents new aging-related challenges, especially for persons with dementia unable to express their suffering. Pain intensity and the effect of pain treatment are difficult to assess via proxy rating and both under- and overtreatment lead to neuropsychiatric symptoms, inactivity, care-dependency and reduced quality of life. In this debate piece, we provide a rationale on why valid digitalization, sensing technology, and artificial intelligence should be explored to improve the assessment of pain in people with dementia. Main text In dementia care, traditional pain assessment relies on observing the manifestations of typical pain behavior. At the same time, pain treatment is complicated by polypharmacy, potential side effects, and a lack of around-the-clock, timely measures. But proper pain treatment requires objective and accurate measures that capture both the levels of pain and the treatment effects. Sensing systems research for personalized pain assessment is underway, with some promising results regarding associations between physiological signals and pain. Digital phenotyping, making use of everyday sensor data for monitoring health behaviors such as patterns of sleep or movement, has shown potential in clinical trials and for future continuous observation. This emerging approach requires transdisciplinary collaboration between medical and engineering sciences, with user involvement and adherence to ethical practices. Conclusion Digital phenotyping based on physiological parameters and sensing technology may increase pain assessment objectivity in older adults with dementia. This technology must be designed with user involvement and validated; however, it opens possibilities to improve pain relief and care.
</description>
<pubDate>Thu, 17 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162477</guid>
<dc:date>2025-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Energy correlators beyond angles</title>
<link>https://hdl.handle.net/1721.1/162476</link>
<description>Energy correlators beyond angles
Alipour-fard, Samuel; Waalewijn, Wouter J.
Energy correlators are theoretically simple and physically intuitive observables that bridge experimental and theoretical particle physics. They have for example enabled the most precise jet substructure determination of the strong coupling constant to date, and recent proposals suggest that they may be used to precisely determine the top quark mass with calculable, small theoretical uncertainties. However, existing energy correlators all measure correlations in angles between particles, from which other observables such as mass must be inferred through potentially complicated procedures. In this work, we generalize energy correlators to enable straightforward measurements of non-angular correlations, which we call Energy Weighted Observable Correlations (EWOCs). To enforce collinear safety, EWOCs quantify correlations between subjets rather than particles. The subjet radius can be tuned to control both the physical scales probed by EWOCs and their sensitivity to non-perturbative physics. We focus on the phenomenologically relevant example of the mass EWOC, which measures mass correlations between pairs of subjets, in the task of extracting mass scales from jets. In jet substructure determinations of the mass of a hadronically-decaying W boson, we show that the mass EWOC outperforms the angle-based energy correlator, and performs comparably to the soft-drop groomed jet mass. As a first exploration of the theoretical properties of EWOCs, we also calculate the mass EWOC on light-quark jets and compare to results obtained with Pythia 8.309.
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162476</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>An entropic puzzle in periodic dilaton gravity and DSSYK</title>
<link>https://hdl.handle.net/1721.1/162475</link>
<description>An entropic puzzle in periodic dilaton gravity and DSSYK
Blommaert, Andreas; Levine, Adam; Mertens, Thomas G.; Papalini, Jacopo; Parmentier, Klaas
We study 2d dilaton gravity theories with a periodic potential, with special emphasis on sine dilaton gravity, which is holographically dual to double-scaled SYK. The periodicity of the potentials implies a symmetry under (discrete) shifts in the momentum conjugate to the length of geodesic slices. This results in divergences. The correct definition is to gauge this symmetry. This discretizes the geodesic lengths. Lengths below a certain threshold are null states. Because of these null states, the entropy deviates drastically from Bekenstein-Hawking and the Hilbert space becomes finite dimensional. The spacetimes have a periodic radial coordinate. These are toy models of 2d quantum cosmology with a normalizable wavefunction. We study two limiting dualities: one between flat space quantum gravity and the Heisenberg algebra, and one between topological gravity and the Gaussian matrix integral. We propose an exact density of states for certain classes of periodic dilaton gravity models.
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162475</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22</title>
<link>https://hdl.handle.net/1721.1/162474</link>
<description>Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22
Cecconi, Maurizio; Greco, Massimiliano; Shickel, Benjamin; Angus, Derek C.; Bailey, Heatherlee; Bignami, Elena; Calandra, Thierry; Celi, Leo A.; Einav, Sharon; Elbers, Paul; Ercole, Ari; Gómez, Hernando; Gong, Michelle N.; Komorowski, Matthieu; Liu, Vincent
Artificial Intelligence (AI) is rapidly transforming the landscape of critical care, offering opportunities for enhanced diagnostic precision and personalized patient management. However, its integration into ICU clinical practice presents significant challenges related to equity, transparency, and the patient-clinician relationship. To address these concerns, a multidisciplinary team of experts was established to assess the current state and future trajectory of AI in critical care. This consensus identified key challenges and proposed actionable recommendations to guide AI implementation in this high-stakes field. Here we present a call to action for the critical care community to bridge the gap between AI advancements and the need for humanized, patient-centred care. Our goal is to ensure a smooth transition to personalized medicine while (1) maintaining equitable and unbiased decision-making, (2) fostering the development of a collaborative research network across ICUs, emergency departments, and operating rooms to promote data sharing and harmonization, and (3) addressing the necessary educational and regulatory shifts required for responsible AI deployment. AI integration into critical care demands coordinated efforts among clinicians, patients, industry leaders, and regulators to ensure patient safety and maximize societal benefit. The recommendations outlined here provide a foundation for the ethical and effective implementation of AI in critical care medicine.
</description>
<pubDate>Tue, 08 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162474</guid>
<dc:date>2025-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Matters Arising: Safety of SARS-CoV-2 vaccination during pregnancy - obstetric outcomes from a large cohort study: methodological biases in study design with potential impact on the study’s interpretation</title>
<link>https://hdl.handle.net/1721.1/162473</link>
<description>Matters Arising: Safety of SARS-CoV-2 vaccination during pregnancy - obstetric outcomes from a large cohort study: methodological biases in study design with potential impact on the study’s interpretation
Levi, Retsef; Schurr, Efrat
In the paper “Safety of SARS-CoV-2 vaccination during pregnancy - obstetric outcomes from a large cohort study”, BMC Pregnancy and Childbirth 22 [1], Dick et al. compare the retrospective maternal and neonatal outcomes of vaccinated and unvaccinated women who delivered in a large tertiary care center in Jerusalem, Israel, during the period from December 2020 through July 2021. Two of the main outcomes in their study are ‘Preterm birth’ (prior to gestational week 37) and ‘intrauterine fetal demise’ (stillbirth) rates.&#13;
&#13;
The authors find no statistical difference between the respective rates of these two outcomes among vaccinated and unvaccinated women with singleton pregnancy and no history of prior SARS-CoV-2 infection (0.87% and 1% stillbirth rates and 5.5% and 6.2% preterm birth rates for vaccinated and unvaccinated women, respectively). They optimistically conclude that “SARS-CoV-2 appears to be safe during pregnancy” since there is no association between vaccination during pregnancy and negative maternal and neonatal outcomes. They do acknowledge, however, that women who were first vaccinated in the second trimester had a higher rate of preterm birth compared to unvaccinated women, and hypothesize that there could be “unmeasured confounding that may contribute to the result”. They also acknowledge the retrospective design as a limitation of the study and call for further studies to address some of the limitations.&#13;
&#13;
The purpose of this Matters Arising is to highlight that the analysis by Dick et al. seems to have several inherent biases with potentially significant impact on the interpretation of the results.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162473</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>In situ manipulation of electron beam irradiation-activated nanoscale tips formation from amorphous and metal modified silica nanowires</title>
<link>https://hdl.handle.net/1721.1/162472</link>
<description>In situ manipulation of electron beam irradiation-activated nanoscale tips formation from amorphous and metal modified silica nanowires
Khan, Imran; Żak, Andrzej M.; Gilani, S. M. S.; Lan, Jinshen; Huang, Shengli
Escalating use of amorphous silica nanowires (a-SiOx NWs) in potential applications demonstrates the demand for novel processing techniques at the nanoscale. Due to their imperfect structure and porous morphology, a-SiOx NWs can be metal-modified, which allows for electrical conduction under visible light. Unfortunately, their brittle nature at room temperature and nanometric size make it challenging to precisely process them and change their shape from an elongated fiber to a sharply pointed tip. Here, energetic electron beam (e-beam) irradiation of a-SiOx NWs and of a-SiOx NWs with gold nanoparticles (Au-NPs) (Au–SiOx NWs) is performed to develop diversely shaped nanoscale tips by optimizing e-beam parameters. Sharp amorphous tips (6 and 11 nm), extremely sharp Au tips (4 and 6 nm), and relatively thick (16 and 18 nm) amorphous tips with average lengths of 50, 30, and 20 nm are formed at the centers of a-SiOx and Au–SiOx NWs when a tightly focused e-beam with a beam spot size (~ 42 nm) equal to the diameters of the NWs is centered at their axes and edge positions, respectively. Au-tip thickening (from 4 or 6 to 22 nm) with a reduction in length (from 20 to 16 nm) is observed when a uniform e-beam with a beam spot size of ~ 200 nm is employed. In-situ electron microscopy evaluation demonstrates that during e-beam processing, evaporation, diffusion, plastic flow, and dewetting are driven by positive curvature and the e-beam activation effect. The combination of beam spot size and position can be used to tailor atomically sharp tips for wide applications, such as interconnects, biochemical sensing, scanning near-field optical microscopes, blue light emitters, and manipulation.
</description>
<pubDate>Sat, 19 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162472</guid>
<dc:date>2025-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>CuddleCard: Protocol for a randomized controlled trial evaluating the effect of providing financial support to low-income mothers of preterm infants on parental caregiving in the neonatal intensive care unit (NICU)</title>
<link>https://hdl.handle.net/1721.1/162471</link>
<description>CuddleCard: Protocol for a randomized controlled trial evaluating the effect of providing financial support to low-income mothers of preterm infants on parental caregiving in the neonatal intensive care unit (NICU)
McConnell, Margaret; Alsager, Alya; Fuchu, Plyce; Sriprasad, Shrivaths; Simoncini, Lindsey; Drainoni, Mari-Lynn; Cordova-Ramos, Erika G.; Peña, Michelle-Marie; Madore, Laura; Kalluri, Nikita S.; Silverstein, Michael; Schofield, Heather; Farah, Martha J.; Fink, Günther; Parker, Margaret G.
Background Preterm birth is a leading cause of childhood mortality and developmental disabilities, with persistent socioeconomic disparities in incidence and outcomes. Maternal presence during prolonged neonatal intensive care unit (NICU) hospitalization is critical for preterm infant health, enabling mothers to provide breast milk, directly breastfeed, and engage in skin-to-skin care—all of which promote infant physiological stability and neurodevelopment. Low-income mothers face significant barriers to visiting the NICU and participating in caregiving due to financial burdens and the psychological impact of financial stress. This randomized controlled trial aims to evaluate the effectiveness of financial transfers in promoting maternal caregiving behaviors that directly impact preterm infant health outcomes during NICU hospitalization. Methods We will conduct a two-arm, single-blinded randomized controlled trial with 420 Medicaid-eligible mothers of infants born between 24 weeks 0 days to 34 weeks 1 day gestation across four Level 3 NICUs in Georgia and Massachusetts. Mothers in the intervention arm will receive standard of care enhanced with weekly financial transfers and will be informed that these funds are intended to help them spend more time with their infants in the NICU. All participants will be provided with a hospital-grade breast pump and educational materials on the benefits of breast milk and skin-to-skin care. Participants will complete surveys during their infant’s hospitalization and following discharge, capturing outcomes related to maternal mental and physical health, caregiving behaviors, cognitive function, financial and socioeconomic factors, infant health and growth, and perceptions of NICU care quality. Primary outcomes are the provision of breast milk and engagement in skin-to-skin care. 
Secondary outcomes include infant growth and health outcomes, NICU visitation, financial and socioeconomic hardship, maternal physical and mental health measures, cognitive function, and perception of NICU care quality. Discussion This study will provide evidence of the impact of financial transfers on maternal caregiving behaviors in the NICU, addressing critical gaps in our understanding of how financial stress affects low-income mothers. Findings may inform health policy, particularly regarding Medicaid coverage of non-medical services, and contribute to understanding how to address disparities in preterm infant care. Trial registration The trial was prospectively registered with the American Economic Association Trial Registry, the primary registry for academic economists conducting policy trials, on 16 April 2024 (AEARCTR-0013256). It was also registered on ClinicalTrials.gov (NCT06362798) on 10 April 2024.
</description>
<pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162471</guid>
<dc:date>2025-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Assessing concordance between RNA-Seq and NanoString technologies in Ebola-infected nonhuman primates using machine learning</title>
<link>https://hdl.handle.net/1721.1/162470</link>
<description>Assessing concordance between RNA-Seq and NanoString technologies in Ebola-infected nonhuman primates using machine learning
Rezapour, Mostafa; Narayanan, Aarthi; Mowery, Wyatt H.; Gurcan, Metin N.
This study evaluates the concordance between RNA sequencing (RNA-Seq) and NanoString technologies for gene expression analysis in non-human primates (NHPs) infected with Ebola virus (EBOV). A detailed comparison of both platforms revealed a strong correlation, with Spearman coefficients for 56 out of 62 samples ranging from 0.78 to 0.88. The mean and median coefficients were 0.83 and 0.85, respectively. Bland-Altman analysis confirmed high consistency across most measurements, with values falling within the 95% limits of agreement. Using a machine learning approach with the Supervised Magnitude-Altitude Scoring (SMAS) method trained on NanoString data, OAS1 was identified as a key gene signature for distinguishing RT-qPCR positive from negative samples. Remarkably, when used as the sole predictor in a logistic regression model, OAS1 maintained its predictive power on RNA-Seq data from the same cohort of EBOV-infected NHPs, achieving 100% accuracy in distinguishing infected from non-infected samples. OAS1 was also tested in a completely independent held-out test set, consisting of human monocyte-derived dendritic cells (DC) isolated and infected with different strains of the Ebola virus: wild-type (wt), VP35m, VP24m, along with a double mutant VP35m &amp; VP24m, and again demonstrated a 100% accuracy rate in differentiating EBOV-infected from mock-infected samples, confirming its effectiveness as a predictive marker across diverse experimental setups and virus strains. Further differential expression analysis across both platforms identified 12 common genes (including ISG15, OAS1, IFI44, IFI27, IFIT2, IFIT3, IFI44L, MX1, MX2, OAS2, RSAD2, and OASL) that showed the highest levels of statistical significance and biological relevance. Gene Ontology (GO) analysis confirmed the involvement of these genes in key immune and viral infection pathways, highlighting their importance in EBOV infection. 
RNA-Seq uniquely identified genes such as CASP5, USP18, and DDX60, which are important in immune regulation and antiviral defense and were not detected by NanoString, demonstrating the broader detection capabilities of RNA-Seq. This study indicates a very strong agreement between RNA-Seq and NanoString platforms in gene expression analysis, with RNA-Seq displaying broader capabilities in identifying gene signatures.
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162470</guid>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>ORAKLE: Optimal Risk prediction for mAke30 in patients with sepsis associated AKI using deep LEarning</title>
<link>https://hdl.handle.net/1721.1/162469</link>
<description>ORAKLE: Optimal Risk prediction for mAke30 in patients with sepsis associated AKI using deep LEarning
Oh, Wonsuk; Veshtaj, Marinela; Sawant, Ashwin; Agrawal, Pulkit; Gomez, Hernando; Suarez-Farinas, Mayte; Oropello, John; Kohli-Seth, Roopa; Kashani, Kianoush; Kellum, John A.; Nadkarni, Girish; Sakhuja, Ankit
Background Major Adverse Kidney Events within 30 days (MAKE30) is an important patient-centered outcome for assessing the impact of acute kidney injury (AKI). Existing prediction models for MAKE30 are static and overlook dynamic changes in clinical status. We introduce ORAKLE, a novel deep-learning model that utilizes evolving time-series data to predict MAKE30, enabling personalized, patient-centered approaches to AKI management and outcome improvement. Methods We conducted a retrospective study using three publicly available critical care databases: MIMIC-IV as the development cohort, and SICdb and eICU-CRD as external validation cohorts. Patients meeting Sepsis-3 criteria who developed AKI within 48 h of intensive care unit admission were identified. Our primary outcome was MAKE30, defined as a composite of death, new dialysis, or persistent kidney dysfunction within 30 days of ICU admission. We developed ORAKLE using the Dynamic DeepHit framework for time-series survival analysis and compared its performance against Cox and XGBoost models. We further assessed model calibration using the Brier score. Results We analyzed 16,671 patients from MIMIC-IV, 2665 from SICdb, and 11,447 from eICU-CRD. ORAKLE outperformed the XGBoost and Cox models in predicting MAKE30, achieving AUROCs of 0.84 (95% CI: 0.83–0.86) vs. 0.81 (95% CI: 0.79–0.83) vs. 0.80 (95% CI: 0.78–0.82) in the MIMIC-IV internal test set, 0.83 (95% CI: 0.81–0.85) vs. 0.80 (95% CI: 0.78–0.83) vs. 0.79 (95% CI: 0.77–0.81) in SICdb, and 0.85 (95% CI: 0.84–0.85) vs. 0.83 (95% CI: 0.83–0.84) vs. 0.81 (95% CI: 0.80–0.82) in eICU-CRD. The AUPRC values for ORAKLE were also significantly better than those of the XGBoost and Cox models. The Brier score for ORAKLE was 0.21 across the internal test set, SICdb, and eICU-CRD, suggesting good calibration. Conclusions ORAKLE is a robust deep-learning model for predicting MAKE30 in critically ill patients with AKI that utilizes evolving time-series data.
By incorporating dynamically changing time series features, the model captures the evolving nature of kidney injury, treatment effects, and patient trajectories more accurately. This innovation facilitates tailored risk assessments and identifies varying treatment responses, laying the groundwork for more personalized and effective management approaches.
</description>
<pubDate>Mon, 26 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162469</guid>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Smartphone Mindfulness Intervention Reduces Anxiety Symptoms and Perceived Stress in Autistic Adults: A Randomized Controlled Trial</title>
<link>https://hdl.handle.net/1721.1/162468</link>
<description>Smartphone Mindfulness Intervention Reduces Anxiety Symptoms and Perceived Stress in Autistic Adults: A Randomized Controlled Trial
Li, Cindy E.; Wang, Kimberly L.; Treves, Isaac N.; Bungert, Lindsay; Gabrieli, John D. E.; Rozenkrantz, Liron
Objectives In-person mindfulness-based interventions (MBIs) have been shown to decrease symptoms of anxiety and stress in autistic adults, who often report high levels of these symptoms. Little is known about the effectiveness of remote MBIs for this population, which may be particularly useful given the common barriers autistic adults face in accessing in-person treatment. This study examined the feasibility and effectiveness of an app-based mindfulness intervention for autistic adults. Method This randomized controlled trial (RCT) examined whether a 6-week remote intervention, using a customized version of the Healthy Minds Program app, reduced symptoms of anxiety and perceived stress in 89 autistic adults. Participants were randomly assigned to either the mindfulness intervention or a wait-list control (WLC) group. The WLC group received the intervention after the RCT. Self-report measures of anxiety, perceived stress, positive and negative affect, and trait mindfulness were administered at several timepoints. Results The mindfulness group showed significant decreases in anxiety symptoms and perceived stress relative to the control group, with medium to large between-group effect sizes (ηp² = 0.07 to 0.14). These benefits, as well as significant decreases in negative affect and increases in trait mindfulness, were replicated when the WLC group subsequently received the intervention, and were retained in both groups 6 weeks after conclusion of the intervention. Conclusions Results demonstrate both the feasibility and effectiveness of a remote mindfulness self-guided intervention for reducing perceived stress and anxiety symptoms in autistic adults. Future research can investigate the specific processes of how such an intervention exerts its effects. Preregistration ClinicalTrials.gov TRN: NCT05880498, 5/30/23, retrospectively registered.
</description>
<pubDate>Tue, 08 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162468</guid>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Interplay of ALP couplings at a muon collider</title>
<link>https://hdl.handle.net/1721.1/162467</link>
<description>Interplay of ALP couplings at a muon collider
Chigusa, So; Girmohanta, Sudhakantha; Nakai, Yuichiro; Zhang, Yufei
Axion-like particles can couple to Standard Model gluons, electroweak gauge bosons, and massive fermions. A future multi-TeV muon collider provides a favorable environment to probe axion-like particles through multiple production channels, including vector boson fusion via electroweak gauge boson couplings and top-associated production mediated by direct fermionic couplings. Motivated by the quality issue of the QCD axion, we focus on axion-like particles with masses and decay constants around the TeV scale. We explore how different axion-like particle couplings shape their production and decay modes, revealing a rich and intricate phenomenological landscape.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162467</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of heavy quarks in strongly coupled N = 4 SYM plasma</title>
<link>https://hdl.handle.net/1721.1/162466</link>
<description>Dynamics of heavy quarks in strongly coupled N = 4 SYM plasma
Rajagopal, Krishna; Scheihing-Hitschfeld, Bruno; Wiedemann, Urs A.
We calculate the probability distribution P(k) for a heavy quark with velocity v propagating through strongly coupled N = 4 SYM plasma in the ’t Hooft limit (Nc → ∞, λ = g²Nc → ∞) at a temperature T to acquire a momentum k due to interactions with the plasma. This distribution encodes the well-known drag coefficient ηD and the transverse and longitudinal momentum diffusion coefficients κT and κL. The jet quenching parameter q̂ can be extracted from P(k) for v = 1. Going beyond these known Gaussian characteristics of P(k), our calculation determines all of the higher order and mixed moments to leading order in 1/√λ for the first time. These non-Gaussian features of P(k) include qualitatively novel correlations between longitudinal energy loss and transverse momentum broadening at nonzero v. We show that all higher moments scale characteristically with an effective temperature of the boosted plasma in the heavy quark rest frame, and we demonstrate that these non-Gaussian characteristics can be sizable in magnitude and even dominant in physically relevant situations. We use these results to derive a Kolmogorov equation for the evolution of the probability distribution for the total momentum of a heavy quark that propagates through strongly coupled plasma. This evolution equation accounts for all higher order correlations between transverse momentum broadening and longitudinal energy loss, which we have calculated from first principles. It reduces to a Fokker-Planck equation when truncated to only include the effects of ηD, κT and κL. Remarkably, while heavy quarks do not reach kinetic equilibrium with the plasma if evolved with this Fokker-Planck equation, by showing that the Boltzmann distribution is a static solution of the all-order Kolmogorov equation that we have derived we demonstrate that heavy quarks do reach kinetic equilibrium if evolved with this equation.
Our results thus provide a dynamically complete framework for understanding the thermalization of a heavy quark that may be initially far from equilibrium in the strongly coupled N = 4 SYM plasma — as well as new insight into heavy quark transport and equilibration in quark-gluon plasma.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162466</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A long exact sequence in symmetry breaking: order parameter constraints, defect anomaly-matching, and higher Berry phases</title>
<link>https://hdl.handle.net/1721.1/162465</link>
<description>A long exact sequence in symmetry breaking: order parameter constraints, defect anomaly-matching, and higher Berry phases
Debray, Arun; Devalapurkar, Sanath K.; Krulewski, Cameron; Liu, Yu L.; Pacheco-Tallaj, Natalia; Thorngren, Ryan
We study defects in symmetry breaking phases, such as domain walls, vortices, and hedgehogs. In particular, we focus on the localized gapless excitations which sometimes occur at the cores of these objects. These are topologically protected by an ’t Hooft anomaly. We classify different symmetry breaking phases in terms of the anomalies of these defects, and relate them to the anomaly of the broken symmetry by an anomaly-matching formula. We also derive the obstruction to the existence of a symmetry breaking phase with a local defect. We obtain these results using a long exact sequence of groups of invertible field theories, which we call the “symmetry breaking long exact sequence” (SBLES). The mathematical backbone of the SBLES is studied in a companion paper [1]. Our work further develops the theory of higher Berry phase and its bulk-boundary correspondence, and serves as a new computational tool for classifying symmetry protected topological phases.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162465</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gauge hierarchy and metastability from Higgs-driven crunching</title>
<link>https://hdl.handle.net/1721.1/162464</link>
<description>Gauge hierarchy and metastability from Higgs-driven crunching
Benevedes, Sean; Ismail, Ameen; Steingasser, Thomas
We present a new solution to the Higgs hierarchy problem based on dynamical vacuum selection in a landscape scanning the Higgs mass. In patches where the Higgs mass parameter takes a natural value, the Higgs potential only admits a minimum with a large and negative energy density. This causes a cosmological crunch, removing such patches from the landscape. Conversely, in patches where the Higgs mass parameter is smaller than a critical value, the Higgs potential admits a metastable minimum with the standard cosmological history. This critical value is determined by the instability scale, where the quartic coupling turns negative due to its running. The ability of this mechanism to explain the observed Higgs mass hinges on new physics at the TeV scale, such as vector-like fermions. We study two simple realizations of this scenario in a heavy neutral lepton model and in the singlet-doublet model, the latter mimicking a Higgsino-bino system. We show that the relevant parts of their parameter spaces can be probed by proposed future colliders, such as the FCC-ee or a muon collider.
</description>
<pubDate>Tue, 24 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162464</guid>
<dc:date>2025-06-24T00:00:00Z</dc:date>
</item>
<item>
<title>Love numbers of black p-branes: fine tuning, Love symmetries, and their geometrization</title>
<link>https://hdl.handle.net/1721.1/162463</link>
<description>Love numbers of black p-branes: fine tuning, Love symmetries, and their geometrization
Charalambous, Panagiotis; Dubovsky, Sergei; Ivanov, Mikhail M.
We compute scalar static response coefficients (Love numbers) of non-dilatonic black p-brane solutions in higher dimensional supergravity. This calculation reveals a fine-tuning behavior similar to that of higher dimensional black holes, which we explain by “hidden” near-zone Love symmetries. In general, these symmetries act on equations for perturbations but they are not background isometries. The Love symmetry of charged p = 0 branes is described by the usual SL(2, ℝ) algebra. For p = 1 the Love symmetry has an algebraic structure SL(2, ℝ) × SL(2, ℝ). The p = 0, 1 Love symmetries reduce to isometries of the near-horizon Schwarzschild-AdSp+2 metric in the near-extremal finite temperature limit. They further reduce to the AdSp+2 isometries in the extremal zero-temperature limit. We call this process geometrization. In contrast, for the p &gt; 1 cases, the Love symmetry is always an SL(2, ℝ), and there is no limit in which it becomes geometric. We interpret geometrization and its absence as a consequence of the local equivalence between the Schwarzschild-AdSp+2 and pure AdSp+2 spaces for p = 0, 1, which does not hold for p &gt; 1. We also show that the static Love numbers of extremal p-branes are always zero regardless of spacetime dimensionality, which contrasts starkly with the non-extremal case. Overall, our results suggest that the Love symmetry is hidden by nature, and it can acquire a geometric meaning only if the background has an AdS2 or AdS3 limit.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162463</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Modular evolutions and causality in two-dimensional conformal field theory</title>
<link>https://hdl.handle.net/1721.1/162462</link>
<description>Modular evolutions and causality in two-dimensional conformal field theory
Jovanovic, Dobrica; Mintchev, Mihail; Tonni, Erik
In two-dimensional conformal field theories (CFT) in Minkowski spacetime, we study the spacetime distance between two events along two distinct modular trajectories. When the spatial line is bipartitioned by a single interval, we consider both the ground state and the state at different finite temperatures for the left- and right-moving excitations. For the free massless Dirac field in the ground state, the bipartition of the line given by the union of two disjoint intervals is also investigated. The modular flows corresponding to connected subsystems preserve relativistic causality. Locality along the modular flows of some fields is explored by evaluating their (anti-)commutators. In particular, the bilocal nature of the modular Hamiltonian of two disjoint intervals for the massless Dirac field provides multiple trajectories leading to Dirac delta contributions in the (anti-)commutators even when the initial points belong to different intervals, thus being spacelike separated.
</description>
<pubDate>Thu, 19 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162462</guid>
<dc:date>2025-06-19T00:00:00Z</dc:date>
</item>
<item>
<title>Quark masses and mixing in string-inspired models</title>
<link>https://hdl.handle.net/1721.1/162461</link>
<description>Quark masses and mixing in string-inspired models
Constantin, Andrei; Fraser-Taliente, Cristofero S.; Harvey, Thomas R.; Leung, Lucas T. Y.; Lukas, Andre
We study a class of supersymmetric Froggatt-Nielsen (FN) models with multiple U(1) symmetries and Standard Model (SM) singlets inspired by heterotic string compactifications on Calabi-Yau threefolds. The string-theoretic origin imposes a particular charge pattern on the SM fields and FN singlets, dividing the latter into perturbative and non-perturbative types. Employing systematic and heuristic search strategies, such as genetic algorithms, we identify charge assignments and singlet VEVs that replicate the observed mass and mixing hierarchies in the quark sector, and subsequently refine the Yukawa matrix coefficients to accurately match the observed values for the Higgs VEV, the quark and charged lepton masses and the CKM matrix. This bottom-up approach complements top-down string constructions and our results demonstrate that string FN models possess a sufficiently rich structure to account for flavour physics. On the other hand, the limited number of distinct viable charge patterns identified here indicates that flavour physics imposes tight constraints on string theory models, adding new constraints on particle spectra that are essential for achieving a realistic phenomenology.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162461</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic flow experiments for Bayesian optimization of a single process objective</title>
<link>https://hdl.handle.net/1721.1/162459</link>
<description>Dynamic flow experiments for Bayesian optimization of a single process objective
Florit, Federico; Nandiwale, Kakasaheb Y; Armstrong, Cameron T; Grohowalski, Katharina; Diaz, Angel R; Mustakis, Jason; Guinness, Steven M; Jensen, Klavs F
A new method, named dynamic experiment optimization (DynO), is developed for the current needs of chemical reaction optimization by leveraging, for the first time, both Bayesian optimization and data-rich dynamic experimentation in flow chemistry. DynO is readily implementable in automated systems and is augmented with simple stopping criteria to guide non-expert users in fast and reagent-efficient optimization campaigns. The developed algorithm is compared in silico with the Dragonfly algorithm and an optimizer based on random selection, showing results in Euclidean design spaces superior to Dragonfly's. Finally, DynO is validated with an ester hydrolysis reaction on an automated platform, showcasing the simplicity of the method.
</description>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162459</guid>
<dc:date>2024-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Bayesian Optimization over Multiple Experimental Fidelities Accelerates Automated Discovery of Drug Molecules</title>
<link>https://hdl.handle.net/1721.1/162458</link>
<description>Bayesian Optimization over Multiple Experimental Fidelities Accelerates Automated Discovery of Drug Molecules
McDonald, Matthew A; Koscher, Brent A; Canty, Richard B; Zhang, Jason; Ning, Angelina; Jensen, Klavs F
Different experiments of differing fidelities are commonly used in the search for new drug molecules. In classic experimental funnels, libraries of molecules undergo sequential rounds of virtual, coarse, and refined experimental screenings, with each level balanced between the cost of experiments and the number of molecules screened. Bayesian optimization offers an alternative approach, using iterative experiments to locate optimal molecules with fewer experiments than large-scale screening, but without the ability to weigh the costs and benefits of different types of experiments. In this work, we combine the multifidelity approach of the experimental funnel with Bayesian optimization to search for drug molecules iteratively, taking full advantage of different types of experiments, their costs, and the quality of the data they produce. We first demonstrate the utility of the multifidelity Bayesian optimization (MF-BO) approach on a series of drug targets with data reported in ChEMBL, emphasizing what properties of the chemical search space result in substantial acceleration with MF-BO. Then we integrate the MF-BO experiment selection algorithm into an autonomous molecular discovery platform to illustrate the prospective search for new histone deacetylase inhibitors using docking scores, single-point percent inhibitions, and dose-response IC50 values as low-, medium-, and high-fidelity experiments. A chemical search space with appropriate diversity and fidelity correlation for use with MF-BO was constructed with a genetic generative algorithm. The MF-BO integrated platform then docked more than 3,500 molecules, automatically synthesized and screened more than 120 molecules for percent inhibition, and selected a handful of molecules for manual evaluation at the highest fidelity. Many of the molecules screened have never been reported in any capacity.
At the end of the search, several new histone deacetylase inhibitors were found with submicromolar inhibition, free of problematic hydroxamate moieties that constrain the use of current inhibitors.
</description>
<pubDate>Wed, 05 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162458</guid>
<dc:date>2025-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>Search for medium effects using jet axis decorrelation in inclusive jets from PbPb collisions at $$\sqrt{{s}_{\text{NN}}}$$ = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/162457</link>
<description>Search for medium effects using jet axis decorrelation in inclusive jets from PbPb collisions at $$\sqrt{{s}_{\text{NN}}}$$ = 5.02 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS collaboration, /
The jet axis decorrelation in inclusive jets is studied using lead-lead (PbPb) collisions at a center-of-mass energy per nucleon pair of 5.02 TeV. The jet axis decorrelation is defined as the angular difference between two definitions of the jet axis. It is obtained by applying two recombination schemes on all the constituents of a given jet reconstructed by the anti-kT sequential algorithm with a distance parameter of R = 0.4. The data set, corresponding to an integrated luminosity of 0.66 nb−1, was collected in 2018 with the CMS detector at the CERN LHC. The jet axis decorrelations are examined across collision centrality selections and intervals of jet transverse momentum. A centrality-dependent evolution of the measured distributions is observed, with a progressive narrowing seen in more central events. This narrowing could result from medium-induced modification of the internal jet structure or reflect color charge effects in energy loss. This new measurement probes jet substructure in previously unexplored kinematic domains and shows great promise for providing new insights into the color charge dependence of energy loss to jet-quenching models.
</description>
<pubDate>Thu, 12 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162457</guid>
<dc:date>2025-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>Search for bosons of an extended Higgs sector in b quark final states in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162456</link>
<description>Search for bosons of an extended Higgs sector in b quark final states in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS Collaboration, /
A search for beyond-the-standard-model neutral Higgs bosons decaying to a pair of bottom quarks, and produced in association with at least one additional bottom quark, is performed with the CMS detector. The data were recorded in proton-proton collisions at a centre-of-mass energy of 13 TeV at the CERN LHC and correspond to an integrated luminosity of 36.7–126.9 fb−1, depending on the probed mass range. No signal above the standard model background expectation is observed. Upper limits on the production cross section times branching fraction are set for Higgs bosons in the mass range of 125–1800 GeV. The results are interpreted in benchmark scenarios of the minimal supersymmetric standard model, as well as suitable classes of two-Higgs-doublet models.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162456</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Angular analysis of B0 → K*0e+e− decays</title>
<link>https://hdl.handle.net/1721.1/162455</link>
<description>Angular analysis of B0 → K*0e+e− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration, /
An angular analysis of B0 → K*0e+e− decays is presented using proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The analysis is performed in the region of the dilepton invariant mass squared of 1.1–6.0 GeV2/c4. In addition, a test of lepton flavour universality is performed by comparing the obtained angular observables with those measured in B0 → K*0μ+μ− decays. In general, the angular observables are found to be consistent with the Standard Model expectations as well as with global analyses of other b → sℓ+ℓ− processes, where ℓ is either a muon or an electron. No sign of lepton-flavour-violating effects is observed.
</description>
<pubDate>Fri, 13 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162455</guid>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>Search for excited tau leptons in the ττγ final state in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162454</link>
<description>Search for excited tau leptons in the ττγ final state in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; The CMS collaboration
Results are presented for a test of the compositeness of the heaviest charged lepton, τ, using data collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 13 TeV at the CERN LHC. The data were collected in 2016–2018 and correspond to an integrated luminosity of 138 fb−1. This analysis searches for tau lepton pair production in which one of the tau leptons is produced in an excited state and decays to a ground state tau lepton and a photon. The event selection consists of two isolated tau lepton decay candidates and a high-energy photon. The mass of the excited tau lepton is reconstructed using the missing transverse momentum in the event, assuming the momenta of the neutrinos from each tau lepton decay are aligned with the visible decay products. No excess of events above the standard model background prediction is observed. This null result is used to set lower bounds on the excited tau lepton mass. For a compositeness scale Λ equal to the excited tau lepton mass, excited tau leptons with masses below 4700 GeV are excluded at 95% confidence level; for Λ = 10 TeV this exclusion is set at 2800 GeV. This is the first experimental result covering this production and decay process in the excited tau mass range above 175 GeV.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162454</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the inhomogeneity of the KATRIN tritium source electric potential by high-resolution spectroscopy of conversion electrons from ⁸³ᵐKr</title>
<link>https://hdl.handle.net/1721.1/162453</link>
<description>Measurement of the inhomogeneity of the KATRIN tritium source electric potential by high-resolution spectroscopy of conversion electrons from ⁸³ᵐKr
KATRIN Collaboration
Precision spectroscopy of the electron spectrum of tritium β-decay near the kinematic endpoint is a direct method to determine the effective electron antineutrino mass. The KArlsruhe TRItium Neutrino (KATRIN) experiment aims to determine this quantity with a sensitivity of better than 0.3 eV (90% C.L.). An inhomogeneous electric potential in the tritium source of KATRIN can lead to distortions of the β-spectrum, which directly impact the neutrino-mass observable. This effect can be quantified through precision spectroscopy of the conversion electrons of co-circulated metastable ⁸³ᵐKr. Therefore, dedicated measurement campaigns, each several weeks long, have been performed within the KATRIN data-taking schedule. In this work, we infer the tritium source potential observables from these measurements and present their implications for the neutrino-mass determination.
</description>
<pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162453</guid>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</item>
<item>
<title>Alternative Shakespeares 3</title>
<link>https://hdl.handle.net/1721.1/162402</link>
<description>Alternative Shakespeares 3
Henderson, Diana E.
</description>
<pubDate>Thu, 18 Oct 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162402</guid>
<dc:date>2007-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Toward smart carbon capture with machine learning</title>
<link>https://hdl.handle.net/1721.1/162399</link>
<description>Toward smart carbon capture with machine learning
Rahimi, Mohammad; Moosavi, Seyed Mohamad; Smit, Berend; Hatton, T Alan
Machine learning (ML) is emerging as a powerful approach that has recently shown potential to affect various frontiers of carbon capture, a key interim technology to assist in the mitigation of climate change. In this perspective, we reveal how ML implementations have improved this process in many aspects, for both absorption- and adsorption-based approaches, ranging from the molecular to process level. We discuss the role of ML in predicting the thermodynamic properties of absorbents and in improving the absorption process. For adsorption processes, we discuss the promises of ML techniques for exploring many options to find the most cost-effective process scheme, which involves choosing a solid adsorbent and designing a process configuration. We also highlight the advantages of ML and the associated risks, elaborate on the importance of the features needed to train ML models, and identify promising future opportunities for ML in carbon capture processes.
</description>
<pubDate>Wed, 21 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162399</guid>
<dc:date>2021-04-21T00:00:00Z</dc:date>
</item>
<item>
<title>CUPID, the CUORE upgrade with particle identification</title>
<link>https://hdl.handle.net/1721.1/162397</link>
<description>CUPID, the CUORE upgrade with particle identification
Alfonso, K.; Armatol, A.; Augier, C.; Avignone III, F. T.; Azzolini, O.; Barabash, A. S.; Bari, G.; Barresi, A.; Baudin, D.; Bellini, F.; Benato, G.; Benussi, L.; Berest, V.; Beretta, M.; Bergé, L.; Bettelli, M.; Biassoni, M.; Billard, J.; Boffelli, F.; Boldrini, V.; CUPID Collaboration
CUPID, the CUORE Upgrade with Particle Identification, is a next-generation experiment to search for neutrinoless double beta decay (0νββ) and other rare events using enriched Li₂¹⁰⁰MoO₄ scintillating bolometers. It will be hosted by the CUORE cryostat located at the Laboratori Nazionali del Gran Sasso in Italy. The main physics goal of CUPID is to search for 0νββ of ¹⁰⁰Mo with a discovery sensitivity covering the full neutrino mass regime in the inverted ordering scenario, as well as the portion of the normal ordering regime with lightest neutrino mass larger than 10 meV. With a conservative background index of 10⁻⁴ cts/(keV·kg·yr), 240 kg isotope mass, 5 keV FWHM energy resolution at 3 MeV and 10 live-years of data taking, CUPID will have a 90% C.L. half-life exclusion sensitivity of 1.8 × 10²⁷ yr, corresponding to an effective Majorana neutrino mass (mββ) sensitivity of 9–15 meV, and a 3σ discovery sensitivity of 1 × 10²⁷ yr, corresponding to an mββ range of 12–21 meV.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162397</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Neutrino interaction vertex reconstruction in DUNE with Pandora deep learning</title>
<link>https://hdl.handle.net/1721.1/162392</link>
<description>Neutrino interaction vertex reconstruction in DUNE with Pandora deep learning
Abud, A. A.; Acciarri, R.; Acero, M. A.; Adames, M. R.; Adamov, G.; Adamowski, M.; Adams, D.; Adinolfi, M.; Adriano, C.; Aduszkiewicz, A.; Aguilar, J.; Akbar, F.; Alemanno, F.; Alex, N. S.; Allison, K.; Alrashed, M.; Alton, A.; Alvarez, R.; Alves, T.; Aman, A.; The DUNE Collaboration
The Pandora Software Development Kit and algorithm libraries perform reconstruction of neutrino interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at the Deep Underground Neutrino Experiment, which will operate four large-scale liquid argon time projection chambers at the far detector site in South Dakota, producing high-resolution images of charged particles emerging from neutrino interactions. While these high-resolution images provide excellent opportunities for physics, the complex topologies require sophisticated pattern recognition capabilities to interpret signals from the detectors as physically meaningful objects that form the inputs to physics analyses. A critical component is the identification of the neutrino interaction vertex. Subsequent reconstruction algorithms use this location to identify the individual primary particles and ensure they each result in a separate reconstructed particle. A new vertex-finding procedure described in this article integrates a U-ResNet neural network performing hit-level classification into the multi-algorithm approach used by Pandora to identify the neutrino interaction vertex. The machine learning solution is seamlessly integrated into a chain of pattern-recognition algorithms. The technique substantially outperforms the previous BDT-based solution, with a more than 20% increase in the efficiency of sub-1 cm vertex reconstruction across all neutrino flavours.
</description>
<pubDate>Wed, 25 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162392</guid>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>A parallel algorithm for fast reconstruction of primary vertices on heterogeneous architectures</title>
<link>https://hdl.handle.net/1721.1/162391</link>
<description>A parallel algorithm for fast reconstruction of primary vertices on heterogeneous architectures
Dziurda, Agnieszka; Giza, Maciej; Gligorov, Vladimir V.; Hulsbergen, Wouter; Kutsenko, Bogdan; Mariani, Saverio; Nolte, Niklas; Reiss, Florian; Spradlin, Patrick; vom Bruch, Dorothea; Wojton, Tomasz
The physics programme of the LHCb experiment at the Large Hadron Collider requires an efficient and precise reconstruction of the particle collision vertices. The LHCb Upgrade detector relies on a fully software-based trigger with an online reconstruction rate of 30 MHz, necessitating fast vertex finding algorithms. This paper describes a new approach to vertex reconstruction developed for this purpose. The algorithm is based on cluster finding within a histogram of the particle trajectory projections along the beamline and on an adaptive vertex fit. Its implementations and optimisations on x86 and GPU architectures and its performance on simulated samples are also discussed.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162391</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of ψ(2S) and χc1(3872) production within fully reconstructed jets</title>
<link>https://hdl.handle.net/1721.1/162390</link>
<description>Measurements of ψ(2S) and χc1(3872) production within fully reconstructed jets
LHCb Collaboration
This paper presents the first measurement of ψ(2S) and χc1(3872) meson production within fully reconstructed jets. Each quarkonium state (tag) is reconstructed via its decay to the J/ψ(→μ+μ−)π+π− final state in the forward region using proton-proton collision data collected by the LHCb experiment at a center-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 1.64 fb−1. The fragmentation function, presented as the ratio of the quarkonium-tag transverse momentum to the full jet transverse momentum (pT(tag)/pT(jet)), is measured differentially in pT(jet) and pT(tag) bins. The distributions are separated into promptly produced quarkonia from proton-proton collisions and quarkonia produced from displaced b-hadron decays. While the displaced quarkonia fragmentation functions are in general well described by parton-shower predictions, the prompt quarkonium distributions differ significantly from fixed-order non-relativistic QCD (NRQCD) predictions followed by a QCD parton shower.
</description>
<pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162390</guid>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</item>
<item>
<title>Reweighting simulated events using machine-learning techniques in the CMS experiment</title>
<link>https://hdl.handle.net/1721.1/162389</link>
<description>Reweighting simulated events using machine-learning techniques in the CMS experiment
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; Waltenberger, W.
Data analyses in particle physics rely on an accurate simulation of particle collisions and a detailed simulation of detector effects to extract physics knowledge from the recorded data. Event generators together with a geant-based simulation of the detectors are used to produce large samples of simulated events for analysis by the LHC experiments. These simulations come at a high computational cost, where the detector simulation and reconstruction algorithms have the largest CPU demands. This article describes how machine-learning (ML) techniques are used to reweight simulated samples obtained with a given set of parameters to samples with different parameters or samples obtained from entirely different simulation programs. The ML reweighting method avoids the need for simulating the detector response multiple times by incorporating the relevant information in a single sample through event weights. Results are presented for reweighting to model variations and higher-order calculations in simulated top quark pair production at the LHC. This ML-based reweighting is an important element of the future computing model of the CMS experiment and will facilitate precision measurements at the High-Luminosity LHC.
</description>
<pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162389</guid>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Search for dark matter from the center of the Earth with 10 years of IceCube data</title>
<link>https://hdl.handle.net/1721.1/162388</link>
<description>Search for dark matter from the center of the Earth with 10 years of IceCube data
The IceCube Collaboration
The nature of dark matter remains unresolved in fundamental physics. Weakly Interacting Massive Particles (WIMPs), which could explain the nature of dark matter, can be captured by celestial bodies like the Sun or Earth, leading to enhanced self-annihilation into Standard Model particles, including neutrinos detectable by neutrino telescopes such as the IceCube Neutrino Observatory. This article presents a search for muon neutrinos from the center of the Earth performed with 10 years of IceCube data using a track-like event selection. We considered a number of WIMP annihilation channels (χχ → τ+τ−/W+W−/bb̄) and masses ranging from 10 GeV to 10 TeV. No significant excess over background due to a dark matter signal was found; the most significant result corresponds to the annihilation channel χχ → bb̄ for the mass mχ = 250 GeV, with a post-trial significance of 1.06σ. Our results are competitive with previous such searches and direct detection experiments. Our upper limits on spin-independent WIMP scattering are world-leading among neutrino telescopes for WIMP masses mχ &gt; 100 GeV.
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162388</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Atypical Pressure Dependent Structural Phonon and Thermodynamic Characteristics of Zinc Blende BeO</title>
<link>https://hdl.handle.net/1721.1/162387</link>
<description>Atypical Pressure Dependent Structural Phonon and Thermodynamic Characteristics of Zinc Blende BeO
Talwar, Devki N.; Becla, Piotr
Under normal conditions, zinc blende beryllium oxide (zb BeO) exists in a metastable crystalline phase that is less stable than its wurtzite counterpart. Ultrathin zb BeO epifilms have recently attracted significant interest for creating a wide range of advanced high-resolution, high-frequency, flexible, transparent, nano-electronic and nanophotonic modules. BeO-based ultraviolet photodetectors and biosensors play important roles in the safe and efficient operation of nuclear reactors. In thermal management, BeO epifilms have also been used in many high-tech devices, including medical equipment. Phonon characteristics of zb BeO at ambient and high pressure (P ≠ 0 GPa) are required for developing electronics that demand enhanced heat dissipation, improving heat-sink performance to lower the operating temperature. Here, we report methodical simulations to comprehend the P-dependent structural, phonon and thermodynamic properties by using a realistic rigid-ion model (RIM). Unlike in zb ZnO, the study of the Grüneisen parameter γ(T) and thermal expansion coefficient α(T) in zb BeO has revealed atypical behavior. These peculiar trends are attributed to the combined effect of the short bond length and the strong localization of electron charge close to the small-core Be atom in BeO. Results of the RIM calculations are compared and contrasted with the limited experimental and first-principles data.
</description>
<pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162387</guid>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Search for pair production of heavy particles decaying to a top quark and a gluon in the lepton+jets final state in proton–proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162386</link>
<description>Search for pair production of heavy particles decaying to a top quark and a gluon in the lepton+jets final state in proton–proton collisions at √s = 13 TeV
CMS Collaboration
A search is presented for the pair production of new heavy resonances, each decaying into a top quark (t) or antiquark and a gluon (g). The analysis uses data recorded with the CMS detector from proton–proton collisions at a center-of-mass energy of 13 TeV at the LHC, corresponding to an integrated luminosity of 138 fb−1. Events with one muon or electron, multiple jets, and missing transverse momentum are selected. After using a deep neural network to enrich the data sample with signal-like events, distributions in the scalar sum of the transverse momenta of all reconstructed objects are analyzed in the search for a signal. No significant deviations from the standard model prediction are found. Upper limits at 95% confidence level are set on the product of cross section and branching fraction squared for the pair production of excited top quarks in the t* → tg decay channel. The upper limits range from 120 to 0.8 fb for a t* with spin-1/2 and from 15 to 1.0 fb for a t* with spin-3/2. These correspond to mass exclusion limits up to 1050 and 1700 GeV for spin-1/2 and spin-3/2 t* particles, respectively. These are the most stringent limits to date on the existence of t* → tg resonances.
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162386</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Faradaically Modulated Redox Active Electrodes for Electrochemically Mediated Selective Adsorption Processes</title>
<link>https://hdl.handle.net/1721.1/162385</link>
<description>Theory of Faradaically Modulated Redox Active Electrodes for Electrochemically Mediated Selective Adsorption Processes
He, Fan; Bazant, Martin Z; Hatton, T Alan
Electrochemically mediated selective adsorption is an emerging electrosorption technique that utilizes Faradaically enhanced redox active electrodes, which can adsorb ions not only electrostatically but also electrochemically. The superb selectivity (&gt;100) of this technique enables selective removal of toxic or high-value target ions with low energy consumption. Here, we develop a general theoretical framework to describe the competitive electrosorption phenomena involving multiple ions and surface-bound redox species. The model couples diffusion, convection, and electromigration with competitive surface adsorption reaction kinetics, consistently derived from non-equilibrium thermodynamics. To optimize the selective removal of the target ions, design criteria were derived analytically from physically relevant dimensionless groups and time scales, where the propagation of the target anion’s concentration front is the limiting step. Detailed computational studies are reported for three case studies that cover a wide range of inlet concentration ratios between the competing ions. In all three cases, the target anions in the electrosorption cell form a self-sharpening reaction-diffusion wave front. Based on the model, a three-step stop-flow operation scheme with a pure stripping solution of target anions is proposed that optimizes the ion adsorption performance and increases the purity of the regeneration stream to almost 100%, which is beneficial for downstream processing.
</description>
<pubDate>Tue, 18 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162385</guid>
<dc:date>2021-05-18T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical Carbon Dioxide Capture and Release with a Redox-Active Amine</title>
<link>https://hdl.handle.net/1721.1/162384</link>
<description>Electrochemical Carbon Dioxide Capture and Release with a Redox-Active Amine
Seo, Hyowon; Rahimi, Mohammad; Hatton, T Alan
Anthropogenic carbon dioxide (CO2) emission from the combustion of fossil fuels is a major contributor to global climate change and ocean acidification. The implementation of carbon capture and storage technologies has been proposed to mitigate the buildup of this greenhouse gas in the atmosphere. Among these technologies, direct air capture is regarded as a plausible CO2 removal tool whereby net negative emissions can be achieved. However, the separation of CO2 from air is particularly challenging due to the ultradilute concentration of CO2 in the presence of high concentrations of dioxygen and water. Here, we report a robust electrochemical redox-active amine system demonstrating a high electron utilization (i.e., mole of CO2 per mole of electrons) of up to 1.25 with the capture of two CO2 molecules per amine in an aqueous solution, with a work of 101 kJe per mole of CO2. The capture of CO2 directly from ambient air as the feed gas presented an electron utilization of 0.78.
</description>
<pubDate>Wed, 12 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162384</guid>
<dc:date>2022-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Search for the Bc+ → χc1(3872)π+ decay</title>
<link>https://hdl.handle.net/1721.1/162383</link>
<description>Search for the Bc+ → χc1(3872)π+ decay
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A search for the decay Bc+ → χc1(3872)π+ is reported using proton-proton collision data collected with the LHCb detector between 2011 and 2018 at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. No significant signal is observed. Using the decay Bc+ → ψ(2S)π+ as a normalisation channel, an upper limit on the ratio of branching fractions R = [B(Bc+ → χc1(3872)π+)/B(Bc+ → ψ(2S)π+)] × [B(χc1(3872) → J/ψπ+π−)/B(ψ(2S) → J/ψπ+π−)] &lt; 0.05 (0.06) is set at the 90 (95)% confidence level.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162383</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Image of the Forest: Cognitive and Affective Responses to Spectral Frequencies in Virtual Nature</title>
<link>https://hdl.handle.net/1721.1/162382</link>
<description>Image of the Forest: Cognitive and Affective Responses to Spectral Frequencies in Virtual Nature
Litwin, Sonia; Vujic, Angela
At present, the average user spends approximately seven hours per day consuming various types of digital media that aim to approximate or enhance real-world experiences. Image of the Forest is an immersive experience that provides a thorough account of the questions central to research on human interaction with artifacts that mimic aspects of nature within computationally generated environments. With the use of affective brain-computer interfaces (aBCIs), we aim to critically reflect on the ability of machine models of reality to provide real world experiences. We also address the research gap in understanding how spectral frequency manipulations can influence cognitive and affective responses.
C&amp;C ’25, Virtual, United Kingdom
</description>
<pubDate>Sun, 22 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162382</guid>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</item>
<item>
<title>Dirac traces and the Tutte polynomial</title>
<link>https://hdl.handle.net/1721.1/162381</link>
<description>Dirac traces and the Tutte polynomial
Lin, Joshua
Perturbative calculations involving fermion loops in quantum field theories require tracing over Dirac matrices. A simple way to regulate the divergences that generically appear in these calculations is dimensional regularisation, which has the consequence of replacing 4-dimensional Dirac matrices with d-dimensional counterparts for arbitrary complex values of d. In this work, a connection between traces of d-dimensional Dirac matrices and computations of the Tutte polynomial of associated graphs is proven. The time complexity of computing Dirac traces is analysed by this connection, and improvements to algorithms for computing Dirac traces are proposed.
</description>
<pubDate>Wed, 28 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162381</guid>
<dc:date>2025-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Effective field theories of dissipative fluids with one-form symmetries</title>
<link>https://hdl.handle.net/1721.1/162379</link>
<description>Effective field theories of dissipative fluids with one-form symmetries
Vardhan, Shreya; Grozdanov, Sašo; Leutheusser, Samuel; Liu, Hong
A system with a one-form global symmetry at finite temperature can be viewed as a dissipative fluid of string-like objects. In this work, we classify and construct the most general effective field theories for hydrodynamics of such string fluids, in a probe limit where the one-form charge density is decoupled from the energy-momentum tensor. We show that at leading order in the derivative expansion, there are two distinct types of diffusive transport in a string fluid depending on the discrete spacetime symmetries present in it. One particular application of interest is magnetohydrodynamics (MHD), where the effective field theories describe the diffusion of magnetic field lines. Due to the distinction between the effective field theories for different discrete symmetries, we show that the MHD of a fluid with charge conjugation symmetry is qualitatively different from that of a neutron star, which we previously discussed in [1]. The explicit effective actions that we write down can be used to obtain the dispersion relations ω(k) up to cubic order in momenta for each of the different discrete symmetry choices. As another application of this formalism, we show that when the one-form symmetry is spontaneously broken, the effective action reduces to the Maxwell theory. This confirms the interpretation of the photon as a Goldstone boson arising from the spontaneous breaking of a one-form symmetry.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162379</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Search for jet quenching with dijets from high-multiplicity pPb collisions at √s NN = 8.16 TeV</title>
<link>https://hdl.handle.net/1721.1/162378</link>
<description>Search for jet quenching with dijets from high-multiplicity pPb collisions at √s NN = 8.16 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Damanakis, K.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
The first measurement of the dijet transverse momentum balance xj in proton-lead (pPb) collisions at a nucleon-nucleon center-of-mass energy of √sNN = 8.16 TeV is presented. The xj observable, defined as the ratio of the subleading over leading jet transverse momentum in a dijet pair, is used to search for jet quenching effects. The data, corresponding to an integrated luminosity of 174.6 nb−1, were collected with the CMS detector in 2016. The xj distributions and their average values are studied as functions of the charged-particle multiplicity of the events and for various dijet rapidity selections. The latter enables probing hard scattering of partons carrying distinct nucleon momentum fractions x in the proton- and lead-going directions. The former, aided by the high-multiplicity triggers, allows probing for potential jet quenching effects in high-multiplicity events (with up to 400 charged particles), for which collective phenomena consistent with quark-gluon plasma (QGP) droplet formation were previously observed. The ratios of xj distributions for high- to low-multiplicity events are used to quantify the possible medium effects. These ratios are consistent with simulations of the hard-scattering process that do not include QGP production. These measurements set an upper limit on medium-induced energy loss of the subleading jet of 1.26% of its transverse momentum at the 90% confidence level in high-multiplicity pPb events.
</description>
<pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162378</guid>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</item>
<item>
<title>On the normalization of open-closed string amplitudes</title>
<link>https://hdl.handle.net/1721.1/162377</link>
<description>On the normalization of open-closed string amplitudes
Sen, Ashoke; Zwiebach, Barton
We use the factorization constraints of open-closed string field theory to determine the signs and normalizations of general string amplitudes with both open and closed string external states. The normalization of all amplitudes is controlled by the genus, the number of boundaries, the number of open and closed string insertions, the string coupling and the D-brane tension. The challenge with signs arises because the relevant moduli spaces are not complex manifolds and have no obvious orientation. We deal with this by fixing a specific convention for the sign of the integration measure over the moduli space and adopting a consistent prescription for the ordering of operators and ghost insertions inside correlators.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162377</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of the decay Bs0 → K0pp̄ and measurement of the B(s)0 → K0pp̄ branching fractions</title>
<link>https://hdl.handle.net/1721.1/162376</link>
<description>Observation of the decay Bs0 → K0pp̄ and measurement of the B(s)0 → K0pp̄ branching fractions
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A study of the charmless baryonic decays B(s)0 → K0pp̄ is presented, where B(s)0 denotes either a B0 or a Bs0 meson. The analysis is based on proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The decay Bs0 → K0pp̄ is observed for the first time, with a measured branching fraction of (9.14 ± 1.69 ± 0.90 ± 0.33 ± 0.20) × 10−7 and a significance of 5.6σ. The uncertainties respectively account for statistical and systematic contributions, the precision of the branching fraction of the normalisation channel B0 → K0π+π−, and the fragmentation fraction ratio fs/fd. The branching fraction determined for B0 → K0pp̄ is (2.82 ± 0.08 ± 0.12 ± 0.10) × 10−6, which is the most precise measurement to date.
</description>
<pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162376</guid>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</item>
<item>
<title>FairRAG: A Privacy-Preserving Framework for Fair Financial Decision-Making</title>
<link>https://hdl.handle.net/1721.1/162375</link>
<description>FairRAG: A Privacy-Preserving Framework for Fair Financial Decision-Making
Nagpal, Rashmi; Usua, Unyimeabasi; Palacios, Rafael; Gupta, Amar
Customer churn prediction has become crucial for businesses, yet it poses significant challenges regarding privacy preservation and prediction accuracy. In this paper, we address two fundamental questions: (1) How can customer churn be effectively predicted while ensuring robust privacy protection of sensitive data? (2) How can large language models enhance churn prediction accuracy while maintaining data privacy? To address these questions, we propose FairRAG, a robust architecture that combines differential privacy, retrieval-augmented generation, and LLMs. Our approach leverages OPT-125M as the core language model along with a sentence transformer for semantic similarity matching while incorporating differential privacy mechanisms to generate synthetic training data. We evaluate FairRAG on two diverse datasets: Bank Churn and Telco Churn. The results demonstrate significant improvements over both traditional machine learning approaches and standalone LLMs, achieving accuracy improvements of up to 11% on the Bank Churn dataset and 12% on the Telco Churn dataset. These improvements were maintained when using differentially private synthetic data, thus indicating robust privacy and accuracy trade-offs.
</description>
<pubDate>Fri, 25 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162375</guid>
<dc:date>2025-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>We Do Hard Things: Embracing Dissonance and Finding Harmony in Academic Libraries</title>
<link>https://hdl.handle.net/1721.1/162374</link>
<description>We Do Hard Things: Embracing Dissonance and Finding Harmony in Academic Libraries
Sardis, Heather; Stalberg, Erin S.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162374</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a Lentiviral Vector for High-Yield Production of Synthetic and Recombinant GCase for Gaucher Disease Therapy</title>
<link>https://hdl.handle.net/1721.1/162371</link>
<description>Development of a Lentiviral Vector for High-Yield Production of Synthetic and Recombinant GCase for Gaucher Disease Therapy
Coelho, Ana Carolina; Wiezel, Claudia Emília Vieira; de Campos, Alline Cristina; Figueiredo, Lílian Louise Souza; Suardi, Gabriela Aparecida Marcondes; de Paula Bernardes, Juliana; da Cunha Tirapelli, Daniela Pretti; Faça, Vitor Marcel; Abraham, Kuruvilla Joseph; Carlotti-Júnior, Carlos Gilberto; Siciliano, Velia; Weiss, Ron; Gerson, Stanton; Fontes, Aparecida Maria
Gaucher disease (GD) is an autosomal recessive disorder caused by the deficient activity of the lysosomal enzyme glucocerebrosidase (GCase). Although enzyme replacement therapy (ERT) remains the standard of care for non-neuropathic GD patients, its high cost significantly limits accessibility. To enhance production efficiency, we developed a lentiviral system encoding a codon-optimized GCase gene driven by the human elongation factor 1a (hEF1&amp;alpha;) promoter for stable production in human cell lines. A functional lentiviral vector, LV_EF1&amp;alpha;_GBA_Opt, was generated at a titer of 7.88 &amp;times; 10&lt;sup&gt;8&lt;/sup&gt; LV particles/mL as determined by qPCR. Six transduction cycles were performed at a multiplicity of infection of 30&amp;ndash;50. The transduced heterogeneous human cell population showed GCase-specific activity of 307.5 &amp;plusmn; 53.49 nmol/mg protein/h, which represents a 3.21-fold increase compared to wild-type 293FT cells (95.58 &amp;plusmn; 16.5 nmol/mg protein/h). Following single-cell cloning, two clones showed specific activity of 763.8 &amp;plusmn; 135.1 and 752.0 &amp;plusmn; 152.1 nmol/mg/h (clones 15 and 16, respectively). These results show that codon optimization, a lentiviral delivery system, and clonal selection together enable the establishment of stable human cell lines capable of producing high levels of biologically active, synthetic recombinant GCase in vitro. Further studies are warranted for the functional validation in GD patient-derived fibroblasts and animal models.
</description>
<pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162371</guid>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Angular analysis of the decay B0s → ϕe+e−</title>
<link>https://hdl.handle.net/1721.1/162370</link>
<description>Angular analysis of the decay B0s → ϕe+e−
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
An angular analysis of the decay B0s → ϕe+e− is presented, using proton-proton collision data collected with the LHCb detector between 2011 and 2018 at centre-of-mass energies of 7, 8 and 13 TeV. The combined dataset corresponds to an integrated luminosity of 9 fb−1. Observables are determined by fitting time-integrated projections of the angular distribution in three bins of dielectron mass squared, q2, corresponding to [0.1, 1.1], [1.1, 6.0] and [15.0, 19.0] GeV2/c4. The results are compatible with predictions based on the Standard Model of particle physics.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162370</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting and Mitigating the Clever Hans Effect in Medical Imaging: A Scoping Review</title>
<link>https://hdl.handle.net/1721.1/162369</link>
<description>Detecting and Mitigating the Clever Hans Effect in Medical Imaging: A Scoping Review
Vásquez-Venegas, Constanza; Wu, Chenwei; Sundar, Saketh; Prôa, Renata; Beloy, Francis J.; Medina, Jillian R.; McNichol, Megan; Parvataneni, Krishnaveni; Kurtzman, Nicholas; Mirshawka, Felipe; Aguirre-Jerez, Marcela; Ebner, Daniel K.; Celi, Leo A.
The Clever Hans effect occurs when machine learning models rely on spurious correlations instead of clinically relevant features and poses significant challenges to the development of reliable artificial intelligence (AI) systems in medical imaging. This scoping review provides an overview of methods for identifying and addressing the Clever Hans effect in medical imaging AI algorithms. A total of 173 papers published between 2010 and 2024 were reviewed, and 37 articles were selected for detailed analysis, with classification into two categories: detection and mitigation approaches. Detection methods include model-centric, data-centric, and uncertainty and bias-based approaches, while mitigation strategies encompass data manipulation techniques, feature disentanglement and suppression, and domain knowledge-driven approaches. Despite the progress in detecting and mitigating the Clever Hans effect, the majority of current machine learning studies in medical imaging do not report or test for shortcut learning, highlighting the need for more rigorous validation and transparency in AI research. Future research should focus on creating standardized benchmarks, developing automated detection tools, and exploring the integration of detection and mitigation strategies to comprehensively address shortcut learning. Establishing community-driven best practices and leveraging interdisciplinary collaboration will be crucial for ensuring more reliable, generalizable, and equitable AI systems in healthcare.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162369</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Secondary Lund jet plane as a gluon-enriched sample</title>
<link>https://hdl.handle.net/1721.1/162366</link>
<description>Secondary Lund jet plane as a gluon-enriched sample
Baldenegro, Cristian; Soto-Ontoso, Alba; Soyez, Gregory
We propose a new strategy to obtain a high-purity sample of gluon-initiated jets at the LHC. Our approach, inspired by the Lund jet plane picture, is to perform a dijet selection where the two jets are collinear to each other and their momentum fraction share is highly asymmetric, and to measure the primary Lund plane density of emissions of the subleading jet. The subleading jet in this topology is practically equivalent to a secondary Lund jet plane. We demonstrate by means of fixed-order calculations that such a simple setup yields (Born-level) gluon jet fractions of around 90% for the subleading jet for both quark- and gluon-initiated jets. This observation is confirmed using hadron-level Monte Carlo generated events. We also show that the extracted gluon purities are highly resilient to the overall colour structure of the event, to the flavour of the hard-scattering process, and to the parton distribution functions. This strategy is well-suited for constraining the radiation pattern of gluon-initiated jets using a set of fiducial cuts that can readily be tested at the LHC, without relying on taggers or statistical demixing.
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162366</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic identification of disease-causing promoter and untranslated region variants in 8040 undiagnosed individuals with rare disease</title>
<link>https://hdl.handle.net/1721.1/162365</link>
<description>Systematic identification of disease-causing promoter and untranslated region variants in 8040 undiagnosed individuals with rare disease
Martin-Geary, Alexandra C.; Blakes, Alexander J.; Dawes, Ruebena; Findlay, Scott D.; Lord, Jenny; Dong, Shan; Walker, Susan; Talbot-Martin, Jonathan; Wieder, Nechama; D’Souza, Elston N.; Fernandes, Maria; Hilton, Sarah; Lahiri, Nayana; Campbell, Christopher
Background: Both promoters and untranslated regions (UTRs) have critical regulatory roles, yet variants in these regions are largely excluded from clinical genetic testing due to difficulty in interpreting pathogenicity. The extent to which these regions may harbour diagnoses for individuals with rare disease is currently unknown.
Methods: We present a framework for the identification and annotation of potentially deleterious proximal promoter and UTR variants in known dominant disease genes. We use this framework to annotate de novo variants (DNVs) in 8040 undiagnosed individuals in the Genomics England 100,000 genomes project, which were subject to strict region-based filtering, clinical review, and validation studies where possible. In addition, we performed region and variant annotation-based burden testing in 7862 unrelated probands against matched unaffected controls.
Results: We prioritised eleven DNVs and identified an additional variant overlapping one of the eleven. Ten of these twelve variants (82%) are in genes that are a strong match to the individual’s phenotype and six had not previously been identified. Through burden testing, we did not observe a significant enrichment of potentially deleterious promoter and/or UTR variants in individuals with rare disease collectively across any of our region or variant annotations.
Conclusions: Whilst screening promoters and UTRs can uncover additional diagnoses for individuals with rare disease, including these regions in diagnostic pipelines is not likely to dramatically increase diagnostic yield. Nevertheless, we provide a framework to aid identification of these variants.
</description>
<pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162365</guid>
<dc:date>2025-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>QPlacer: Frequency-Aware Component Placement for Superconducting Quantum Computers</title>
<link>https://hdl.handle.net/1721.1/162364</link>
<description>QPlacer: Frequency-Aware Component Placement for Superconducting Quantum Computers
Zhang, Junyao; Wang, Hanrui; Ding, Qi; Gu, Jiaqi; Assouly, Reouven; Oliver, William; Han, Song; Brown, Kenneth; Li, Hai; Chen, Yiran
Quantum computers face a critical limitation in qubit numbers, hindering their progression towards large-scale and fault-tolerant quantum computing. A significant challenge impeding scaling is crosstalk, characterized by unwanted interactions among neighboring components on quantum chips, including qubits, resonators, and substrates. We motivate a general approach to systematically resolving multifaceted crosstalk in a limited substrate area. We propose QPlacer, a frequency-aware electrostatic-based placement framework tailored for superconducting quantum computers, to alleviate crosstalk by isolating these components in spatial and frequency domains alongside compact substrate design.&#13;
QPlacer commences with a frequency assigner that ensures frequency domain isolation for qubits and resonators. It then incorporates a padding strategy and resonator partitioning for layout flexibility. Central to our approach is the conceptualization of quantum components as charged particles, enabling strategic spatial isolation through a ‘frequency repulsive force’ concept. Our results demonstrate that QPlacer carefully crafts the physical component layout to mitigate various crosstalk impacts while maintaining a compact substrate size. On various device topologies and NISQ benchmarks, QPlacer improves fidelity by an average of 37.5× and reduces spatial violations (susceptible to crosstalk) by an average of 12.76×, compared to classical placement engines. Regarding area optimization, compared to manual designs, QPlacer can reduce the required layout area by 2.14× on average.
ISCA ’25, Tokyo, Japan
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162364</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>CLEANN: Lock-Free Augmented Trees for Low-Dimensional k-Nearest Neighbor Search</title>
<link>https://hdl.handle.net/1721.1/162363</link>
<description>CLEANN: Lock-Free Augmented Trees for Low-Dimensional k-Nearest Neighbor Search
Manohar, Magdalen; Wei, Yuanhao; Blelloch, Guy
We develop a linearizable lock-free data structure, the CLEANN-tree (Concurrent Linearizable Efficient Augmented Nearest Neighbor tree), for low-dimensional k-nearest-neighbor searching. The data structure maintains a set of points P in d dimensions under insertion and deletion, while supporting queries that, given a point, return the k nearest points in P. The CLEANN-tree is constructed by modifying a kd-tree, a type of spatial decomposition commonly used for k-nearest-neighbor searching, for the concurrent environment. It is the first such concurrent structure---two previous structures were either not linearizable or only supported k=1. Furthermore, the CLEANN-tree stores an augmented value (more specifically, a bounding box) in each internal node of the kd-tree. These bounding boxes significantly improve query performance by allowing more aggressive pruning. However, correctly and efficiently maintaining these augmented values is challenging in the linearizable lock-free setting because queries can examine large parts of the structure, which might be changing, and an insert or delete can require updating all the augmented values from the leaf to the root.&#13;
We develop new approaches for maintaining concurrent augmented trees which leverage recent work on lock-free locks and snapshotting. Based on these, we implement two variations of the CLEANN-tree and present experimental results for both. Both variations significantly outperform previous concurrent k-nearest-neighbor search structures and get near-linear speedup over optimized sequential structures.
SPAA ’25, July 28–August 1, 2025, Portland, OR, USA
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162363</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Zero Spawn Overhead: Work Stealing Without Deques</title>
<link>https://hdl.handle.net/1721.1/162362</link>
<description>Towards Zero Spawn Overhead: Work Stealing Without Deques
Handleman, Aaron; Singer, Kyle; Schardl, Tao B.; Lee, I-Ting Angelina
In a randomized work-stealing scheduler, parallel speedup depends on the spawn overhead, which workers pay to allow tasks to execute in parallel, and the steal overhead, which thieves pay to start executing new work. The importance of minimizing the spawn overhead in a randomized work-stealing scheduler was first formalized by Frigo et al. as the work-first principle [15], which states that one should minimize spawn overhead even at the expense of a larger steal overhead. Since then, many strategies have been proposed to reduce the spawn overhead, which is dominated by maintaining a per-worker double-ended queue, or deque, to keep track of available parallel work.&#13;
In pursuit of zero spawn overhead, this work considers a strategy that eliminates the use of deques entirely, obviating the need for a worker to perform explicit bookkeeping or set up a deque to enable parallelism. To that end, we propose DLite, a compiler and runtime ABI (Application Binary Interface) that incurs near-zero spawn overhead, empirically measured to be about 6% compared to a regular function invocation. DLite pushes the tradeoffs advocated by the work-first principle to the extreme, which decreases the spawn overhead to almost nil, at the expense of a high steal cost. Specifically, DLite employs a backtracking strategy: When a steal attempt occurs, the victim provides its current stack and base pointers to the thief, and the thief then reconstructs the necessary state to realize the parallel execution.&#13;
We have implemented Cilk-DLite, which extends the OpenCilk platform [33] to implement DLite. When the application has ample parallelism, Cilk-DLite exhibits similar scalability to OpenCilk with much lower spawn overhead. When the application lacks parallelism, the high steal cost in Cilk-DLite can impede scalability due to slower work distribution. We also implemented variants of Cilk-DLite that make different design choices to evaluate the tradeoffs between spawn overhead and steal cost.
SPAA ’25, July 28-August 1, 2025, Portland, OR, USA
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162362</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Nondimensional Analysis of a Hollow Fiber Membrane Contactor for Direct Air Capture</title>
<link>https://hdl.handle.net/1721.1/162361</link>
<description>Nondimensional Analysis of a Hollow Fiber Membrane Contactor for Direct Air Capture
Diederichsen, Kyle M; Hatton, T Alan
Negative emissions technologies, including the direct capture of carbon dioxide from the atmosphere, are increasingly seen as important components in solving the coming climate crisis. While contacting units for solid sorbents have been studied extensively, little work has been directed at the design of large-scale air–liquid contacting units for CO2 capture. Herein, we examine a conceptual large-scale gas–liquid contacting unit using hollow fiber membranes filled with a flowing, reactive sorbent liquid. In the proposed concept, the sorbent liquid is fed to a bank of hollow fibers exposed to a blown stream of air, and a sorbent inside the liquid reacts with CO2 in the air. We employ commonly used modeling techniques to describe the reactive absorption of CO2 in the liquid, though in a generalized nondimensional form. We then demonstrate a means of extending this solution from a few fibers to a full bank of many fibers and discuss the trade-offs involved in achieving high sorbent utilization. The methodology described here produces a highly general solution to the design of a fiber tube bank for air contacting, and we demonstrate the use of this solution to size an example fiber contacting unit. The proposed design is envisioned to enable new conceptual liquid sorbent chemistries in direct air capture, particularly those envisioned for use with electrochemically mediated regeneration.
</description>
<pubDate>Tue, 02 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162361</guid>
<dc:date>2022-08-02T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced hybrid electro-separation/electro-conversion systems for wastewater treatment, reuse and recovery: Compromise between symmetric and asymmetric constraints</title>
<link>https://hdl.handle.net/1721.1/162360</link>
<description>Advanced hybrid electro-separation/electro-conversion systems for wastewater treatment, reuse and recovery: Compromise between symmetric and asymmetric constraints
Mousset, Emmanuel; Hatton, T Alan
Wastewater treatment with reuse and recovery of value-added compounds for valorization is of rising interest, and the combination of electro-separation and electro-conversion processes could be a promising solution to both environmental and resource availability problems. The more recent concomitant development of both electrochemical advanced oxidation processes and materials with new properties makes the older electro-separation technologies regain visibility and interest. The electrofiltration/electrooxidation or electroreduction, electrosorption/electrooxidation, electrocoagulation/electro-Fenton, electroprecipitation/electrooxidation and electrodeposition/electrooxidation combinations are particularly critically reviewed. The conventional flow-by or flow-through parallel-plate and concentric cylinder designs do not suffice to meet the antagonistic requirements of such simultaneous multiple electroprocesses. Innovative designs are needed, and emerging concepts such as reactive electro-mixing are a possibility. Further modeling and scale-up studies based on revised theory are required in the future.
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162360</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Changes of the adsorption parameters under the influence of static magnetic field</title>
<link>https://hdl.handle.net/1721.1/162359</link>
<description>Changes of the adsorption parameters under the influence of static magnetic field
Rajczykowski, Krzysztof; Loska, Krzysztof; Alan Hatton, T
During the study, the influence of an external magnetic field on the adsorption of copper, nickel and cadmium was analysed. For that purpose, two different types of charcoal were used, paramagnetic and ferromagnetic. The results showed clear changes in the effectiveness of the adsorption processes under the influence of the strong magnetic field, but only for the ferromagnetic activated carbon. The modification was shown to stimulate copper removal, while for nickel adsorption systems an inhibiting effect of the same kind of field was demonstrated. For cadmium removal, there were no statistically significant changes in the processes before and after magnetic modification. In addition, kinetic and thermodynamic analyses of the systems were performed, which also demonstrated the significant influence of the static magnetic field, such as an increase in the initial adsorption rate and changes in the entropy and the average Dubinin–Radushkevich energy of the process.
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162359</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the 40Ar(e,e′) elastic scattering cross section with a novel gas-jet target</title>
<link>https://hdl.handle.net/1721.1/162356</link>
<description>Measurement of the 40Ar(e,e′) elastic scattering cross section with a novel gas-jet target
Littich, M.; Doria, L.; Brand, P.; Achenbach, P.; Aulenbacher, S.; Bacca, S.; Bernauer, J. C.; Biroth, M.; Bonaventura, D.; Bosnar, D.; Christmann, M.; Cline, E.; Denig, A.; Distler, M.; Esser, A.; Friščić, I.; Geimer, J.; Gülker, P.; Hoek, M.; Klag, P.
We report on a measurement of elastic electron scattering on argon performed with a novel cryogenic gas-jet target at the Mainz Microtron accelerator MAMI. The luminosity is estimated with the thermodynamical parameters of the target and by comparison to a calculation in distorted-wave Born approximation. The cross section, measured at new momentum transfers of 1.24 fm−1 and 1.55 fm−1, is in agreement with previous experiments performed with a traditional high-pressure gas target, as well as with modern ab-initio calculations employing state-of-the-art nuclear forces from chiral effective field theory. The nearly background-free measurement highlights the optimal properties of the gas-jet target for elements heavier than hydrogen, enabling new applications in hadron and nuclear physics.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162356</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting single inclusive jet production: timelike factorization and reciprocity</title>
<link>https://hdl.handle.net/1721.1/162355</link>
<description>Revisiting single inclusive jet production: timelike factorization and reciprocity
Lee, Kyle; Moult, Ian; Zhang, Xiaoyuan
Factorization theorems for single inclusive jet production play a crucial role in the study of jets and their substructure. In the case of small radius jets, the dynamics of the jet clustering can be factorized from both the hard production dynamics, and the dynamics of the low scale jet substructure measurement, and is described by a matching coefficient that can be computed in perturbative Quantum Chromodynamics (QCD). A proposed factorization formula describing this process has been previously presented in the literature, and is referred to as the semi-inclusive, or fragmenting jets formalism. By performing an explicit two-loop calculation, we show the inconsistency of this factorization formula, in agreement with another recent result in the literature. Building on recent progress in the factorization of single logarithmic observables, and the understanding of reciprocity, we then derive a new all-order factorization theorem for inclusive jet production. The use of a jet algorithm, being only a modification of the infrared structure of the measurement, modifies the structure of convolutions in the factorization theorem, as compared to inclusive fragmentation, but maintains the universality of the inclusive hard function and its associated Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution, which are ultraviolet properties. However, the non-trivial structure of convolutions in the factorization theorem implies that the jet functions exhibit a modified evolution. We perform an explicit two-loop calculation of the jet function in both N = 4 super Yang-Mills (SYM), and for all color channels in QCD, finding exact agreement with the structure derived from our renormalization group equations. In addition, we derive several new results, including an extension of our factorization formula to jet substructure observables, a jet algorithm definition of a generating function for the energy correlators, and new results for exclusive jet functions. 
Our results are a key ingredient for achieving precision jet substructure at colliders.
</description>
<pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162355</guid>
<dc:date>2025-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Data formats in analytical DBMSs: performance trade-offs and future directions</title>
<link>https://hdl.handle.net/1721.1/162354</link>
<description>Data formats in analytical DBMSs: performance trade-offs and future directions
Liu, Chunwei; Pavlenko, Anna; Interlandi, Matteo; Haynes, Brandon
This paper evaluates the suitability of Apache Arrow, Parquet, and ORC as formats for subsumption in an analytical DBMS. We systematically identify and explore the high-level features that are important to support efficient querying in modern OLAP DBMSs and evaluate the ability of each format to support these features. We find that each format has trade-offs that make it more or less suitable for use as a format in a DBMS and identify opportunities to more holistically co-design a unified in-memory and on-disk data representation. Notably, for certain popular machine learning tasks, none of these formats perform optimally, highlighting significant opportunities for advancing format design. Our hope is that this study can be used as a guide for system developers designing and using these formats, as well as provide the community with directions to pursue for improving these common open formats.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162354</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Entropy and spectrum of near-extremal black holes: semiclassical brane solutions to non-perturbative problems</title>
<link>https://hdl.handle.net/1721.1/162353</link>
<description>Entropy and spectrum of near-extremal black holes: semiclassical brane solutions to non-perturbative problems
Hernández-Cuenca, Sergio
The black hole entropy has been observed to generically turn negative at exponentially low temperatures T ~ e−S0 in the extremal Bekenstein-Hawking entropy S0, a seeming pathology often attributed to missing non-perturbative effects. In fact, we show that this negativity must happen for any effective theory of quantum gravity with an ensemble description. To do so, we identify the usual gravitational entropy as an annealed entropy Sa, and prove that this quantity gives S0 at extremality if and only if the ground-state energy is protected by supersymmetry, and diverges negatively otherwise. The actual thermodynamically-behaved quantity is the average or quenched entropy Sq, whose calculation is poorly understood in gravity: it involves replica wormholes in a regime where the topological expansion breaks down. Using matrix integrals we find new instanton saddles that dominate gravitational correlators at T ~ e−S0 and are dual to semiclassical wormholes involving dynamical branes. These brane solutions give the leading contribution to any black hole very near extremality, and a duality with matrix ensembles would not make sense without them. In the non-BPS case, they are required to make Sq non-negative and also enhance the negativity of Sa, both effects consistent with matrix integrals evaluated exactly. Our instanton results are tested against the on-shell action of D3-branes dual to multiply wrapped Wilson loops in N = 4 super-YM, and a precise match is found. Our analysis of low-energy random matrix spectra also explains the origin of spectral gaps in supersymmetric theories, not only when there are BPS states at zero energy, but also for purely non-BPS supermultiplets. In the former, our quantitative prediction for the gap in terms of the degeneracy of BPS states agrees with the R-charge scaling in gapped multiplets of N = 2 super-JT gravity.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162353</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Batch-Dynamic Coreness Decomposition with Worst-Case Guarantees</title>
<link>https://hdl.handle.net/1721.1/162352</link>
<description>Parallel Batch-Dynamic Coreness Decomposition with Worst-Case Guarantees
Ghaffari, Mohsen; Koo, Jaehyun
We present the first parallel batch-dynamic algorithm for approximating coreness decomposition with worst-case update times. Given any batch of edge insertions and deletions, our algorithm processes all these updates in poly(log n) depth, using a worst-case work bound of b · poly(log n), where b denotes the batch size. This means the batch gets processed in Õ(b/p) time, given p processors, which is optimal up to logarithmic factors. Previously, an algorithm with similar guarantees was known from the celebrated work of Liu, Shi, Yu, Dhulipala, and Shun [SPAA'22], but with the caveat that the work bound, and thus the runtime, was only amortized.
SPAA ’25, Portland, OR, USA
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162352</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>PipeRAG: Fast Retrieval-Augmented Generation via Adaptive Pipeline Parallelism</title>
<link>https://hdl.handle.net/1721.1/162351</link>
<description>PipeRAG: Fast Retrieval-Augmented Generation via Adaptive Pipeline Parallelism
Jiang, Wenqi; Zhang, Shuai; Han, Boran; Wang, Jie; Wang, Bernie; Kraska, Tim
Retrieval-augmented generation (RAG) can enhance the generation quality of large language models (LLMs) by incorporating external token databases. However, retrievals from large databases can constitute a substantial portion of the overall generation time, particularly when retrievals are periodically performed to align the retrieved content with the latest states of generation. In this paper, we introduce PipeRAG, a novel algorithm-system co-design approach to reduce generation latency and enhance generation quality. PipeRAG integrates (1) pipeline parallelism to enable concurrent retrieval and generation processes, (2) flexible retrieval intervals to maximize the efficiency of pipeline parallelism, and (3) a performance model to automatically balance retrieval quality and latency based on the generation states and underlying hardware. Our evaluation shows that, by combining the three aforementioned methods, PipeRAG achieves up to 2.6× speedup in end-to-end generation latency while improving generation quality. These promising results showcase the effectiveness of co-designing algorithms with underlying systems, paving the way for the adoption of PipeRAG in future RAG systems.
KDD ’25, August 3–7, 2025, Toronto, ON, Canada
</description>
<pubDate>Sun, 20 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162351</guid>
<dc:date>2025-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Parallel Batch-Dynamic Algorithms for Spanners, and Extensions</title>
<link>https://hdl.handle.net/1721.1/162350</link>
<description>Parallel Batch-Dynamic Algorithms for Spanners, and Extensions
Ghaffari, Mohsen; Koo, Jaehyun
This paper presents the first parallel batch-dynamic algorithms for computing spanners and sparsifiers. Our algorithms process any batch of edge insertions and deletions in an n-node undirected graph, in poly(log n) depth and using amortized work near-linear in the batch size. Our concrete results are as follows:&#13;
• Our base algorithm maintains a spanner with (2k − 1) stretch and Õ(n^(1+1/k)) edges, for any k ≥ 1.&#13;
• Our first extension maintains a sparse spanner with only O(n) edges, and Õ(log n) stretch.&#13;
• Our second extension maintains a t-bundle of spanners (i.e., t spanners, each of which is the spanner of the graph remaining after removing the previous ones) and allows us to maintain cut/spectral sparsifiers with Õ(n) edges.
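For intuition about the object in the first bullet, the classic static greedy construction (Althöfer et al.) keeps an edge only if its endpoints are currently more than 2k − 1 hops apart in the spanner built so far. A sketch for unweighted graphs, as a static baseline only, not the paper's batch-dynamic procedure:

```python
def greedy_spanner(n, edges, k):
    """Static greedy (2k-1)-spanner: scan edges, keeping (u, v) only if
    u and v are more than 2k-1 hops apart in the spanner so far."""
    adj = {v: set() for v in range(n)}

    def dist_at_most(s, t, limit):
        # bounded BFS in the current spanner; returns limit+1 if farther
        if s == t:
            return 0
        seen, frontier, d = {s}, [s], 0
        while frontier and d < limit:
            d += 1
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w == t:
                        return d
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        return limit + 1

    spanner = []
    for u, v in edges:
        if dist_at_most(u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].add(v)
            adj[v].add(u)
            spanner.append((u, v))
    return spanner
```

On a triangle, k = 1 keeps all three edges (stretch 1 demands exact distances), while k = 2 can drop one edge since the two-hop path has stretch 2 ≤ 3.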
SPAA ’25, Portland, OR, USA
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162350</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity Optimization for Travelling Salesman Problem via Deep Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/162349</link>
<description>Diversity Optimization for Travelling Salesman Problem via Deep Reinforcement Learning
Li, Qi; Cao, Zhiguang; Ma, Yining; Wu, Yaoxin; Gong, Yue-Jiao
Existing neural methods for the Travelling Salesman Problem (TSP) mostly aim at finding a single optimal solution. To discover diverse yet high-quality solutions for Multi-Solution TSP (MSTSP), we propose a novel deep reinforcement learning based neural solver, which is primarily featured by an encoder-decoder structured policy. Concretely, on the one hand, a Relativization Filter (RF) is designed to enhance the robustness of the encoder to affine transformations of the instances, so as to potentially improve the quality of the found solutions. On the other hand, a Multi-Attentive Adaptive Active Search (MA3S) is tailored to allow the decoders to strike a balance between the optimality and diversity. Experimental evaluations on benchmark instances demonstrate the superiority of our method over recent neural baselines across different metrics, and its competitive performance against state-of-the-art traditional heuristics with significantly reduced computational time, ranging from 1.3× to 15× faster. Furthermore, we demonstrate that our method can also be applied to the Capacitated Vehicle Routing Problem (CVRP).
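One way to make the optimality/diversity trade-off concrete is to score a solution set by its mean pairwise edge-set Jaccard distance between tours. This metric is our illustrative choice, not necessarily the one used in the paper:

```python
from itertools import combinations

def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a vertex order."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def diversity(tours):
    """Mean pairwise edge-set Jaccard distance: 0 for identical tours,
    approaching 1 for edge-disjoint ones."""
    pairs = list(combinations(tours, 2))
    total = 0.0
    for a, b in pairs:
        ea, eb = tour_edges(a), tour_edges(b)
        total += 1 - len(ea & eb) / len(ea | eb)
    return total / len(pairs)
```

A MSTSP solver is then judged on both this score and the tour lengths of the solutions it returns.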
KDD ’25, Toronto, ON, Canada
</description>
<pubDate>Sun, 20 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162349</guid>
<dc:date>2025-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical direct air capture of CO2 using neutral red as reversible redox-active material</title>
<link>https://hdl.handle.net/1721.1/162348</link>
<description>Electrochemical direct air capture of CO2 using neutral red as reversible redox-active material
Seo, Hyowon; Hatton, T Alan
Direct air capture of carbon dioxide is a viable option for the mitigation of CO2 emissions and their impact on global climate change. Conventional processes for carbon capture from ambient air require 230 to 800 kJ thermal per mole of CO2, which accounts for most of the total cost of capture. Here, we demonstrate electrochemical direct air capture using neutral red as a redox-active material in an aqueous solution enabled by the inclusion of nicotinamide as a hydrotropic solubilizing agent. The electrochemical system demonstrates a high electron utilization of 0.71 in a continuous flow cell with an estimated minimum work of 35 kJe per mole of CO2 from 15% CO2. Further exploration using ambient air (410 ppm CO2 in the presence of 20% oxygen) as a feed gas shows electron utilization of 0.38 in a continuous flow cell to provide an estimated minimum work of 65 kJe per mole of CO2.
</description>
<pubDate>Thu, 19 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162348</guid>
<dc:date>2023-01-19T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemistry-Based CO2 Removal Technologies</title>
<link>https://hdl.handle.net/1721.1/162347</link>
<description>Electrochemistry-Based CO2 Removal Technologies
Biel‐Nielsen, Tessa Lund; Hatton, T Alan; Villadsen, Sebastian NB; Jakobsen, Jan S; Bonde, Jacob L; Spormann, Alfred M; Fosbøl, Philip L
Unprecedented increase in atmospheric CO2 levels calls for efficient, sustainable, and cost-effective technologies for CO2 removal, including both capture and conversion approaches. Current CO2 abatement is largely based on energy-intensive thermal processes with a high degree of inflexibility. In this Perspective, it is argued that future CO2 technologies will follow the general societal trend towards electrified systems. This transition is largely promoted by decreasing electricity prices, continuous expansion of renewable energy infrastructure, and breakthroughs in carbon electrotechnologies, such as electrochemically modulated amine regeneration, redox-active quinones and other species, and microbial electrosynthesis. In addition, new initiatives make electrochemical carbon capture an integrated part of Power-to-X applications, for example, by linking it to H2 production. Selected electrochemical technologies crucial for a future sustainable society are reviewed. However, significant further development of these technologies within the next decade is needed to meet the ambitious climate goals.
</description>
<pubDate>Thu, 02 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162347</guid>
<dc:date>2023-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>ObjecTier: Non-Invasively Boosting Memory Tiering Performance</title>
<link>https://hdl.handle.net/1721.1/162346</link>
<description>ObjecTier: Non-Invasively Boosting Memory Tiering Performance
Pawar, Vinita; Bhardwaj, Ankit; Stutsman, Ryan
Recent research has developed page-based memory-tiering systems that place hot pages in fast tiers and cold pages in slower, more capacious tiers. However, applications place many objects together within pages, and most pages contain some objects that are hot and some that are cold. Our simulations of a key-value workload confirm this; even the hottest pages in the fast tier can contain 50% cold data.&#13;
To improve fast tier utilization, we describe the design of a new framework, ObjecTier, that uses application knowledge to efficiently consolidate hot data and cold data. This allows ObjecTier-enabled applications to boost fast-tier hit rates and improve performance regardless of which memory tiering system they use underneath, even if that system is page-based.&#13;
With simulations, we show that ObjecTier may improve average memory access time (AMAT) by 2× without adding any memory space overhead for our simulated key-value store workload. We conclude by outlining the next steps to make the ObjecTier framework a reality for easy adaptation of applications like key-value stores and other indexed databases.
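The AMAT claim reduces to simple two-tier arithmetic: raising the effective fast-tier hit rate by consolidating hot objects directly lowers the weighted average latency. A back-of-the-envelope sketch with hypothetical latencies (placeholders, not measurements from the paper):

```python
def amat(hit_pct, fast_ns=100, slow_ns=400):
    """Average memory access time for a two-tier memory, with the hit
    rate given as an integer percentage. The latency numbers are
    hypothetical placeholders, not measurements from the paper."""
    return (hit_pct * fast_ns + (100 - hit_pct) * slow_ns) / 100

# If even hot pages are half cold data, the effective fast-tier hit
# rate is depressed; consolidating hot objects raises it:
before = amat(50)   # 250.0 ns
after = amat(95)    # 115.0 ns -> roughly a 2x AMAT improvement
```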
ICPE Companion ’25, Toronto, ON, Canada
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162346</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Lost in Siting: The Hidden Carbon Cost of Inequitable Residential Solar Installations</title>
<link>https://hdl.handle.net/1721.1/162345</link>
<description>Lost in Siting: The Hidden Carbon Cost of Inequitable Residential Solar Installations
Sigrist, Cooper; Lechowicz, Adam; Champ, Jovan; Bashir, Noman; Hajiesmaili, Mohammad
The declining cost of solar photovoltaics (PV) combined with strong federal and state-level incentives have resulted in a high number of residential solar PV installations in the US. However, these installations are concentrated in particular regions, such as California, and demographics, such as high-income Asian neighborhoods. This inequitable distribution creates an illusion that further increasing residential solar installations will become increasingly challenging if it is not already prohibitive. Furthermore, while the inequity in solar installations has received attention, no prior comprehensive work has been done on understanding whether our current trajectory of residential solar adoption is energy- and carbon-efficient.&#13;
In this paper, we reveal the hidden energy and carbon cost of the inequitable distribution of existing installations. Using US-based data on carbon offset potential—the amount of avoided carbon emissions from using rooftop PV instead of electric grid energy—and the number of existing solar installations, we surprisingly observe that locations and demographics with a higher carbon offset potential have fewer existing installations. For instance, neighborhoods with relatively higher black population have 7.4% higher carbon offset potential than average but 36.7% fewer installations; lower-income neighborhoods have 14.7% higher potential and 47% fewer installations; Republican-leaning states have 23.8% higher potential and 60.8% fewer installations. We propose several equity- and carbon-aware solar siting strategies that prioritize developing solar in certain areas based on their characteristics – these strategies may inform, for example, the development of targeted incentives. In evaluating these strategies, we develop SunSight, a toolkit that combines simulation/visualization tools and our relevant datasets, which we are releasing publicly. Our projections show that a multi-objective siting strategy can address two problems at once – namely, it can improve societal outcomes in terms of distributional equity and simultaneously improve the carbon-efficiency (i.e., climate impact) of current installation trends by up to 39.8%.
E-ENERGY ’25, Rotterdam, Netherlands
</description>
<pubDate>Mon, 16 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162345</guid>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Tiered-Indexing: Optimizing Access Methods for Skew</title>
<link>https://hdl.handle.net/1721.1/162344</link>
<description>Tiered-Indexing: Optimizing Access Methods for Skew
Zhou, Xinjing; Hao, Xiangpeng; Yu, Xiangyao; Stonebraker, Michael
Real-world DBMS workloads invariably exhibit skewed access patterns, where a small number of "hot" records are accessed much more frequently than the remaining "cold" records. Page-oriented data structures, such as B+trees, dynamic hash tables, heap files, and LSM-tree, are sub-optimal in terms of memory utilization under skewed access conditions. Hot records might be co-located with cold ones on pages in the data structure. Caching those lukewarm pages in the buffer pool lowers memory utilization due to the mismatch of caching granularity (page) and access granularity (record), leading to sub-optimal performance. Recently, the 2-Tree approach was proposed to improve caching efficiency for B+trees using record-level migration. In this paper, we generalize the 2-Tree approach to Tiered-Indexing that can be applied to common buffer-managed data structures to efficiently handle skew using record migration. Using this architecture, we extend hash tables, heap files, and LSM-trees with I/O-efficient record migration. Moreover, we design a general mechanism to ensure data structure consistency for Tiered-Indexing data structures during record migration using optimistic lock coupling. Compared to traditional 1-Tier and state-of-the-art record-caching designs, we observe significant throughput and memory utilization improvement across B+tree, hash table, heap file, and LSM-tree under skewed workloads.
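The record-migration idea can be sketched as a two-tier key-value store that promotes a record (not its whole page) on access and demotes a victim when the fast tier fills. This is a minimal illustration in the spirit of 2-Tree/Tiered-Indexing; the promotion and eviction policies here are arbitrary choices of ours, not the paper's:

```python
class TieredStore:
    """Sketch of record-level migration between a small fast tier and
    a large slow tier. New records start cold; any slow-tier access
    promotes the record, evicting an arbitrary victim when full."""
    def __init__(self, fast_capacity):
        self.fast, self.slow = {}, {}
        self.cap = fast_capacity
        self.hits = 0
        self.lookups = 0

    def put(self, key, value):
        self.slow[key] = value          # new records start cold

    def get(self, key):
        self.lookups += 1
        if key in self.fast:
            self.hits += 1
            return self.fast[key]
        value = self.slow[key]
        self._promote(key, value)       # record-level migration
        return value

    def _promote(self, key, value):
        if len(self.fast) >= self.cap:  # demote an arbitrary victim
            victim, v = self.fast.popitem()
            self.slow[victim] = v
        self.fast[key] = value
        del self.slow[key]
```

The real systems additionally need I/O-efficient migration and the consistency mechanism (optimistic lock coupling) described in the abstract, which this single-threaded sketch omits.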
</description>
<pubDate>Sat, 24 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162344</guid>
<dc:date>2025-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Adiabatic Hydrodynamization and the emergence of attractors: a unified description of hydrodynamization in kinetic theory</title>
<link>https://hdl.handle.net/1721.1/162340</link>
<description>Adiabatic Hydrodynamization and the emergence of attractors: a unified description of hydrodynamization in kinetic theory
Rajagopal, Krishna; Scheihing-Hitschfeld, Bruno; Steinhorst, Rachel
“Attractor” solutions for the pre-hydrodynamic, far-from-equilibrium, evolution of the matter produced in relativistic heavy ion collisions have emerged as crucial descriptors of the rapid hydrodynamization of quark-gluon plasma (QGP). Adiabatic Hydrodynamization (AH) has been proposed as a framework with which to describe, explain, and predict attractor behavior that draws upon an analogy to the adiabatic approximation in quantum mechanics. In this work, we systematize the description of pre-hydrodynamic attractors in kinetic theory by showing how to use the AH framework to identify these long-lived solutions to which varied initial conditions rapidly evolve, demonstrating the robustness of this framework. In a simplified QCD kinetic theory in the small-angle scattering limit, we use AH to explain both the early- and late-time scaling behavior of a longitudinally expanding gluon gas in a unified framework. In this context, we show that AH provides a unified description of, and intuition for, all the stages of what in QCD would be bottom-up thermalization, starting from a pre-hydrodynamic attractor and ending with hydrodynamization. We additionally discuss the connection between the notions of scaling behavior and adiabaticity and the crucial role of time-dependent coordinate redefinitions in identifying the degrees of freedom of kinetic theories that give rise to attractor solutions. The tools we present open a path to the intuitive explanation of how attractor behavior arises and how the attractor evolves in all stages of the hydrodynamization of QGP in heavy ion collisions.
</description>
<pubDate>Wed, 02 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162340</guid>
<dc:date>2025-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>Search for charge-parity violation in semileptonically tagged D0 → K+π− decays</title>
<link>https://hdl.handle.net/1721.1/162339</link>
<description>Search for charge-parity violation in semileptonically tagged D0 → K+π− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
An analysis of the flavour oscillations of the charmed neutral meson is presented. The ratio of D0 → K+π− and D0 → K−π+ decay rates is measured as a function of the decay time of the D0 meson and compared with the charge-conjugated system to search for charge-parity violation. The meson flavour at production is double-tagged by the charges of the muon and pion in the preceding B̄ → D∗(2010)+μ−X and D∗(2010)+ → D0π+ decays, respectively. These decays are selected from proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 5.4 fb−1. The flavour oscillation parameters, relating to the differences in mass and width of the mass eigenstates, are found to be y′ = (5.8 ± 1.6) × 10−3 and (x′)2 = (0.0 ± 1.2) × 10−4. No evidence for charge-parity violation is seen either in the flavour oscillations or in the decay, where the direct charge-parity asymmetry is measured to be AD = (2.3 ± 1.7) %.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162339</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of CP asymmetry in Bs0 → Ds∓K± decays</title>
<link>https://hdl.handle.net/1721.1/162338</link>
<description>Measurement of CP asymmetry in Bs0 → Ds∓K± decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A measurement of the CP-violating parameters in Bs0 → Ds∓K± decays is reported, based on the analysis of proton-proton collision data collected by the LHCb experiment corresponding to an integrated luminosity of 6 fb−1 at a centre-of-mass energy of 13 TeV. The measured parameters are obtained with a decay-time dependent analysis yielding Cf = 0.791 ± 0.061 ± 0.022, AfΔΓ = −0.051 ± 0.134 ± 0.058, Af̄ΔΓ = −0.303 ± 0.125 ± 0.055, Sf = −0.571 ± 0.084 ± 0.023 and Sf̄ = −0.503 ± 0.084 ± 0.025, where the first uncertainty is statistical and the second systematic. This corresponds to CP violation in the interference between mixing and decay of about 8.6 σ. Together with the value of the Bs0 mixing phase −2βs, these parameters are used to obtain a measurement of the CKM angle γ equal to (74 ± 12)° modulo 180°, where the uncertainty contains both statistical and systematic contributions. This result is combined with the previous LHCb measurement in this channel using 3 fb−1, resulting in a determination of γ = (81 +12/−11)°.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162338</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>The complexity of learning (pseudo)random dynamics of black holes and other chaotic systems</title>
<link>https://hdl.handle.net/1721.1/162337</link>
<description>The complexity of learning (pseudo)random dynamics of black holes and other chaotic systems
Yang, Lisa; Engelhardt, Netta
It has been recently proposed that the naive semiclassical prediction of non-unitary black hole evaporation can be understood in the fundamental description of the black hole as a consequence of ignorance of high-complexity information. Validity of this conjecture implies that any algorithm which is polynomially bounded in computational complexity cannot accurately reconstruct the black hole dynamics. In this work, we prove that such bounded quantum algorithms cannot accurately predict (pseudo)random unitary dynamics, even if they are given access to an arbitrary set of polynomially complex observables under this time evolution; this shows that “learning” a (pseudo)random unitary is computationally hard. We use the common simplification of modeling black holes and more generally chaotic systems via (pseudo)random dynamics. The quantum algorithms that we consider are completely general, and their attempted guess for the time evolution of black holes is likewise unconstrained: it need not be a linear operator, and may be as general as an arbitrary (e.g. decohering) quantum channel.
</description>
<pubDate>Thu, 20 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162337</guid>
<dc:date>2025-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of ϕ(1020) meson production in fixed-target pNe collisions at √sNN = 68.5 GeV</title>
<link>https://hdl.handle.net/1721.1/162336</link>
<description>Measurement of ϕ(1020) meson production in fixed-target pNe collisions at √sNN = 68.5 GeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The first measurement of ϕ(1020) meson production in fixed-target pNe collisions at √sNN = 68.5 GeV is presented. The ϕ(1020) mesons are reconstructed in their K+K− decay in a data sample consisting of proton collisions on neon nuclei at rest, corresponding to an integrated luminosity of 21.7 ± 1.4 nb−1, collected by the LHCb detector at CERN. The ϕ(1020) production cross-section in the centre-of-mass rapidity range of −1.8 &lt; y* &lt; 0 and transverse momentum range of 800 &lt; pT &lt; 6500 MeV/c is found to be σ = 182.7 ± 2.7 (stat.) ± 14.1 (syst.) μb/nucleon. A double-differential measurement of the cross-section is also provided in four regions of rapidity and six regions of transverse momentum of the ϕ(1020) meson and compared with the predictions from Pythia and EPOS4, which are found to underestimate the experimental values.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162336</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Exotic phases in finite-density ℤ3 theories</title>
<link>https://hdl.handle.net/1721.1/162285</link>
<description>Exotic phases in finite-density ℤ3 theories
Ogilvie, Michael C.; Schindler, Moses A.; Schindler, Stella T.
Lattice ℤ3 theories with complex actions share many key features with finite- density QCD including a sign problem and CK symmetry. Complex ℤ3 spin and gauge models exhibit a generalized Kramers-Wannier duality mapping them onto chiral ℤ3 spin and gauge models, which are simulatable with standard lattice methods in large regions of parameter space. The Migdal-Kadanoff real-space renormalization group (RG) preserves this duality, and we use it to compute the approximate phase diagram of both spin and gauge ℤ3 models in dimensions one through four. Chiral ℤ3 spin models are known to exhibit a Devil’s Flower phase structure, with inhomogeneous phases that can be thought of as ℤ3 analogues of chiral spirals. Out of the large class of models we study, we find that only chiral spin models and their duals have a Devil’s Flower structure with an infinite set of inhomogeneous phases, a result we attribute to Elitzur’s theorem. We also find that different forms of the Migdal-Kadanoff RG produce different numbers of phases, a violation of the expectation for universal behavior from a real-space RG. We discuss extensions of our work to ℤN models, SU(N) models and nonzero temperature.
</description>
<pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162285</guid>
<dc:date>2025-03-12T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on standard model effective field theory for a Higgs boson produced in association with W or Z bosons in the H → bb̄ decay channel in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162284</link>
<description>Constraints on standard model effective field theory for a Higgs boson produced in association with W or Z bosons in the H → bb̄ decay channel in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
A standard model effective field theory (SMEFT) analysis with dimension-six operators probing nonresonant new physics effects is performed in the Higgs-strahlung process, where the Higgs boson is produced in association with a W or Z boson, in proton-proton collisions at a center-of-mass energy of 13 TeV. The final states in which the W or Z boson decays leptonically and the Higgs boson decays to a pair of bottom quarks are considered. The analyzed data were collected by the CMS experiment between 2016 and 2018 and correspond to an integrated luminosity of 138 fb−1. An approach designed to simultaneously optimize the sensitivity to Wilson coefficients of multiple SMEFT operators is employed. Likelihood scans as functions of the Wilson coefficients that carry SMEFT sensitivity in this final state are performed for different expansions in SMEFT. The results are consistent with the predictions of the standard model.
</description>
<pubDate>Fri, 14 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162284</guid>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Light dark matter search with nitrogen-vacancy centers in diamonds</title>
<link>https://hdl.handle.net/1721.1/162283</link>
<description>Light dark matter search with nitrogen-vacancy centers in diamonds
Chigusa, So; Hazumi, Masashi; Herbschleb, Ernst D.; Mizuochi, Norikazu; Nakayama, Kazunori
We propose an approach to directly search for light dark matter, such as the axion or the dark photon, by using magnetometry with nitrogen-vacancy centers in diamonds. If the dark matter couples to the electron spin, it affects the evolution of the Bloch vectors consisting of the spin triplet states, which may be detected through several magnetometry techniques. We give several concrete examples with the use of dc and ac magnetometry and estimate the sensitivity on dark matter couplings.
</description>
<pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162283</guid>
<dc:date>2025-03-12T00:00:00Z</dc:date>
</item>
<item>
<title>Concavity for elliptic and parabolic equations in locally symmetric spaces with nonnegative curvature</title>
<link>https://hdl.handle.net/1721.1/162280</link>
<description>Concavity for elliptic and parabolic equations in locally symmetric spaces with nonnegative curvature
Aryan, Shrey; Law, Michael B.
We establish a concavity principle for solutions to elliptic and parabolic equations on locally symmetric spaces with nonnegative sectional curvature, extending the results of Langford and Scheuer (Commun Partial Differ Equ 46(6):1005–1016, 2021). To the best of our knowledge, this is the first general concavity principle established on spaces with non-constant sectional curvature.
</description>
<pubDate>Thu, 10 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162280</guid>
<dc:date>2025-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Turán Problems for Expanded Hypergraphs</title>
<link>https://hdl.handle.net/1721.1/162259</link>
<description>Turán Problems for Expanded Hypergraphs
Keevash, Peter; Lifshitz, Noam; Long, Eoin; Minzer, Dor
We obtain new results on the Turán number of any bounded degree uniform hypergraph obtained as the expansion of a hypergraph of bounded uniformity. These are asymptotically sharp over an essentially optimal regime for both the uniformity and the number of edges and solve a number of open problems in Extremal Combinatorics. Firstly, we give general conditions under which the crosscut parameter asymptotically determines the Turán number, thus answering a question of Mubayi and Verstraëte. Secondly, we refine our asymptotic results to obtain several exact results, including proofs of the Huang–Loh–Sudakov conjecture on cross matchings and the Füredi–Jiang–Seiver conjecture on path expansions. We have introduced two major new tools for the proofs of these results. The first of these, Global Hypercontractivity, is used as a ‘black box’ (we present it in a separate paper with several other applications). The second tool, presented in this paper, is a far-reaching extension of the Junta Method, which we develop from a powerful and general technique for finding matchings in hypergraphs under certain pseudorandomness conditions.
</description>
<pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162259</guid>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of the Higgs boson production cross section in the four-lepton final state in proton-proton collisions at $$\sqrt{s}$$ = 13.6 TeV</title>
<link>https://hdl.handle.net/1721.1/162258</link>
<description>Measurements of the Higgs boson production cross section in the four-lepton final state in proton-proton collisions at $$\sqrt{s}$$ = 13.6 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
The measurements of the Higgs boson (H) production cross sections performed by the CMS Collaboration in the four-lepton (4ℓ, ℓ = e, µ) final state at a center-of-mass energy √s = 13.6 TeV are presented. These measurements are based on data collected with the CMS detector at the CERN LHC in 2022, corresponding to an integrated luminosity of 34.7 fb−1. Cross sections are measured in a fiducial region closely matching the experimental acceptance, both inclusively and differentially, as a function of the transverse momentum and the absolute value of the rapidity of the four-lepton system. The H → ZZ → 4ℓ inclusive fiducial cross section is measured to be 2.89 +0.53/−0.49 (stat) +0.29/−0.21 (syst) fb, in agreement with the standard model expectation of 3.09 +0.27/−0.24 fb.
</description>
<pubDate>Thu, 08 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162258</guid>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Ruzsa’s Problem on Bi-Sidon Sets</title>
<link>https://hdl.handle.net/1721.1/162257</link>
<description>Ruzsa’s Problem on Bi-Sidon Sets
Pach, János; Zakharov, Dmitrii
A subset S of real numbers is called bi-Sidon if it is a Sidon set with respect to both addition and multiplication, i.e., if all pairwise sums and all pairwise products of elements of S are distinct. Imre Ruzsa asked the following question: What is the maximum number f(N) such that every set S of N real numbers contains a bi-Sidon subset of size at least f(N)? He proved that f(N) ⩾ cN^(1/3) for a constant c &gt; 0. In this note, we improve this bound to N^(1/3 + 7/78 + o(1)).
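The defining property is easy to test computationally. A brute-force check plus a naive greedy baseline (our illustration only, with no guarantee near the bound in the note):

```python
from itertools import combinations

def is_bi_sidon(s):
    """True iff all pairwise sums AND all pairwise products of the
    elements of s are distinct (Sidon for both + and x)."""
    pairs = list(combinations(s, 2))
    sums = [a + b for a, b in pairs]
    prods = [a * b for a, b in pairs]
    return len(set(sums)) == len(sums) and len(set(prods)) == len(prods)

def greedy_bi_sidon_subset(s):
    """Greedily grow a bi-Sidon subset in sorted order; a trivial
    baseline, not the construction behind the improved bound."""
    subset = []
    for x in sorted(s):
        if is_bi_sidon(subset + [x]):
            subset.append(x)
    return subset
```

For example, {1, 2, 4, 8} is not bi-Sidon even though it is additively Sidon, because 1 · 8 = 2 · 4.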
</description>
<pubDate>Tue, 08 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162257</guid>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Near-Optimal Leader Election in Population Protocols on Graphs</title>
<link>https://hdl.handle.net/1721.1/162256</link>
<description>Near-Optimal Leader Election in Population Protocols on Graphs
Alistarh, Dan; Rybicki, Joel; Voitovych, Sasha
In the stochastic population protocol model, we are given a connected graph with n nodes, and in every time step, a scheduler samples an edge of the graph uniformly at random and the nodes connected by this edge interact. A fundamental task in this model is stable leader election, in which all nodes start in an identical state and the aim is to reach a configuration in which (1) exactly one node is elected as leader and (2) this node remains the unique leader no matter what sequence of interactions follows. On cliques, the complexity of this problem has recently been settled: time-optimal protocols stabilize in Θ(n log n) expected steps using Θ(log log n) states, whereas protocols that use O(1) states require Θ(n^2) expected steps. In this work, we investigate the complexity of stable leader election on graphs. We provide the first non-trivial time lower bounds on general graphs, showing that, when moving beyond cliques, the complexity of stable leader election can range from O(1) to Θ(n^3) expected steps. We describe a protocol that is time-optimal on many graph families but uses polynomially many states. In contrast, we give a near-time-optimal protocol that uses only O(log^2 n) states and is at most a factor O(log n) slower. Finally, we observe that for many graphs the constant-state protocol of Beauquier et al. [OPODIS 2013] is at most a factor O(n log n) slower than the fast polynomial-state protocol, and among constant-state protocols, this protocol has near-optimal average-case complexity on dense random graphs.
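For intuition, the folklore one-bit protocol (every node starts as a leader; when two leaders meet, one demotes itself) is easy to simulate under the uniform random scheduler on a clique. This is the simple O(1)-state, Θ(n^2)-expected-step baseline mentioned above, not one of the paper's protocols:

```python
import random

def elect_leader(n, seed=0):
    """Simulate the one-bit leader-election protocol on a clique:
    all nodes start as leaders; whenever the scheduler picks two
    leaders, one of them demotes itself. Returns (leader, steps)."""
    rng = random.Random(seed)
    leader = [True] * n
    steps = 0
    while sum(leader) > 1:
        u, v = rng.sample(range(n), 2)   # uniform random interaction
        if leader[u] and leader[v]:
            leader[v] = False            # one leader survives
        steps += 1
    return leader.index(True), steps
```

Since each interaction demotes at most one leader, at least n − 1 productive interactions are always needed; the Θ(n^2) expected cost comes from how rarely the scheduler picks two of the few remaining leaders.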
</description>
<pubDate>Fri, 27 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162256</guid>
<dc:date>2025-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>A classical model for semiclassical state-counting</title>
<link>https://hdl.handle.net/1721.1/162255</link>
<description>A classical model for semiclassical state-counting
Sorce, Jonathan
In the type II von Neumann algebras that appear in semiclassical gravity, all states have infinite entropy, but entropy differences are uniquely defined. Akers and I have shown that the entropy difference of microcanonical states has a relative state-counting interpretation in terms of the additional (finite) number of degrees of freedom that are needed to represent the “larger-entropy” state supposing that one already has a representation of the “smaller-entropy” state, and supposing that one is restricted to act with gauge-invariant operators. This short paper explains some of the curious features of relative state-counting by analogy to the classical limit of quantum statistical mechanics. In this analogy the preferred family of renormalized traces becomes the preferred family of symplectic measures on phase space; the trace-index of infinite-dimensional subspaces becomes the ratio of phase space volumes; and the restriction that one must act with gauge-invariant operators becomes the restriction that one must act with symplectomorphisms. Because in the phase-space analogy one has exact control over the quantum deformation away from the classical theory, one can see precisely how the relevant aspects of the classical structure are inherited from the quantum theory — though even in this simple setting, it is a nontrivial technical task to show how classical symplectomorphisms emerge from the underlying quantum theory in the ℏ → 0 limit.
</description>
<pubDate>Tue, 13 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162255</guid>
<dc:date>2025-05-13T00:00:00Z</dc:date>
</item>
<item>
<title>The gravitational path integral from an observer’s point of view</title>
<link>https://hdl.handle.net/1721.1/162254</link>
<description>The gravitational path integral from an observer’s point of view
Abdalla, Ahmed I.; Antonini, Stefano; Iliesiu, Luca V.; Levine, Adam
One of the fundamental problems in quantum gravity is to describe the experience of a gravitating observer in generic spacetimes. In this paper, we develop a framework for describing non-perturbative physics relative to an observer using the gravitational path integral. We apply our proposal to an observer that lives in a closed universe and one that falls behind a black hole horizon. We find that the Hilbert space that describes the experience of the observer is much larger than the Hilbert space in the absence of an observer. In the case of closed universes, the Hilbert space is not one-dimensional, as calculations in the absence of the observer suggest. Rather, its dimension scales exponentially with G_N^{-1}. Similarly, from an observer’s perspective, the dimension of the Hilbert space in a two-sided black hole is increased. We compute various observables probing the experience of a gravitating observer in this Hilbert space. We find that an observer experiences non-trivial physics in the closed universe, in contrast to what it would see in a one-dimensional Hilbert space. In the two-sided black hole setting, our proposal implies that non-perturbative corrections to effective field theory for an infalling observer are suppressed until times exponential in the black hole entropy, resolving a recently-raised puzzle in black hole physics. While the framework that we develop is exemplified in the toy model of JT gravity, most of our analysis can be extended to higher dimensions and, in particular, to generic spacetimes not admitting a conventional holographic description, such as cosmological universes or black hole interiors.
</description>
<pubDate>Wed, 07 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162254</guid>
<dc:date>2025-05-07T00:00:00Z</dc:date>
</item>
<item>
<title>Deformations of acid-mediated invasive tumors in a model with Allee effect</title>
<link>https://hdl.handle.net/1721.1/162249</link>
<description>Deformations of acid-mediated invasive tumors in a model with Allee effect
Carter, Paul; Doelman, Arjen; van Heijster, Peter; Levy, Daniel; Maini, Philip; Okey, Erin; Yeung, Paige
We consider a Gatenby–Gawlinski-type model of invasive tumors in the presence of an Allee effect. We describe the construction of bistable one-dimensional traveling fronts using singular perturbation techniques in different parameter regimes corresponding to tumor interfaces with, or without, an acellular gap. By extending the front as a planar interface, we perform a stability analysis with respect to long-wavelength perturbations transverse to the direction of front propagation and derive a simple stability criterion for the front in two spatial dimensions. In particular, we find that, in general, the presence of the acellular gap indicates transversal instability of the associated planar front, which can lead to complex interfacial dynamics such as the development of finger-like protrusions and/or different invasion speeds.
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162249</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Wormholes and factorization in exact effective theory</title>
<link>https://hdl.handle.net/1721.1/162248</link>
<description>Wormholes and factorization in exact effective theory
Hernández-Cuenca, Sergio
We study the general framework of effective theories obtained via exact path integration of a complete theory over some sector of its degrees of freedom. Theories constructed this way contain multi-integrals which couple fields arbitrarily far apart, and in certain settings even on path-disconnected components of the space. These are not just entanglement, but genuine non-local interactions that we dub quantum wormholes. Any state the path integral of such an effective theory prepares is shown to be a partial trace of a state of the complete theory over the integrated-out sector. The resulting reduced density operator is generally mixed due to bra-ket wormholes. An infinite family of ensembles of pure states of the complete theory giving the same effective state is identified. These allow one to equivalently interpret any effective state as being prepared by an ensemble of theories. When computing entropic quantities, bra-ket wormholes give rise to replica wormholes. This causes replica path integrals for the effective theory to not factorize even when the underlying manifold does, as expected from mixing. In contrast, effective theories obtained by derivative expansions have no quantum wormholes and prepare pure states. There exist operators in the algebra of effective theories which can distinguish mixed from pure states, implying a breakdown of non-exact effective theories for sufficiently complex observables. This framework unifies and provides new insights into much of the phenomena observed in quantum gravity, including the interplay between wormholes and unitarity, the breakdown of bulk effective theory, the factorization puzzle, state ensembles, theory ensembles, quantum error correction, and baby universes. Some interesting lessons are drawn accounting also for characteristic aspects of gravity concerning IR/UV mixing and Kaluza-Klein reductions.
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162248</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the multiplicity dependence of Υ production ratios in pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162247</link>
<description>Measurement of the multiplicity dependence of Υ production ratios in pp collisions at √s = 13 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The Υ(2S) and Υ(3S) production cross-sections are measured relative to that of the Υ(1S) meson, as a function of charged-particle multiplicity in proton-proton collisions at a centre-of-mass energy of 13 TeV. The measurement uses data collected by the LHCb experiment in 2018 corresponding to an integrated luminosity of 2 fb−1. Both the Υ(2S)-to-Υ(1S) and Υ(3S)-to-Υ(1S) cross-section ratios are found to decrease significantly as a function of event multiplicity, with the Υ(3S)-to-Υ(1S) ratio showing a steeper decline towards high multiplicity. This hierarchy is qualitatively consistent with the comover model predictions, indicating that final-state interactions play an important role in bottomonia production in high-multiplicity events.
</description>
<pubDate>Fri, 02 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162247</guid>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>New physics at the Muon (Synchrotron) Ion Collider: MuSIC for several scales</title>
<link>https://hdl.handle.net/1721.1/162246</link>
<description>New physics at the Muon (Synchrotron) Ion Collider: MuSIC for several scales
Davoudiasl, Hooman; Liu, Hongkai; Marcarelli, Roman; Soreq, Yotam; Trifinopoulos, Sokratis
A Muon (Synchrotron) Ion Collider (MuSIC) can be the successor to the Electron-Ion Collider at Brookhaven National Laboratory, as well as the ideal demonstrator facility for a future multi-TeV Muon Collider. Besides its rich nuclear physics and Standard Model particle physics programs, in this work we show that the MuSIC with a TeV-scale muon beam also offers a unique opportunity to probe New Physics. In particular, the relevant searches have the potential to surpass current experimental limits and explore new regimes of parameter space for a variety of Beyond the Standard Model scenarios, including lepton-flavor-violating leptoquarks, muonphilic vector boson interactions, axion-like particles coupling to photons, and heavy sterile neutrinos. Depending on the particular case, the sensitivity of the searches at the MuSIC may span a wide range of energy scales, from sub-GeV particles to few-TeV New Physics mediators. Our analysis demonstrates that the MuSIC can strike a powerful chord in the search for New Physics, thanks to a unique combination of features that amplify its capabilities.
</description>
<pubDate>Fri, 07 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162246</guid>
<dc:date>2025-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>Search for high-mass resonances in a final state comprising a gluon and two hadronically decaying W bosons in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162245</link>
<description>Search for high-mass resonances in a final state comprising a gluon and two hadronically decaying W bosons in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
A search for high-mass resonances decaying into a gluon, g, and two W bosons is presented. A Kaluza-Klein gluon, gKK, decaying in cascade via a scalar radion R, gKK → gR → gWW, is considered. The final state studied consists of three large-radius jets, two of which contain the products of hadronically decaying W bosons, and the third one the hadronization products of the gluon. The analysis is performed using proton-proton collision data at √s = 13 TeV collected by the CMS experiment at the CERN LHC during 2016–2018, corresponding to an integrated luminosity of 138 fb−1. The masses of the gKK and R candidates are reconstructed as trijet and dijet masses, respectively. These are used for event categorization and signal extraction. No excess of data events above the standard model background expectation is observed. Upper limits are set on the product of the gKK production cross section and its branching fraction via a radion R to gWW. This is the first analysis examining the resonant WW+jet signature and setting limits on the two resonance masses in an extended warped extra-dimensional model.
</description>
<pubDate>Thu, 27 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162245</guid>
<dc:date>2025-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>Porous Polymeric Electrodes for Electrochemical Carbon Dioxide Capture</title>
<link>https://hdl.handle.net/1721.1/162244</link>
<description>Porous Polymeric Electrodes for Electrochemical Carbon Dioxide Capture
Guo, Youhong; Massen‐Hane, Michael; Endy, Grace; Hatton, T Alan
Carbon capture is a promising technology to mitigate greenhouse gas emissions to achieve net carbon neutrality. Electro‐swing reactive adsorption has emerged as an attractive approach for sustainable decarbonization. However, current electrodes with limited gas transport present a major barrier that hinders their practical implementation. Herein, porous polymeric electrodes are developed to effectively enhance CO2 transport without the need for additional gas diffusion conduits. Such all‐in‐one porous electrodes also enable more accessible redox-active sites (e.g., quinones) for CO2 sorption, leading to an increased materials utilization efficiency of ≈90%. A continuous flow‐through carbon capture and release operation with high Faradaic efficiency and excellent stability under practical working conditions is further demonstrated. Together with low cost and robust mechanical properties, the as‐developed porous polymeric electrodes highlight the potential to advance the future implementation of electrochemical separation technologies.
</description>
<pubDate>Tue, 20 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162244</guid>
<dc:date>2024-08-20T00:00:00Z</dc:date>
</item>
<item>
<title>Multifunctional polymeric guanidine and hydantoin halamines with broad biocidal activity</title>
<link>https://hdl.handle.net/1721.1/162223</link>
<description>Multifunctional polymeric guanidine and hydantoin halamines with broad biocidal activity
Bromberg, Lev; Magariños, Beatriz; Torres, Beatriz S; Santos, Ysabel; Concheiro, Angel; Hatton, T Alan; Alvarez-Lorenzo, Carmen
Prolonged and excessive use of biocides during the coronavirus disease era calls for incorporating new antiviral polymers that enhance the surface design and functionality for existing and potential future pandemics. Herein, we investigated previously unexplored polyamines with nucleophilic biguanide, guanidine, and hydantoin groups, all of which can be halogenated, leading to high contents of oxidizing halogen that enhance the biocidal activity. Primary amino groups can be used to attach poly(N-vinylguanidine) (PVG) and poly(allylamine-co-4-aminopyridine-co-5-(4-hydroxybenzylidene)hydantoin) (PAH), as well as a broad-spectrum commercial biocide, poly(hexamethylene biguanide) (PHMB), onto a solid support. Halogenation of polymer suspensions was conducted through in situ generation of excess hypobromous acid (HBrO) from bromine and sodium hydroxide, or by sodium hypochlorite in aqueous solutions, resulting in N-halamines with high contents of active &gt;N-Br or &gt;N-Cl groups. The virucidal activity of the polymers against the human respiratory coronavirus HCoV-229E increased dramatically upon halogenation. Brominated PHMB-Br showed an activity value &gt; 5 even at 1 mg/L, and complete virus inhibition was observed with either PHMB-Br or PAH-Br at 10 mg/mL. Brominated PVG-Br and PAH-Br possessed fungicidal activity against C. albicans, while PHMB was fungistatic. The PHMB, PHMB-Br, and PAH polymers demonstrated excellent bactericidal activity against methicillin-resistant S. aureus and vancomycin-resistant E. faecium. Brominated polymers (PHMB-Br, PVG-Br, PAH-Br) were not toxic to HeLa monolayers, indicating acceptable biocompatibility with cultured human cells. With these features, the N-halamine polymers of the present study are a worthwhile addition to the arsenal of biocides and are promising candidates for the development of non-leaching coatings.
</description>
<pubDate>Thu, 15 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162223</guid>
<dc:date>2024-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>Nonleaching Biocidal N-Halamine-Functionalized Polyamine-, Guanidine-, and Hydantoin-Based Coatings</title>
<link>https://hdl.handle.net/1721.1/162222</link>
<description>Nonleaching Biocidal N-Halamine-Functionalized Polyamine-, Guanidine-, and Hydantoin-Based Coatings
Bromberg, Lev; Magariños, Beatriz; Concheiro, Angel; Hatton, T Alan; Alvarez-Lorenzo, Carmen
Fibrous materials with inherent antimicrobial properties can help in real-time deactivation of microorganisms, enabling multiple uses while reducing secondary infections. Coatings with antiviral polymers enhance the surface functionality for existing and potential future pandemics. Herein, we demonstrated a straightforward route toward biocidal surface creation using polymers with nucleophilic biguanide, guanidine, and hydantoin groups that are covalently attached onto a solid support. Biocidal poly(N-vinylguanidine) (PVG) and poly(allylamine-co-4-aminopyridine-co-5-(4-hydroxybenzylidene)hydantoin) (PAH) were introduced for coating applications along with commercially available polyvinylamine (PVAm) and poly(hexamethylene biguanide) (PHMB). Nonleaching coatings were created by first fabricating bifunctional siloxane or isocyanate precursor coatings on the cotton, nylon-cotton, and glass fiber fabric, followed by the polymer attachment. The developed grafting methods ensured the stability of the coating and the reuse of the material while maintaining the biocidal properties. Halogenation of polymer-coated fabric was conducted by aqueous solutions of sodium hypochlorite or in situ generation of hypobromous acid (HOBr), resulting in surfaces coated by N-halamines with high contents of active &gt;N-Cl or &gt;N-Br groups. The polymer-coated fabrics were stable in multiple laundry cycles and maintained hydrophilic character after coating and halogenation. Halogenated polymer-coated fabrics completely inactivated human respiratory coronavirus based on a contact-killing mechanism and were shown to be reusable after recharging with bromine or chlorine.
</description>
<pubDate>Wed, 27 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162222</guid>
<dc:date>2024-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Bottom quark energy loss and hadronization with B+ and B_s^0 nuclear modification factors using pp and PbPb collisions at √s_NN = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/162221</link>
<description>Bottom quark energy loss and hadronization with B+ and B_s^0 nuclear modification factors using pp and PbPb collisions at √s_NN = 5.02 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
The production cross sections of B_s^0 and B+ mesons are reported in proton-proton (pp) collisions recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 5.02 TeV. The data sample corresponds to an integrated luminosity of 302 pb−1. The cross sections are based on measurements of the B_s^0 → J/ψ(μ+μ−)ϕ(1020)(K+K−) and B+ → J/ψ(μ+μ−)K+ decay channels. Results are presented in the transverse momentum (pT) range 7–50 GeV/c and the rapidity interval |y| &lt; 2.4 for the B mesons. The measured pT-differential cross sections of B+ and B_s^0 in pp collisions are well described by fixed-order plus next-to-leading logarithm perturbative quantum chromodynamics calculations. Using previous PbPb collision measurements at the same nucleon-nucleon center-of-mass energy, the nuclear modification factors, RAA, of the B mesons are determined. For pT &gt; 10 GeV/c, both mesons are found to be suppressed in PbPb collisions (with RAA values significantly below unity), with less suppression observed for the B_s^0 mesons. In this pT range, the RAA values for the B+ mesons are consistent with those for inclusive charged hadrons and D0 mesons. Below 10 GeV/c, both B+ and B_s^0 are found to be less suppressed than either inclusive charged hadrons or D0 mesons, with the B_s^0 RAA value consistent with unity. The RAA values found for the B+ and B_s^0 are compared to theoretical calculations, providing constraints on the mechanism of bottom quark energy loss and hadronization in the quark-gluon plasma, the hot and dense matter created in ultrarelativistic heavy ion collisions.
</description>
<pubDate>Thu, 27 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162221</guid>
<dc:date>2025-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on the photon polarisation in b → sγ transitions using B_s^0 → ϕe+e− decays</title>
<link>https://hdl.handle.net/1721.1/162220</link>
<description>Constraints on the photon polarisation in b → sγ transitions using B_s^0 → ϕe+e− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
An angular analysis of the B_s^0 → ϕe+e− decay is performed using the proton-proton collision dataset collected between 2011 and 2018 by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1 at centre-of-mass energies of 7, 8 and 13 TeV. The analysis is performed in the very low dielectron invariant mass-squared region between 0.0009 and 0.2615 GeV2/c4. The longitudinal polarisation fraction of the ϕ meson is measured to be less than 11.5% at 90% confidence level. The A_T^{ReCP} observable, which is related to the lepton forward-backward asymmetry, is measured to be 0.116 ± 0.155 ± 0.006, where the first uncertainty is statistical and the second systematic. The transverse asymmetries, A_T^2 and A_T^{ImCP}, which are sensitive to the virtual photon polarisation, are found to be −0.045 ± 0.235 ± 0.014 and 0.002 ± 0.247 ± 0.016, respectively. The results are consistent with Standard Model predictions.
</description>
<pubDate>Fri, 07 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162220</guid>
<dc:date>2025-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Machine Learning Models in Predicting Intensive Care Unit Discharge for Neurosurgical Patients Undergoing Craniotomy: A Big Data Analysis</title>
<link>https://hdl.handle.net/1721.1/162219</link>
<description>Evaluating the Machine Learning Models in Predicting Intensive Care Unit Discharge for Neurosurgical Patients Undergoing Craniotomy: A Big Data Analysis
Khaniyev, Taghi; Cekic, Efecan; Koc, Muhammet A.; Dogan, Ilke; Hanalioglu, Sahin
Background Predicting intensive care unit (ICU) discharge for neurosurgical patients is crucial for optimizing bed resources, reducing costs, and improving outcomes. Our study aims to develop and validate machine learning (ML) models to predict ICU discharge within 24 h for patients undergoing craniotomy. Methods A total of 2,742 patients undergoing craniotomy were identified from the Medical Information Mart for Intensive Care dataset using diagnosis-related group and International Classification of Diseases codes. Demographic, clinical, laboratory, and radiological data were collected and preprocessed. Textual clinical examinations were converted into numerical scales. Data were split into training (70%), validation (15%), and test (15%) sets. Four ML models, logistic regression (LR), decision tree, random forest, and neural network (NN), were trained and evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), average precision (AP), accuracy, and F1 scores. Shapley Additive Explanations (SHAP) were used to analyze feature importance. Statistical analyses were performed using R (version 4.2.1) and ML analyses with Python (version 3.8), using the scikit-learn, tensorflow, and shap packages. Results The cohort included 2,742 patients (mean age 58.2 years; first and third quartiles 47–70 years), 53.4% of whom were male (n = 1,464). The total ICU stay was 15,645 bed days (mean length of stay 4.7 days), and the total hospital stay was 32,008 bed days (mean length of stay 10.8 days). Random forest demonstrated the highest performance (AUC 0.831, AP 0.561, accuracy 0.827, F1-score 0.339) on the test set. NN achieved an AUC of 0.824, with an AP, accuracy, and F1-score of 0.558, 0.830, and 0.383, respectively. LR achieved an AUC of 0.821 and an accuracy of 0.829. The decision tree model showed the lowest performance (AUC 0.813, accuracy 0.822).
Key predictors in the SHAP analysis included the Glasgow Coma Scale, respiratory-related parameters (i.e., tidal volume, respiratory effort), intracranial pressure, arterial pH, and the Richmond Agitation-Sedation Scale. Conclusions Random forest and NN predict ICU discharge well, whereas LR is interpretable but less accurate. Numeric conversion of clinical data improved performance. This study offers a framework for predictions using clinical, radiological, and demographic features, with SHAP enhancing transparency.
</description>
<pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162219</guid>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-objective Optimization of the Process Design in HSLA Steels Using Physically Based Mean Field Modeling</title>
<link>https://hdl.handle.net/1721.1/162218</link>
<description>Multi-objective Optimization of the Process Design in HSLA Steels Using Physically Based Mean Field Modeling
Tzini, M. I. T.; Haidemenopoulos, G. N.; Thøgersen, A.; Diplas, S.
A multi-objective optimization approach is presented for the process design of an X70 HSLA steel plate using a physically based mean field (PBMF) model. The PBMF model incorporates an integrated precipitation and recrystallization model to describe the microstructural evolution due to the interaction of strain-induced precipitation of niobium and titanium carbonitrides and static recrystallization of austenite. An integer multi-objective optimization problem is formulated, and the non-dominated sorting genetic algorithm (NSGA-II) is applied to the PBMF model to determine a list of optimal processing routes that satisfy specific process design constraints. Aiming to increase the strength and fracture toughness of the material, a trade-off between three microstructural objectives is considered: the average ferrite grain size after accelerated cooling, the niobium content in solution, and the residual strain at the end of multipass hot rolling. Higher residual strains and lower niobium in solution, due to a higher degree of strain-induced precipitation, result in smaller average ferrite grain sizes. An optimal processing route was selected, and the microstructure and precipitation state of the processed material were characterized experimentally, revealing good agreement with model predictions. The proposed approach can contribute to the process design of HSLA and microalloyed steels.
</description>
<pubDate>Fri, 21 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162218</guid>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>A symplectic viewpoint on Anosov flows</title>
<link>https://hdl.handle.net/1721.1/162217</link>
<description>A symplectic viewpoint on Anosov flows
Massoni, Thomas
This survey explores the geometry of three-dimensional Anosov flows from the perspective of contact and symplectic geometry, following the work of Mitsumatsu, Eliashberg–Thurston, Hozoori, and the author. We also present a few original results and discuss various open questions and conjectures.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162217</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>MapTune: Advancing ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning</title>
<link>https://hdl.handle.net/1721.1/162216</link>
<description>MapTune: Advancing ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning
Liu, Mingju; Robinson, Daniel; Li, Yingjie; Yu, Cunxi
Technology mapping involves mapping logical circuits to a library of cells. Traditionally, the full technology library is used, leading to a large search space and potential overhead. Motivated by randomly sampled technology mapping case studies, we propose the MapTune framework, which addresses this challenge by utilizing reinforcement learning to make design-specific choices during cell selection. By learning from the environment, MapTune refines the cell selection process, resulting in a reduced search space and potentially improved mapping quality.&#13;
The effectiveness of MapTune is evaluated on a wide range of benchmarks, technology libraries, and technology mappers. The experimental results demonstrate that MapTune achieves higher mapping accuracy and reduces delay/area across diverse circuit designs, technology libraries, and mappers. The paper also discusses Pareto-optimal exploration and confirms the perpetual delay-area trade-off. Conducted on the ISCAS 85/89, ITC/ISCAS 99, VTR 8.0, and EPFL benchmark suites, the post-technology-mapping and post-sizing quality-of-results (QoR) are significantly improved, with an average Area-Delay Product (ADP) improvement of 22.54% across all exploration settings in MapTune. The improvements remain consistent across four different technologies (7 nm, 45 nm, 130 nm, and 180 nm) and two different mappers.
ICCAD ’24, October 27–31, 2024, Newark, NJ, USA
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162216</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>KirchhoffNet: A Scalable Ultra Fast Analog Neural Network</title>
<link>https://hdl.handle.net/1721.1/162215</link>
<description>KirchhoffNet: A Scalable Ultra Fast Analog Neural Network
Gao, Zhengqi; Sun, Fan-keng; Rohrer, Ron; Boning, Duane
In this paper, we leverage a foundational principle of analog electronic circuitry, Kirchhoff's current and voltage laws, to introduce a distinctive class of neural network models termed KirchhoffNet. Essentially, KirchhoffNet is an analog circuit that can function as a neural network, utilizing its initial node voltages as the neural network input and the node voltages at a specific time point as the output. The evolution of node voltages within the specified time is dictated by learnable parameters on the edges connecting nodes. We demonstrate that KirchhoffNet is governed by a set of ordinary differential equations (ODEs), and notably, even in the absence of traditional layers (such as convolution layers), it attains state-of-the-art performance across diverse and complex machine learning tasks. Most importantly, KirchhoffNet can potentially be implemented as a low-power analog integrated circuit, leading to an appealing property --- irrespective of the number of parameters within a KirchhoffNet, its on-chip forward calculation can always be completed within a short time. This characteristic makes KirchhoffNet a promising and fundamental paradigm for implementing large-scale neural networks, opening a new avenue in analog neural networks for AI. Our source code and model checkpoints are publicly available: https://github.com/zhengqigao/kirchhoffnet.
ICCAD ’24, October 27–31, 2024, Newark, NJ, USA
</description>
<pubDate>Wed, 09 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162215</guid>
<dc:date>2024-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Student Research Abstract: Evaluating Dialogue Summarization Using LLMs</title>
<link>https://hdl.handle.net/1721.1/162214</link>
<description>Student Research Abstract: Evaluating Dialogue Summarization Using LLMs
Wang, Alison
With the surge in audio data available today, there is a growing need for effective dialogue summarization. This study conducts two experiments using two LLMs, BART and Mistral, to assess dialogue summarization. The first experiment evaluates model performance, while the second examines the impact of upstream errors from Automatic Speech Recognition (ASR) and Machine Translation (MT) on summarization performance. Results indicate that SummaC, a commonly used evaluation metric, is unreliable for dialogue summarization. Additionally, Mistral's summarization performance is more sensitive to upstream errors than BART's.
SAC ’25, March 31–April 4, 2025, Catania, Italy
</description>
<pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162214</guid>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Induced Gradients in Steady, Two-Dimensional Heat Conduction? Yes, But…</title>
<link>https://hdl.handle.net/1721.1/162213</link>
<description>Induced Gradients in Steady, Two-Dimensional Heat Conduction? Yes, But…
Lienhard, John H.
A two-dimensional object conducts heat steadily between isothermal segments of its boundary that are at two different temperatures, with the heat flow occurring either through the object or through the region surrounding it. In classical potential theory, the isothermal surfaces are represented by source distributions, and the adiabatic surfaces that separate them are represented by dipole distributions. Sources or dipoles at one location can induce a temperature gradient at another location on an isothermal surface. This induced gradient adds to the gradient produced by a source at that location. In this paper, induced gradients are shown to produce zero net power in objects that have appropriate geometrical symmetry but not in objects that lack symmetry. Further, unpowered conductors within the domain (so-called floating conductors) are shown to have a nonzero induced source density that integrates to zero over the surface of the conductor. These results differ from those of a previous study of such configurations.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162213</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>On the Nusselt Number for Thermally Developed Flow Between Isothermal Parallel Plates With Dissipation</title>
<link>https://hdl.handle.net/1721.1/162212</link>
<description>On the Nusselt Number for Thermally Developed Flow Between Isothermal Parallel Plates With Dissipation
Lienhard, John H.
A recent paper announced that most textbooks, graduate and undergraduate, present an incorrect value for the Nusselt number in thermally developed laminar flow between isothermal parallel plates. The stated cause is flow work and/or dissipation that acts as a persistent source of heat generation in the channel. The purpose of this paper is to rehabilitate the textbook literature. I show that the commonly reported value of the thermally developed Nusselt number, 7.541, is quite acceptable for commonly encountered situations. In particular, for ideal gases, the wall heat flux is predicted exactly without accounting for these effects because they cancel one another. For liquids, I derive the channel length within which dissipation makes a negligible contribution to heat flux. This length will often span the entire range within which the bulk temperature changes in response to a wall temperature change. The residual bulk temperature rise from dissipation can amount to only millikelvins. The Nusselt number following a change in wall temperature should be calculated after separating the temperature rise and heat flux caused by dissipation. Failure to do so gives a Nusselt number that can be zero, negative, or singular. The effects of dissipation on flux and temperature can be added to the ordinary Graetz solution if they are not negligible. The present results show that the Seban–Shimazaki criterion for thermally developed flow is misleading when dissipation is considered. Instead, the flow may be called thermally developed when the Graetz series is well approximated by its first term.
</description>
<pubDate>Mon, 04 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162212</guid>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>CodingGenie: A Proactive LLM-Powered Programming Assistant</title>
<link>https://hdl.handle.net/1721.1/162202</link>
<description>CodingGenie: A Proactive LLM-Powered Programming Assistant
Zhao, Sebastian; Zhu, Alan; Mozannar, Hussein; Sontag, David; Talwalkar, Ameet; Chen, Valerie
While developers increasingly adopt tools powered by large language models (LLMs) in day-to-day workflows, these tools still require explicit user invocation. To seamlessly integrate LLM capabilities into a developer's workflow, we introduce CodingGenie, a proactive assistant integrated into the code editor. CodingGenie autonomously provides suggestions, ranging from bug fixing to unit testing, based on the current code context, and allows users to customize suggestions by providing a task description and selecting which suggestions are shown. We demonstrate multiple use cases to show how proactive suggestions from CodingGenie can improve developer experience, and also analyze the cost of adding proactivity. We believe this open-source tool will enable further research into proactive assistants. CodingGenie is open-sourced at https://github.com/sebzhao/CodingGenie/ and video demos are available at https://sebzhao.github.io/CodingGenie/.
FSE Companion ’25, June 23–28, 2025, Trondheim, Norway
</description>
<pubDate>Mon, 28 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162202</guid>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Moderating Large Scale Online Deliberative Processes with Large Language Models (LLMs): Enhancing Collective Decision-Making.</title>
<link>https://hdl.handle.net/1721.1/162200</link>
<description>Moderating Large Scale Online Deliberative Processes with Large Language Models (LLMs): Enhancing Collective Decision-Making.
Babatunde, Ibukun; Nnanna, Obiabuchi; Klein, Mark
This study investigates the use of LLMs, specifically ChatGPT-4o, to enhance the moderation of online deliberative processes. Traditionally, decision-making has been controlled by small groups, often excluding the vital insights that crowd intelligence can provide. As global challenges grow more complex, broader and more inclusive participation is essential. While online platforms allow for such large-scale participation, they also face significant issues, including content fragmentation, low signal-to-noise ratios, and inefficient argumentation. Human moderators can address these challenges, but scaling human moderation is prohibitively costly. This research introduces a more scalable solution by leveraging LLMs to automate critical moderation tasks, including unbundling multiple ideas, categorizing them into solutions, metrics, and barriers, and implementing efficient argument mining and classification techniques. Additionally, it evaluates the effectiveness of different prompting styles in optimizing moderation. The findings demonstrate that LLMs can successfully moderate key aspects of large-scale online deliberations, such as unbundling and categorization, improving the structure of discussions and representing a significant step forward in collective decision-making.
SAC ’25, March 31–April 4, 2025, Catania, Italy
</description>
<pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162200</guid>
<dc:date>2025-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Analyzing the Role of Emotions on Performance in Hybrid Capstone Projects</title>
<link>https://hdl.handle.net/1721.1/162199</link>
<description>Analyzing the Role of Emotions on Performance in Hybrid Capstone Projects
Awan, Wardah Naeem; Salman, Iflaah; Paasivaara, Maria; Gloor, Peter A.
In hybrid project-based learning environments, where students collaborate both in-person and remotely to perform shared tasks, they often experience a wide range of emotions. These emotions significantly influence their performance and collaboration as they navigate team dynamics. This study aimed to investigate the emotions expressed during Scrum retrospective meetings, explore their underlying triggers, and examine their relationship with perceived performance. We conducted our study with 24 participants, divided into three teams, enrolled in a hybrid capstone course. Using a BERT model fine-tuned on the GoEmotions dataset, we analyzed retrospective meeting transcripts and open-ended survey responses to identify emotions. Quantitative analysis assessed the relationship between these emotions and self-reported performance metrics collected through survey responses. To further understand the emotional context, thematic analysis was performed to identify events that triggered these emotions. We observed (1) approval, admiration, and curiosity as the most prevalent emotions expressed during retrospective meetings; (2) significant correlations between positive emotions and perceived performance; and (3) milestone achievements, collaboration, and communication and coordination issues as common emotional triggers. Our findings emphasize the role of emotions in relation to performance and collaboration in hybrid learning environments. These findings offer insights into emotional dynamics for designing more effective and supportive hybrid project-based learning environments.
FSE Companion ’25, Trondheim, Norway
</description>
<pubDate>Mon, 28 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162199</guid>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the inclusive cross sections for W and Z boson production in proton-proton collisions at √s = 5.02 and 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162198</link>
<description>Measurement of the inclusive cross sections for W and Z boson production in proton-proton collisions at √s = 5.02 and 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.
Measurements of fiducial and total inclusive cross sections for W and Z boson production are presented in proton-proton collisions at √s = 5.02 and 13 TeV. Electron and muon decay modes (ℓ = e or μ) are studied in the data collected with the CMS detector in 2017, in dedicated runs with reduced instantaneous luminosity. The data sets correspond to integrated luminosities of 298 ± 6 pb−1 at 5.02 TeV and 206 ± 5 pb−1 at 13 TeV. Measured values of the products of the total inclusive cross sections and the branching fractions at 5.02 TeV are σ(pp → W + X) B (W → ℓν) = 7300 ± 10 (stat) ± 60 (syst) ± 140 (lumi) pb, and σ(pp → Z+X) B (Z → ℓ+ℓ−) = 669 ± 2 (stat) ± 6 (syst) ± 13 (lumi) pb for the dilepton invariant mass in the range of 60–120 GeV. The corresponding results at 13 TeV are 20480 ± 10 (stat) ± 170 (syst) ± 470 (lumi) pb and 1952 ± 4 (stat) ± 18 (syst) ± 45 (lumi) pb. The measured values agree with cross section calculations at next-to-next-to-leading-order in perturbative quantum chromodynamics. Fiducial and total inclusive cross sections, ratios of cross sections of W+ and W− production as well as inclusive W and Z boson production, and ratios of these measurements at 5.02 and 13 TeV are reported.
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162198</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the inclusive WZ production cross section in pp collisions at √s = 13.6 TeV</title>
<link>https://hdl.handle.net/1721.1/162197</link>
<description>Measurement of the inclusive WZ production cross section in pp collisions at √s = 13.6 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
The inclusive WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy of 13.6 TeV, using data collected during 2022 with the CMS detector, corresponding to an integrated luminosity of 34.7 fb−1. The measurement uses multileptonic final states and a simultaneous likelihood fit to the number of events in four different lepton flavour categories: eee, eeμ, μμe, and μμμ. The selection is optimized to minimize the number of background events, and relies on an efficient prompt lepton discrimination strategy. The WZ production cross section is measured in a phase space defined within a 30 GeV window around the Z boson mass, as σ_total(pp → WZ) = 55.2 ± 1.2 (stat) ± 1.2 (syst) ± 0.8 (lumi) ± 0.3 (theo) pb. In addition, the cross section is measured in a fiducial phase space closer to the detector-level requirements. All the measurements presented in this paper are in agreement with standard model predictions.
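For readers wanting a single overall uncertainty on the quoted cross section, adding the independent (stat), (syst), (lumi), and (theo) components in quadrature is the usual rule for independent error sources; a quick check (the paper itself quotes the components separately):

```python
# Combine the quoted WZ cross-section uncertainties in quadrature --
# the standard rule for independent error sources (illustrative).
import math

sigma = 55.2  # pb, measured cross section
errors = {"stat": 1.2, "syst": 1.2, "lumi": 0.8, "theo": 0.3}  # pb
total = math.sqrt(sum(e * e for e in errors.values()))
# total uncertainty = 1.9 pb, i.e. about 3.4% of the measured value
```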
</description>
<pubDate>Wed, 16 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162197</guid>
<dc:date>2025-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the inclusive t t ¯ cross section in final states with at least one lepton and additional jets with 302 pb−1 of pp collisions at s = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/162196</link>
<description>Measurement of the inclusive t t ¯ cross section in final states with at least one lepton and additional jets with 302 pb−1 of pp collisions at s = 5.02 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
A measurement of the top quark pair (tt̄) production cross section in proton-proton collisions at a centre-of-mass energy of 5.02 TeV is presented. The data were collected at the LHC in autumn 2017, in dedicated runs with low-energy and low-intensity conditions with respect to the default configuration, and correspond to an integrated luminosity of 302 pb−1. The measurement is performed using events with one electron or muon, and multiple jets, at least one of them being identified as originating from a b quark (b tagged). Events are classified based on the number of all reconstructed jets and of b-tagged jets. Multivariate analysis techniques are used to enhance the separation between the signal and backgrounds. The measured cross section is 62.5 ± 1.6 (stat) +2.6/−2.5 (syst) ± 1.2 (lumi) pb. A combination with the result in the dilepton channel based on the same data set yields a value of 62.3 ± 1.5 (stat) ± 2.4 (syst) ± 1.2 (lumi) pb, to be compared with the standard model prediction of 69.5 +3.5/−3.7 pb at next-to-next-to-leading order in perturbative quantum chromodynamics.
</description>
<pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162196</guid>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Search for heavy long-lived charged particles with large ionization energy loss in proton-proton collisions at s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/162195</link>
<description>Search for heavy long-lived charged particles with large ionization energy loss in proton-proton collisions at s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
A search for heavy, long-lived, charged particles with large ionization energy loss within the silicon tracker of the CMS experiment is presented. A data set of proton-proton collisions at a center-of-mass energy of √s = 13 TeV, collected in 2017 and 2018 at the CERN LHC, corresponding to an integrated luminosity of 101 fb−1, is used in this analysis. Two different approaches for the search are taken. A new method exploits the independence of the silicon pixel and strip measurements, while the second method improves on previous techniques using ionization to determine a mass selection. No significant excess of events above the background expectation is observed. The results are interpreted in the context of the pair production of supersymmetric particles, namely gluinos, top squarks, and tau sleptons, and of the Drell-Yan pair production of fourth-generation (τ′) leptons with an electric charge equal to or twice the absolute value of the electron charge (e). An interpretation of a Z’ boson decaying to two τ′ leptons with an electric charge equal to 2e is presented for the first time. The 95% confidence upper limits on the production cross section are extracted for each of these hypothetical particles.
</description>
<pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162195</guid>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>De Sitter quantum gravity and the emergence of local algebras</title>
<link>https://hdl.handle.net/1721.1/162194</link>
<description>De Sitter quantum gravity and the emergence of local algebras
Kaplan, Molly; Marolf, Donald; Yu, Xuyang; Zhao, Ying
Quantum theories of gravity are generally expected to have some degree of non-locality, with familiar local physics emerging only in a particular limit. Perturbative quantum gravity around backgrounds with isometries and compact Cauchy slices provides an interesting laboratory in which this emergence can be explored. In this context, the remaining isometries are gauge symmetries and, as a result, gauge-invariant observables cannot be localized. Instead, local physics can arise only through certain relational constructions. We explore such issues below for perturbative quantum gravity around de Sitter space. In particular, we describe a class of gauge-invariant observables which, under appropriate conditions, provide good approximations to certain algebras of local fields. Our results suggest that, near any minimal S^d in dS_{d+1}, this approximation can be accurate only over regions in which the corresponding global time coordinate t spans an interval Δt ≲ O(ln G^{-1}). In contrast, however, we find that the approximation can be accurate over arbitrarily large regions of global dS_{d+1} so long as those regions are located far to the future or past of such a minimal S^d. This in particular includes arbitrarily large parts of any static patch.
</description>
<pubDate>Tue, 22 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162194</guid>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Oxygen-Stable Electrochemical CO2 Capture using Redox-Active Heterocyclic Benzodithiophene Quinone</title>
<link>https://hdl.handle.net/1721.1/162193</link>
<description>Oxygen-Stable Electrochemical CO2 Capture using Redox-Active Heterocyclic Benzodithiophene Quinone
Abdinejad, Maryam; Massen‐Hane, Michael; Seo, Hyowon; Hatton, T Alan
Electrochemical carbon capture offers a promising alternative to thermal amine technology, which serves as the traditional benchmark method for CO2 capture. Despite its technological maturity, the widespread deployment of thermal amine technologies is hindered by high energy consumption and sorbent degradation. In contrast, electrochemical methods, with their inherently isothermal operation, address these challenges, offering enhanced energy efficiency and robustness. Among emerging strategies, electrochemical carbon capture systems using redox-active materials such as quinones stand out for their potential to capture CO2. However, their practical application is currently limited by their low stability in the presence of oxygen. We demonstrate that benzodithiophene quinone (BDT-Q), a heterocyclic quinone, exhibits high stability in electrochemical carbon capture processes with oxygen-containing feed gas. Conducted in a cyclic flow system with a simulated flue gas mixture containing 13% CO2 and 3.5% O2 for over 100 hours, the process demonstrates high oxygen stability with an electron utilization of 0.83 without significant degradation, indicating a promising approach for real-world applications. Our study explores the potential of new heterocyclic quinone compounds in the context of carbon capture technologies.
</description>
<pubDate>Mon, 09 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162193</guid>
<dc:date>2024-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>2-Colorable Perfect Matching is NP-complete in 2-Connected 3-Regular Planar Graphs</title>
<link>https://hdl.handle.net/1721.1/162192</link>
<description>2-Colorable Perfect Matching is NP-complete in 2-Connected 3-Regular Planar Graphs
Demaine, Erik D.; Karntikoon, Kritkorn; Pitimanaaree, Nipun
The 2-colorable perfect matching problem asks whether a graph can be colored with two colors so that each node has exactly one neighbor with the same color as itself. We prove that this problem is NP-complete, even when restricted to 2-connected 3-regular planar graphs. In 1978, Schaefer proved that this problem is NP-complete in general graphs, and claimed without proof that the same result holds when restricted to 3-regular planar graphs. Thus we fill in the missing proof of this claim, while simultaneously strengthening to 2-connected graphs (which implies existence of a perfect matching). We also prove NP-completeness of k-colorable perfect matching, for any fixed k ≥ 2.
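The defining property above can be checked directly in linear time; a minimal sketch (the graph, coloring, and function name are illustrative, not from the paper):

```python
# Verify the defining property of a 2-colorable perfect matching:
# every node has exactly one neighbor sharing its own color.
def is_2colorable_perfect_matching(adj, color):
    """adj: dict node -> set of neighbors; color: dict node -> 0 or 1."""
    return all(
        sum(color[u] == color[v] for v in adj[u]) == 1
        for u in adj
    )

# A 4-cycle 0-1-2-3-0: coloring each matched pair identically works.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
good = {0: 0, 1: 0, 2: 1, 3: 1}  # matching edges (0,1) and (2,3)
bad = {0: 0, 1: 0, 2: 0, 3: 0}   # every node has two same-colored neighbors
```

Note that the same-colored edges picked out by a valid coloring form exactly a perfect matching, which is why validity is easy to check even though deciding existence is NP-complete.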
</description>
<pubDate>Sat, 26 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162192</guid>
<dc:date>2025-04-26T00:00:00Z</dc:date>
</item>
<item>
<title>The Landis conjecture on exponential decay</title>
<link>https://hdl.handle.net/1721.1/162191</link>
<description>The Landis conjecture on exponential decay
Logunov, A.; Malinnikova, E.; Nadirashvili, N.; Nazarov, F.
Consider a solution u to Δu + Vu = 0 on ℝ², where V is real-valued, measurable, and |V| ≤ 1. If |u(x)| ≤ exp(−C|x| log^{1/2}|x|) for |x| &gt; 2, where C is a sufficiently large absolute constant, then u ≡ 0.
</description>
<pubDate>Wed, 25 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162191</guid>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>Lower bounds for incidences</title>
<link>https://hdl.handle.net/1721.1/162190</link>
<description>Lower bounds for incidences
Cohen, Alex; Pohoata, Cosmin; Zakharov, Dmitrii
Let p_1, …, p_n be a set of points in the unit square and let T_1, …, T_n be a set of δ-tubes such that T_j passes through p_j. We prove a lower bound for the number of incidences between the points and tubes under a natural regularity condition (similar to Frostman regularity). As a consequence, we show that in any configuration of points p_1, …, p_n ∈ [0,1]² along with a line ℓ_j through each point p_j, there exist j ≠ k for which d(p_j, ℓ_k) ≲ n^{−2/3+o(1)}. It follows from the latter result that any set of n points in the unit square contains three points forming a triangle of area at most n^{−7/6+o(1)}. This new upper bound for Heilbronn’s triangle problem attains the high-low limit established in our previous work arXiv:2305.18253.
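The quantity bounded by n^{−7/6+o(1)} can be made concrete with a brute-force helper (illustrative only, not the paper's proof technique): the smallest triangle area determined by any three of the points, via the shoelace formula.

```python
# Smallest triangle area over all triples of points (shoelace formula);
# Heilbronn's problem asks how large this minimum can be forced to be.
from itertools import combinations

def min_triangle_area(points):
    return min(
        abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
        for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3)
    )
```

For example, the corners (0,0), (1,0), (0,1) give area 0.5, while any three collinear points give 0.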
</description>
<pubDate>Fri, 14 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162190</guid>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Murmurations</title>
<link>https://hdl.handle.net/1721.1/162189</link>
<description>Murmurations
Zubrilina, Nina
We establish the first case of the surprising correlation phenomenon observed in the recent works of He, Lee, Oliver, Pozdnyakov, and Sutherland between Fourier coefficients in families of modular forms and their root numbers. We give a complete description of the resulting correlation functions for holomorphic modular forms of any fixed weight k and examine the asymptotic properties of these functions.
</description>
<pubDate>Wed, 16 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162189</guid>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Gestation, endowments, and knowledge flows around the time of venture creation</title>
<link>https://hdl.handle.net/1721.1/162188</link>
<description>Gestation, endowments, and knowledge flows around the time of venture creation
Coad, Alex; Kato, Masatoshi; Srhoj, Stjepan
Recent developments in venture creation are discussed, moving from a standard model of venture creation as a one-off binary decision (enter or not) to viewing venture creation in terms of knowledge endowments that differ according to the gestation process. We draw on the analogy that healthy newborn babies lose weight in the first few days after birth to investigate how nascent ventures slowly build routines and capabilities while drawing down their initial resource endowments. We critically discuss various themes, such as the paradox of new ventures without DNA (i.e., routines), misunderstandings about lean startups, accelerating vs. delaying a startup, and the timing of birth.
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162188</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Abelian varieties of prescribed order over finite fields</title>
<link>https://hdl.handle.net/1721.1/162187</link>
<description>Abelian varieties of prescribed order over finite fields
van Bommel, Raymond; Costa, Edgar; Li, Wanlin; Poonen, Bjorn; Smith, Alexander
Given a prime power q and n ≫ 1, we prove that every integer in a large subinterval of the Hasse–Weil interval [(q−1)^{2n}, (q+1)^{2n}] is #A(F_q) for some ordinary geometrically simple principally polarized abelian variety A of dimension n over F_q. As a consequence, we generalize a result of Howe and Kedlaya for F_2 to show that for each prime power q, every sufficiently large positive integer is realizable, i.e., #A(F_q) for some abelian variety A over F_q. Our result also improves upon the best known constructions of sequences of simple abelian varieties with point counts towards the extremes of the Hasse–Weil interval. A separate argument determines, for fixed n, the largest subinterval of the Hasse–Weil interval consisting of realizable integers, asymptotically as q → ∞; this gives an asymptotically optimal improvement of a 1998 theorem of DiPippo and Howe. Our methods are effective: we prove that if q ≤ 5, then every positive integer is realizable, and for arbitrary q, every positive integer ≥ q^{3q log q} is realizable.
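For intuition in the simplest case n = 1, where abelian varieties are elliptic curves and the Hasse–Weil interval specializes to [(√q−1)², (√q+1)²], a brute-force point count is easy to run (illustrative only; this is not the paper's construction, and the curve parameters are arbitrary):

```python
# Count points on the elliptic curve y^2 = x^3 + ax + b over F_q (q prime)
# by brute force, and check the count lies in the Hasse-Weil interval.
import math

def count_points(a, b, q):
    # how many y in F_q square to each residue
    squares = {}
    for y in range(q):
        s = y * y % q
        squares[s] = squares.get(s, 0) + 1
    n = 1  # the point at infinity
    for x in range(q):
        n += squares.get((x ** 3 + a * x + b) % q, 0)
    return n

q = 101
n = count_points(2, 3, q)  # arbitrary nonsingular curve over F_101
lo, hi = (math.sqrt(q) - 1) ** 2, (math.sqrt(q) + 1) ** 2
assert lo <= n <= hi  # Hasse's bound, the n = 1 case of Hasse-Weil
```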
</description>
<pubDate>Thu, 06 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162187</guid>
<dc:date>2025-03-06T00:00:00Z</dc:date>
</item>
<item>
<title>Practical incorporation of lifecycle flexibility into products at their early development stage</title>
<link>https://hdl.handle.net/1721.1/162186</link>
<description>Practical incorporation of lifecycle flexibility into products at their early development stage
Moerth-Teo, Oliver; de Neufville, Richard; Suh, Eun S.; Ramsauer, Christian
This article presents a practical design guideline for the incorporation of lifecycle flexibility into products at their early development stage. In a world of constant uncertainty, flexibility to avoid risks and exploit opportunities represents a competitive advantage. While considering the entire lifecycle improves the overall product value, early design considerations enable effective and efficient implementations. To date, literature still lacks practical engineering procedures to design for lifecycle flexibility (DFLF). The innovative design guideline extends the focus from current requirements to future circumstances. Such substantial shifts in engineering practice require the demonstration of support for designers, applicability on products, and benefits for companies. Aiming to better understand the impacts of designing products for lifecycle flexibility, the DFLF guideline was applied to a practical use case. Battery packs represented relevant products due to critical uncertainties and high costs. Experts at a renowned engineering company in the automotive industry provided valuable insights into the design process. Based on a real reference project, they expected reductions of criticality and costs throughout the lifecycle. Therefore, these experts recognized the effects of uncertainty and valued the early incorporation of useful flexibility into products. The application of the DFLF guideline to a realistic use case has demonstrated its support for designers, practical applicability, and benefits for the company. Participating experts stated their intention to apply the DFLF guideline for the design of future battery packs. Since uncertainty affects a wide variety of products, incorporating lifecycle flexibility into them represents an interesting opportunity for further research.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162186</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Fast and accurate Bayesian optimization with pre-trained transformers for constrained engineering problems</title>
<link>https://hdl.handle.net/1721.1/162185</link>
<description>Fast and accurate Bayesian optimization with pre-trained transformers for constrained engineering problems
Yu, Rosen T.; Picard, Cyril; Ahmed, Faez
Bayesian Optimization (BO) is a foundational strategy in engineering design optimization for efficiently handling black-box functions with many constraints and expensive evaluations. This paper introduces a novel constraint-handling framework for BO using Prior-data Fitted Networks (PFNs), a foundation transformer model. Unlike traditional approaches requiring separate Gaussian Process (GP) models for each constraint, our framework leverages PFN’s transformer architecture to evaluate objectives and constraints simultaneously in a single forward pass using in-context learning. Through comprehensive benchmarking across 15 test problems spanning synthetic, structural, and engineering design challenges, we demonstrate an order of magnitude speedup while maintaining or improving solution quality compared to conventional GP-based methods with constrained expected improvement (CEI). Our approach particularly excels at engineering problems by rapidly finding feasible, optimal solutions. This benchmark framework for evaluating new BO algorithms in engineering design will be published at https://github.com/rosenyu304/BOEngineeringBenchmark.
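A minimal sketch of the constrained expected improvement (CEI) acquisition that the GP-based baseline uses: expected improvement for minimization, scaled by the probability that a single constraint g(x) ≤ 0 is satisfied, given Gaussian posterior means and standard deviations. The function names and single-constraint setup are illustrative assumptions, not the paper's or any library's API.

```python
# Constrained expected improvement: EI(x) * P(g(x) <= 0), assuming
# Gaussian posteriors (mu_f, s_f) for the objective and (mu_c, s_c)
# for the constraint. Standard-normal cdf/pdf built from math.erf.
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def cei(mu_f, s_f, best, mu_c, s_c):
    """EI for minimization (improvement below `best`) times P(feasible)."""
    z = (best - mu_f) / s_f
    ei = s_f * (z * norm_cdf(z) + norm_pdf(z))
    return ei * norm_cdf(-mu_c / s_c)
```

When the constraint is almost surely satisfied the factor approaches 1 and CEI reduces to plain EI; when it is almost surely violated, CEI is driven to zero regardless of the objective.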
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162185</guid>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Household evacuation decisions and relationship to infrastructure disruption using evidence from Hurricane Irma</title>
<link>https://hdl.handle.net/1721.1/162177</link>
<description>Household evacuation decisions and relationship to infrastructure disruption using evidence from Hurricane Irma
Lamadrid, Alberto J.; Escaleras, Monica; Mitsova, Diana; Esnard, Ann-Margaret; Sapat, Alka
Hurricanes and extreme weather hazards disrupt infrastructure services, causing cascading effects for households and communities. In this work, we use survey data from households affected by Hurricane Irma in south and central Florida to empirically estimate the effects of infrastructure disruptions on household evacuation decisions and to assess what factors determine the length of evacuation, after controlling for socio-economic and demographic variables. We find that the decision to evacuate prior to Hurricane Irma was affected by the prospects of losing access to critical infrastructure services, primarily electricity services. Medical infrastructure is also associated with evacuation decisions, specifically access to healthcare facilities and prescription medications. Our findings suggest that social networks provide additional support to a subset of evacuees. Of those displaced to friends’ and families’ accommodations, more than 63% stayed longer than 4 days before returning, in the upper range of the evacuation duration. The respondents linked the duration of evacuation and their returning behavior to the restoration of electrical service and access to other critical services, including the availability of fuel, food, and water supplies. Our study provides insights into the interdependence between household recovery and critical infrastructure services, notably power, communications, transportation, and health care.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162177</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of Terrestrial Weather on the Space Weather of the Ionosphere-Thermosphere: Initial Results from a NASA Living with a Star Focused Science Topic</title>
<link>https://hdl.handle.net/1721.1/162176</link>
<description>Impact of Terrestrial Weather on the Space Weather of the Ionosphere-Thermosphere: Initial Results from a NASA Living with a Star Focused Science Topic
Oberheide, J.; Aggarwal, D.; Bergsson, B.; Chakraborty, S.; Debchoudhury, S.; Dhadly, M.; Gasperini, F.; Goncharenko, L.; Harvey, V. L.; Heale, C.; Inchin, P.; Li, J.; Liu, G.; Liu, H. -.; Lu, X.; McDonald, S.
The ionosphere-thermosphere (IT) is a convergence point of energy and processes that interconnect Earth’s atmosphere with space. Processes generated by terrestrial weather in the lower atmosphere (i.e., troposphere and stratosphere, altitudes less than ~ 50 km) are recognized by the scientific community as sources of variability in both the structure and composition of the IT. Exposed to persistent wave forcing from terrestrial weather sources and solar and magnetic forcing, the IT is a domain of compelling scientific inquiry that connects thermodynamics, fluid dynamics, electrodynamics and plasma physics. Predicting its space weather is of significant national interest for space situation awareness including the very low earth orbit as the new frontier of space operations. Advancing the understanding of whole atmosphere interconnections between terrestrial and space weather requires coordinated modeling and observational efforts across different spatial and temporal scales. Toward this goal, the National Aeronautics and Space Administration (NASA), through the living with a star (LWS) program, established in 2022 a focused science topic (FST) to study the problem from various angles. In this manuscript we report on the vision, goals and status of the ongoing FST “Impact of Terrestrial Weather on the Ionosphere—Thermosphere”. Initial results show bigger impacts on the IT than hitherto thought and help to more clearly define the state-of-the-art in the context of future NASA missions such as EZIE, DYNAMIC and GDC.
</description>
<pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162176</guid>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</item>
<item>
<title>The economics of ESG disclosure regulation</title>
<link>https://hdl.handle.net/1721.1/162175</link>
<description>The economics of ESG disclosure regulation
Frankel, Richard; Kothari, S. P.; Raghunandan, Aneesh
We provide an economics-based review of the pros and cons of ESG disclosures, emphasizing environmental disclosures from an investor-centric perspective. Our survey intends to guide corporate management and regulators in navigating the ESG disclosure terrain. Rather than summarizing the vast and growing ESG literature, we assess the economic arguments for ESG disclosure regulation and the form of this disclosure. We discuss investors’ demand for ESG information and its supply by publicly traded firms. We analyze the case for and against mandatory ESG disclosure. Finally, we weigh the efficiency of disclosure requirement characteristics, assuming mandatory ESG disclosure is warranted. We intend to be positive rather than prescriptive, providing a line of reasoning readers can employ to reach their own conclusions about what we ought to do.
</description>
<pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162175</guid>
<dc:date>2025-07-04T00:00:00Z</dc:date>
</item>
<item>
<title>Structuring Private Sustainability Governance: Combining Rule-Based and Goal-Based Mechanisms</title>
<link>https://hdl.handle.net/1721.1/162174</link>
<description>Structuring Private Sustainability Governance: Combining Rule-Based and Goal-Based Mechanisms
Wörner, Daniel; Letmathe, Niklas; Jovanovic, Marin; Friedli, Thomas
This study investigates the structuring of private sustainability governance as a critical mechanism for facilitating sustainability transitions. Drawing on 33 semi-structured interviews with manufacturing firms, regulatory bodies, policy associations, auditing firms, and management consultancies, the study examines how firms navigate increasing external governance pressures, including regulatory ambiguity, compliance demands, market expectations, and stakeholder accountability, while simultaneously managing internal governance through organizational restructuring, sustainable performance measurement, data management, human resources, and incentive structures. The findings highlight the importance of integrating rule-based and goal-based private sustainability governance through two key mechanisms: shaping external governance by aligning with and influencing regulatory standards, and adapting internal governance to embed sustainability into core business operations. This study develops a hybrid governance framework that demonstrates how firms leverage both mechanisms in parallel, revealing the tensions inherent in balancing regulatory compliance with strategic sustainability ambitions. We make a further contribution by underscoring the role of ethical change management in fostering transparency, accountability, and proactive sustainability commitments. By examining governance structures in combination with ethical considerations, the study advances the discourse on private sustainability governance, offering both theoretical insights and practical implications for firms navigating the transition toward sustainable systems.
</description>
<pubDate>Mon, 19 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162174</guid>
<dc:date>2025-05-19T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Learning Engineering Design Decision Tracking: Emergent Themes from Practitioners’ Work</title>
<link>https://hdl.handle.net/1721.1/162173</link>
<description>Exploring Learning Engineering Design Decision Tracking: Emergent Themes from Practitioners’ Work
Totino, Lauren; Kessler, Aaron
This paper examines design decisions that were written down and enacted by learning design practitioners across 18 projects at a postsecondary institution. Through emergent coding of decisions recorded in a Learning Engineering Evidence and Decision (LEED) tracker in situ, this research answers 3 questions: (1) how do practitioners track and cite sources of influence on design decisions, (2) how do practitioners communicate, revisit, and iterate these decisions throughout cycles of their design, and (3) when revisions were made to decisions, what sources of influence led to these changes? Findings indicate that practitioners record new and revised decisions while also tracking influences on these decisions that stem from their own experiences and from the specific project context. This work contributes to the support of learning design practitioners by offering a tool to capture thinking and reasoning in complex contexts, while offering researchers a way to collect evidence of this decision making.
</description>
<pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162173</guid>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</item>
<item>
<title>Human Brain Organoids: A New Model to Study Cryptococcus neoformans Neurotropism</title>
<link>https://hdl.handle.net/1721.1/162172</link>
<description>Human Brain Organoids: A New Model to Study Cryptococcus neoformans Neurotropism
Harding, Alfred T.; Gehrke, Lee; Vyas, Jatin M.; Harding, Hannah Brown
With the rise in immunocompromised individuals and patients with immune-related comorbidities such as COVID-19, the rate of fungal infections is growing. This increase, along with the current plateau in antifungal drug development, has made understanding the pathogenesis and dissemination of these organisms more pertinent than ever. The mouse model of fungal infection, while informative on a basic scientific level, has severe limitations in terms of translation to the human disease. Here we present data supporting the implementation of the human cerebral organoid model, which is generated from human embryonic stem cells and accurately recapitulates relevant brain cell types and structures, to study fungal infection and dissemination to the central nervous system (CNS). This approach provides direct insight into the relevant pathogenesis of specific fungal organisms in human tissues where in vivo models are impossible. With this model system we assessed the specific brain tropisms and cellular effects of fungal pathogens known to cross the blood–brain barrier (BBB), such as Cryptococcus neoformans. We determined the effects of this fungal pathogen on the overall gross morphology, cellular architecture, and cytokine release from these model organoids. Furthermore, we demonstrated that C. neoformans penetrates and invades the organoid tissue and remains present throughout the course of infection. These results demonstrate the utility of this new model to the field and highlight the potential for this system to elucidate fungal pathogenesis to develop new therapeutic strategies to prevent and treat the disseminated stages of fungal diseases such as cryptococcal meningitis.
</description>
<pubDate>Sat, 19 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162172</guid>
<dc:date>2025-07-19T00:00:00Z</dc:date>
</item>
<item>
<title>Defibrotide for Protecting Against and Managing Endothelial Injury in Hematologic Malignancies and COVID-19</title>
<link>https://hdl.handle.net/1721.1/162171</link>
<description>Defibrotide for Protecting Against and Managing Endothelial Injury in Hematologic Malignancies and COVID-19
Richardson, Edward; Mo, Clifton C.; Calabretta, Eleonora; Corrado, Francesco; Kocoglu, Mehmet H.; Baron, Rebecca M.; Connors, Jean Marie; Iacobelli, Massimo; Wei, Lee-Jen; Benjamin, Emily J.; Rapoport, Aaron P.; Díaz-Ricart, Maribel; Martínez-Mellado, Antonio José; Carlo-Stella, Carmelo; Richardson, Paul G.; Moraleda, José M.
Defibrotide, which is approved for treating hepatic veno-occlusive disease (VOD)/sinusoidal obstruction syndrome (SOS), exhibits pleiotropic anti-inflammatory, anti-thrombotic, and fibrinolytic properties, conferring broad endothelial protective effects. Given these mechanisms, defibrotide has potential utility in various conditions involving endothelial injury or activation. In this review we outline the endothelial-protective mechanisms of defibrotide and comprehensively summarize current evidence supporting its applications in hematologic malignancies, including the prevention and treatment of hepatic VOD/SOS, graft-versus-host disease, and transplant-associated thrombotic microangiopathy. Additionally, we discuss its role in mitigating key toxicities linked to chimeric antigen receptor (CAR) T-cell therapies and bispecific antibodies, such as cytokine release syndrome (CRS) and immune effector cell-associated neurotoxicity syndrome (ICANS). We also explore emerging evidence on defibrotide’s potential in SARS-CoV-2 infection-associated endotheliopathies, including acute COVID-19 and post-acute sequelae of SARS-CoV-2 infection (“long-COVID”), and the endothelial protective activity of defibrotide in these settings. Finally, we highlight potential future applications of defibrotide in hematologic malignancies and viral infections, emphasizing its multimodal mechanism of action.
</description>
<pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162171</guid>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Counter: A Systemic Mapping of Nanostore Identities in Traditional, Informal Retail Through Multi-Dimensional Archetypes</title>
<link>https://hdl.handle.net/1721.1/162170</link>
<description>Beyond the Counter: A Systemic Mapping of Nanostore Identities in Traditional, Informal Retail Through Multi-Dimensional Archetypes
Salinas-Navarro, David Ernesto; Vilalta-Perdomo, Eliseo; Mejía-Argueta, Christopher
This study examines the identity of nanostores—micro, independent grocery retailers—through a systemic, stakeholder-informed lens to promote their survivability and competitiveness. Moving beyond traditional operational descriptions, it introduces a multidimensional framework that examines what nanostores do (X), how they do it (Y), and why they matter (Z), which is complemented by the use of the TASCOI tool to produce identity statements. Based on survey data collection and a thematic analysis of nanostore stakeholder responses in Mexico City, the research categorises identity statements into six 2 × 2 matrices across four dimensions: operational, functional, relational, and adaptive. This analysis yields twenty-four archetypes that capture the diversity, complexity, and adaptability of nanostores. The findings reveal that nanostores are not a homogeneous category. They simultaneously exhibit characteristics of multiple archetypes, blending retail function, social embeddedness, and entrepreneurial adaptation. This study contributes to the nanostore and micro-enterprise literature by operationalising identity description and offers practical insights for supporting diverse shop types through context-sensitive policy and business strategies. While this study ensures internal validity and reliability through systematic coding and stakeholder feedback, it acknowledges limitations in its generalisability. Future research may build on this work through comparative studies, longitudinal tracking, and direct engagement with nanostore owners and their communities to further understand the dynamics of their identity and their resilience in evolving retail landscapes.
</description>
<pubDate>Sat, 05 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162170</guid>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Crystallization of Calcium Sulfate for Mining Wastewater Treatment</title>
<link>https://hdl.handle.net/1721.1/162169</link>
<description>Crystallization of Calcium Sulfate for Mining Wastewater Treatment
Zamengo, Fernanda Gusman Garreta; Botelho Junior, Amilton Barbosa; Seckler, Marcelo Martins; Espinosa, Denise Crocce Romano; Tenório, Jorge Alberto Soares
This study aims to increase the particle size of the precipitate in order to increase its settling speed. The effluent contains 21.88 g/L of sulfate, 526.5 mg/L of calcium, 2.9 mg/L of cadmium, 4.73 g/L of magnesium, 332.8 mg/L of manganese, and 205.8 mg/L of zinc. Based on thermodynamic simulations evaluating the pH increase up to 9.0, the main species were determined to be CaSO4·2H2O(s), Mg(OH)2(s), MnO2(s), ZnO(s), and Cd(OH)2(s). In the precipitation tests, a Ca(OH)2 concentration of 2.0 mol/L resulted in a particle size of 12.2 µm. Increasing the temperature had the opposite effect, decreasing the particle size by 40% at 80 °C in comparison to 25 °C. On the other hand, longer reaction times increased particle size, with a 300% increase from 10 min to 3 h. In the seed tests, it was found that a seed ratio of 10 g/L to 100 g/L with the CaSO4 (2) seed had the greatest impact on particle size growth, resulting in a 700% increase in particle size compared to the test without seeds. In the settling tests, a sedimentation rate of 177 mL/min was achieved using seeds and flocculants, compared to 50 mL/min in the test without reagents.
</description>
<pubDate>Thu, 26 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162169</guid>
<dc:date>2025-06-26T00:00:00Z</dc:date>
</item>
<item>
<title>Extension and replacement</title>
<link>https://hdl.handle.net/1721.1/162168</link>
<description>Extension and replacement
Masny, Michal
Many people believe that it is better to extend the length of a happy life than to create a new happy life, even if the total welfare is the same in both cases. Despite the popularity of this view, one would be hard-pressed to find a fully compelling justification for it in the literature. This paper develops a novel account of why and when extension is better than replacement that applies not just to persons but also to non-human animals and humanity as a whole.
</description>
<pubDate>Fri, 04 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162168</guid>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Real-World Evidence Acceptability and Use in Breast Cancer Treatment Decision-Making in the United States: Call-to-Action from a Multidisciplinary Think Tank</title>
<link>https://hdl.handle.net/1721.1/162167</link>
<description>Real-World Evidence Acceptability and Use in Breast Cancer Treatment Decision-Making in the United States: Call-to-Action from a Multidisciplinary Think Tank
Khozin, Sean; Dreyer, Nancy A.; Galante, Dominic; Liu, Raymond; Neumann, Peter; Nussbaum, Nathan; O’Shaughnessy, Joyce; Patt, Debra; Rimawi, Mothaffar; Rugo, Hope; Tolaney, Sara M.; Weiss, Marisa; Brufsky, Adam
Complementing randomized controlled trials, real-world evidence (RWE) from observational analyses can extend clinical insights in oncology. While healthcare stakeholders have published rigorous RWE frameworks and resources, a multidisciplinary think tank was established to further advance acceptance and use of RWE in treatment decision-making, with the focus on breast cancer (while recognizing relevance in oncology more broadly). Members discussed perceptions of RWE from a clinical perspective, across domains of data, methodology, and mindset, and “calls-to-action” for stakeholders. Agreement was reached on a primary “call-to-action,” to develop clinically-relevant, patient-informed, real-world endpoints, and secondary “calls-to-action”: establish a multidisciplinary consensus forum; publish examples of unique RWE value; build upon existing frameworks and resources; and tailor an approach for exhibiting utility to guideline bodies.
</description>
<pubDate>Mon, 12 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162167</guid>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</item>
<item>
<title>Stochastic Differential Games and Optimization Problems with Controlled Point Process Arrivals</title>
<link>https://hdl.handle.net/1721.1/162166</link>
<description>Stochastic Differential Games and Optimization Problems with Controlled Point Process Arrivals
Wernerfelt, Birger
There is a very large literature on applications of stochastic control of jump diffusions and a smaller literature on such games. In many applications it is natural to assume that the arrival intensity is controlled, but except for two long-forgotten papers the literature instead assumes that it is the jump sizes that are controlled. The more natural assumption is typically avoided because a failed Lipschitz condition means that the classical existence and uniqueness proofs cannot be used. We here derive an asymptotic Markov equilibrium of the game with controlled jump intensities and show that it, at least in an example, is very similar to the Markov equilibrium of an analog game with controlled jump sizes. The paper thus makes two contributions: It supplies a way to solve some optimization problems and games with controlled jump intensities and it shows that the commonly used formulation with controlled jump sizes is quite defensible for at least some classes of games.
</description>
<pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162166</guid>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Idea Evaluation for Solutions to Specialized Problems: Leveraging the Potential of Crowds and Large Language Models</title>
<link>https://hdl.handle.net/1721.1/162165</link>
<description>Idea Evaluation for Solutions to Specialized Problems: Leveraging the Potential of Crowds and Large Language Models
Gimpel, Henner; Laubacher, Robert; Probost, Fabian; Schäfer, Ricarda; Schoch, Manfred
Complex problems such as climate change pose severe challenges to societies worldwide. To overcome these challenges, digital innovation contests have emerged as a promising tool for idea generation. However, assessing idea quality in innovation contests is becoming increasingly problematic in domains where specialized knowledge is needed. Traditionally, expert juries are responsible for idea evaluation in such contests. However, experts are a substantial bottleneck as they are often scarce and expensive. To assess whether expert juries could be replaced, we consider two approaches: crowdsourcing and a Large Language Model (LLM). Both aggregate collective knowledge and could therefore come close to expert knowledge. We compare expert jury evaluations from innovation contests on climate change with crowdsourced and LLM evaluations and assess performance differences. Results indicate that crowds and LLMs can evaluate ideas in this complex problem domain, while contest specialization—the degree to which a contest relates to a knowledge-intensive domain rather than a broad field of interest—inhibits crowd evaluation performance but does not influence the evaluation performance of LLMs. Our contribution lies in demonstrating that crowds and LLMs (as opposed to traditional expert juries) are suitable for idea evaluation, allowing innovation contest operators to integrate the knowledge of crowds and LLMs to reduce the resource bottleneck of expert juries.
</description>
<pubDate>Sat, 28 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162165</guid>
<dc:date>2025-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Improving automatic cerebral 3D-2D CTA-DSA registration</title>
<link>https://hdl.handle.net/1721.1/162164</link>
<description>Improving automatic cerebral 3D-2D CTA-DSA registration
Downs, Charles; Sluijs, P. M. v. d.; Cornelissen, Sandra A. P.; Nijenhuis, Frank t.; Zwam, Wim H. v.; Gopalakrishnan, Vivek; Zhang, Xucong; Su, Ruisheng; Walsum, Theo v.
Purpose: Stroke remains a leading cause of morbidity and mortality worldwide, despite advances in treatment modalities. Endovascular thrombectomy (EVT), a revolutionary intervention for ischemic stroke, is limited by its reliance on 2D fluoroscopic imaging, which lacks depth and comprehensive vascular detail. We propose a novel AI-driven pipeline for 3D CTA to 2D DSA cross-modality registration, termed DeepIterReg. Methods: The proposed pipeline integrates neural network-based initialization with iterative optimization to align pre-intervention and peri-intervention data. Our approach addresses the challenges of cross-modality alignment, particularly in scenarios involving limited shared vascular structures, by leveraging synthetic data, vein-centric anchoring, and differentiable rendering techniques. Results: We assess the efficacy of DeepIterReg through quantitative analysis of capture ranges and registration accuracy. Results show that our method can accurately register 70% of a test set of 20 patients and can improve capture ranges when performing an initial pose estimation using a convolutional neural network. Conclusions: DeepIterReg demonstrates promising performance for 3D-to-2D stroke intervention image registration, potentially aiding clinicians by improving spatial understanding during EVT and reducing dependence on manual adjustments.
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162164</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Equipment Disposal Cost Assessment for the Horizontal Compact High-Temperature Gas-Cooled Reactor</title>
<link>https://hdl.handle.net/1721.1/162053</link>
<description>Equipment Disposal Cost Assessment for the Horizontal Compact High-Temperature Gas-Cooled Reactor
Kudriavtseva, Anna; Shirvan, Koroush
Accurate capital and operational cost data for advanced nuclear concepts are critical for meaningful technoeconomic analyses. However, data on advanced reactor disposal costs are often missing or assumed to be the same as for light water reactors. Decommissioning costs should be estimated reliably to establish adequate decommissioning funds. This work presents the projected disposal costs for the horizontal compact high-temperature gas-cooled reactor (HC-HTGR) components to provide insights into HTGR decommissioning costs relative to pressurized water reactors (PWRs).&#13;
&#13;
This paper identifies the waste classifications for the key equipment, including the core barrel and reactor pressure vessel (RPV) cylinder, and the graphite reflector components of the HC-HTGR design. Furthermore, this work discusses the neutron irradiation effects and their impact on the integrity of the barrel, RPV, and graphite reflector against material property changes. The concentrations of radionuclides computed during activation analysis were used to estimate the disposal costs of the HC-HTGR components for immediate dismantlement after 40 years of operating lifetime and after a 10-year decay period. Overall, the disposal costs of the HTGR’s core barrel, RPV cylinder, and graphite reflectors will be 10 times higher than large PWR costs on a per energy produced basis.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162053</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Neutronics Analysis of ClLiF: An Alternative Molten-Salt Tritium Breeder</title>
<link>https://hdl.handle.net/1721.1/162052</link>
<description>Neutronics Analysis of ClLiF: An Alternative Molten-Salt Tritium Breeder
Dunn, Collin S; Ebiwonjumi, Bamidele; Segantin, Stefano; Woller, Kevin B; Zhou, Weiyue; Peterson, Ethan E
Fusion pilot plants (FPPs) will require tritium self-sufficiency, which is achieved through the breeding blanket. The liquid immersion blanket (LIB) concept employing liquid breeders has been shown to reduce complexity and costs, but the most popular candidate for LIBs, FLiBe, contains highly toxic beryllium. In order to attain tritium self-sufficiency without the drawbacks of high toxicity, lithium-chloride lithium-fluoride (ClLiF) molten salt is suggested as an alternative liquid breeding candidate.&#13;
&#13;
This work analyzes the viability of ClLiF from a neutronics perspective using the OpenMC transport code. Simulations with a simple, ideal blanket neutronics model with no first wall or structural materials were carried out and revealed that ClLiF enriched in 37Cl is competitive with FLiBe in terms of both the tritium breeding ratio (TBR) and energy multiplication (ME). Next, a scan across salt temperatures, neutron multiplier materials, neutron multiplier thicknesses, LiCl fractions, 37Cl enrichments, and 6Li enrichments was conducted to identify the parameters that improve ClLiF performance.&#13;
&#13;
These improved parameters were then applied to a more realistic model of a compact, toroidal reactor with a first wall and structural materials. The results from this model demonstrated that a blanket made up of ClLiF, enriched in 37Cl, achieved a TBR greater than that of FLiBe, but had a reduced energy multiplication unless a thicker external beryllium layer was introduced. Last, the effects of nuclear data and density uncertainties on the TBR and ME were quantified; uncertainties in 35Cl nuclear data were the greatest source of uncertainty in the calculation of the TBR and ME. However, a new evaluation of 35Cl cross sections by Los Alamos National Laboratory with lower uncertainty led to greater TBRs and MEs than those calculated using the ENDF/B-VIII.0 and TENDL-2019 libraries.
</description>
<pubDate>Tue, 08 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162052</guid>
<dc:date>2025-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty Analyses of Tritium Production and Gamma Heating Rates in the FNS Clean Benchmark Experiments</title>
<link>https://hdl.handle.net/1721.1/162051</link>
<description>Uncertainty Analyses of Tritium Production and Gamma Heating Rates in the FNS Clean Benchmark Experiments
Ebiwonjumi, Bamidele; Peterson, Ethan
The propagation of nuclear data uncertainties in fusion neutronics calculations is presented in this paper. The uncertainty propagation employs the random samples of neutron cross sections and secondary particle energy/angular distributions generated by the SANDY code as nuclear data in the transport simulation of the Monte Carlo (MC) code OpenMC. The random samples are obtained from stochastic sampling employing covariances in nuclear data libraries. In this work, uncertainties in nuclear data result in perturbed neutron flux distributions that are then propagated to the gamma heating and tritium production rates in the Fusion Neutron Source clean benchmark experiments on vanadium, beryllium, tungsten, iron, copper, and graphite assemblies, which were irradiated with a 14-MeV deuterium-tritium neutron source from the Shielding Integral Benchmark Archive and Database (SINBAD). The uncertainty analysis results show that for the beryllium assembly, the tritium production uncertainties are dominated by the 9Be cross sections, while the cross sections of 6Li and the impurities present have an insignificant effect on the tritium production. In addition, the gamma heating in the vanadium assembly has the largest uncertainty (up to 23%, with impurities contributing less) among the materials analyzed, followed by graphite (~ 20%), tungsten (17%), iron (14%), and copper (&lt; 6%). These results are important for the application of best estimate plus uncertainty methods, verification and validation, and design of fusion reactors and power plants.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162051</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>Subnational variation among provincial data bureaus in China</title>
<link>https://hdl.handle.net/1721.1/162050</link>
<description>Subnational variation among provincial data bureaus in China
Tai, Katharin
Based on an original overview of the evolution of provincial data bureaus in China between 2014 and 2023, this paper argues that provincial data bureaus between 2014 and 2023 have been an important source of bottom-up data governance practices in China, which are worth studying as part of the broader Chinese data governance regime. To support this argument, the paper provides a descriptive analysis of province-level data bureaus in China up until the establishment of the National Data Administration (NDA) in 2023, with a particular focus on their relationship to the evolving legal concept of ‘public data' in China. The analysis shows that there is substantial subnational variation, exploring the evolution of data bureaus from three different perspectives as examples of data localism: One, considering provincial data bureaus over time, there are significant divergences in when different provinces established theirs, and whether they did so as institutional entrepreneurs or in response to national policy developments. Two, in terms of policy focus, most provincial data bureaus focus on digital public service provision or data-related economic questions, or, increasingly, both. Several data bureaus issued the first set of public data regulations as part of this work, and most were officially responsible for public data or closely related concepts. Three, provincial data bureaus have generally been established directly under the provincial government, supervised by a province-level department, or as ‘nameplate’ institutions. This institutional choice matters for the amount of power a data bureau is able to wield and for its substantive work, which needs to fit the priorities of any supervising institution.
</description>
<pubDate>Mon, 21 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162050</guid>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</item>
<item>
<title>Against Character Constraints</title>
<link>https://hdl.handle.net/1721.1/162049</link>
<description>Against Character Constraints
Heine, Jessica Anne
This paper defends the following principle: For any visually perceptible set of objects and any visual phenomenal character, there could be a veridical perception of exactly those objects with that character. This principle is rejected by almost all contemporary theories of perception, yet rarely addressed directly. Many have taken the apparent inconceivability of a certain sort of ‘shape inversion'—as compared to the more plausible, frequently discussed ‘colour inversion’—as evidence that the spatial characters of our perceptions are uniquely suited to and/or revelatory of the structure of their objects, such that alleged perceptions of those objects that differed radically in spatial character could not be veridical. I argue that these conclusions are unjustified: I claim that the difficulty involved in constructing coherent ‘shape inversion’ scenarios is attributable to the complex relations among visual and tactile shape experiences, as opposed to relations between shape experiences and worldly shape properties.
</description>
<pubDate>Mon, 02 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/162049</guid>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</item>
<item>
<title>A Convection/Radiation Temperature Control System for High Power Density Electronic Device Testing</title>
<link>https://hdl.handle.net/1721.1/161628</link>
<description>A Convection/Radiation Temperature Control System for High Power Density Electronic Device Testing
Sweetland, Matthew; Lienhard, John H; Slocum, Alexander H.
Active control of die-level temperature is required during production testing of high power microprocessors in order to ensure accurate performance classification, but control is becoming more difficult as the device power densities increase. With power densities approaching 100 W/cm2, the current passive control systems are no longer able to maintain the required temperature tolerance for production testing. This paper describes the design and testing of a temperature control system that combines high performance impingement cooling with higher power laser heating with application to packaged integrated circuit devices under dynamic testing conditions. Also presented are system design concepts and experimental results for typical microprocessor thermal test vehicles.
</description>
<pubDate>Tue, 12 Aug 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161628</guid>
<dc:date>2008-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Evaporative cooling of continuously drawn glass fibers by water sprays</title>
<link>https://hdl.handle.net/1721.1/161627</link>
<description>Evaporative cooling of continuously drawn glass fibers by water sprays
Sweetland, Matthew; Lienhard, John H
This paper examines the effect of the water sprays commonly used to cool freshly drawn glass fibers. A model has been developed using a Karman–Pohlhausen treatment of the velocity and thermal boundary layers and accounting for the evaporation of an entrained water spray. Solutions of the model equations have been calculated, and the effect of changing various process parameters is studied. Variations in Sauter mean diameter, spray density, and spray placement along the fiber are considered, as well as the effect of fiber diameter and drawing speed. Fiber temperature profiles for different values of the process variables are presented.
</description>
<pubDate>Wed, 01 Mar 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161627</guid>
<dc:date>2000-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>High Heat Flux Cooling by Liquid Jet-Array Modules</title>
<link>https://hdl.handle.net/1721.1/161625</link>
<description>High Heat Flux Cooling by Liquid Jet-Array Modules
Lienhard, John H; Hadeler, J.
Liquid jet impingement has a demonstrated capacity for high heat flux cooling. Here, we describe the use of a water jet array in removing heat fluxes of 1 to 17 MW/m2 over an area of 10 cm2. The jet array size may be increased for thermal management of larger areas. Cooling in this system is by single-phase convection. Metal film resistance heaters have been developed for testing purposes.
</description>
<pubDate>Wed, 03 Nov 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161625</guid>
<dc:date>1999-11-03T00:00:00Z</dc:date>
</item>
<item>
<title>Boiling and Evaporation in Small Diameter Channels</title>
<link>https://hdl.handle.net/1721.1/161624</link>
<description>Boiling and Evaporation in Small Diameter Channels
Bergles, Arthur E.; Lienhard, John H; Kendall, Gail E.; Griffith, Peter
Since the 1950s, the research and industrial communities have developed a body of experimental data and set of analytical tools and correlations for two-phase flow and heat transfer in passages having a hydraulic diameter greater than about 6 mm. These tools include flow regime maps, pressure drop and heat transfer correlations, and critical heat flux limits, as well as strategies for robust thermal management of HVAC systems, electronics, and nuclear power plants. Designers of small systems with thermal management by phase change will need analogous tools to predict and optimize thermal behavior in the mesoscale and smaller sizes. Such systems include a wide range of devices for computation, measurement, and actuation in environments that range from office space to outer space as well as living systems. This paper examines important processes that must be considered when channel diameters decrease, including flow distribution issues in single, parallel, and split flows; flow instability in parallel passages; manufacturing tolerance effects; single-phase heat transfer; nucleation processes; boiling heat transfer and pressure drop; and wall conductance effects. The discussion focuses on engineering issues for the design of practical systems.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161624</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Active Thermal Control of Distributed Parameter Systems Excited at Multiple Frequencies</title>
<link>https://hdl.handle.net/1721.1/161623</link>
<description>Active Thermal Control of Distributed Parameter Systems Excited at Multiple Frequencies
Richter, Christoph C.; Lienhard, John H
In testing packaged high-power integrated circuits, active thermal control is useful in providing die-level temperature stability. A time-varying heat load is applied to the surface of the package to compensate for the time-varying test power sequence applied to the die. An earlier study determined the proper control heat load for a single-frequency sinusoidal variation in die power subject to a finite allowed temperature variation on the die. Actual test power sequences contain many frequencies at various phase angles, each contributing to the temperature variation of the die. In the present study, we develop a method of controlling multiple frequency test sequences subject to a finite temperature tolerance. It is shown that the total control power may be minimized by assigning temperature tolerances to the highest frequencies in the test power sequence.
</description>
<pubDate>Thu, 10 Feb 2005 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161623</guid>
<dc:date>2005-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Active Thermal Control of Distributed Parameter Systems With Application to Testing of Packaged IC Devices</title>
<link>https://hdl.handle.net/1721.1/161622</link>
<description>Active Thermal Control of Distributed Parameter Systems With Application to Testing of Packaged IC Devices
Sweetland, Matthew; Lienhard, John H
Active control of the die-level temperature is desirable during production testing of high power microprocessors, so as to ensure accurate performance classification. Such control requires that the controlling thermal load time-lead the dissipated thermal load and that it be modulated to account for the distributed thermal capacitance and resistance of the device packaging. The analysis in this paper demonstrates fundamental limits of temperature control for typical devices under test conditions. These limits are identified for specified control power to die power ratios. The effects of test sequence design and device package design on the temperature control limits are also examined. The theory developed can be applied to any thermal control problem where a conductive medium separates the control source from the location where control is desired.
</description>
<pubDate>Wed, 29 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161622</guid>
<dc:date>2003-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Yield Limits of Plates at Extremely High Heat Flux</title>
<link>https://hdl.handle.net/1721.1/161621</link>
<description>Yield Limits of Plates at Extremely High Heat Flux
Lienhard, John H; Napolitano, D. S.
For heat fluxes ranging above 10 MW/m2 or so, solid surfaces usually experience large thermal stresses and degradation of mechanical properties. The resulting mechanical failure of such surfaces is a primary limitation to the design of thermal systems at extremely high heat flux. This investigation considers the elastic stresses in circular plates subjected to extremely high heat fluxes. A gaussian distributed heat load is applied to one surface of the plate and the heat flux at which yielding occurs is identified. Several candidate materials are examined, accounting for the temperature dependence of yield strength and other properties. The mechanical boundary conditions on the plate are varied. Figures of merit are given for the high flux performance of a number of materials.
</description>
<pubDate>Sun, 01 Feb 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161621</guid>
<dc:date>1998-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>HIGH-HEAT-FLUX RESISTANCE HEATERS FROM VPS AND HVOF THERMAL SPRAYING</title>
<link>https://hdl.handle.net/1721.1/161620</link>
<description>HIGH-HEAT-FLUX RESISTANCE HEATERS FROM VPS AND HVOF THERMAL SPRAYING
Michels, D.; Hadeler, J.; Lienhard, John H
This article describes the application of thermal spray techniques to produce resistance heating elements suitable for applying very large heat fluxes to solid surfaces. The surface to be heated is electrically insulated by deposition of a ceramic layer onto which a thin metallic layer is deposited; the metallic layer serves as the heating element. Each layer has a thickness in the range of 75 to 300 μm. Design considerations for the heaters are described. Previous efforts have produced the films using air plasma spraying. In the present work, we applied vacuum plasma spraying and high-velocity oxygen fuel spraying, which result in considerable improvements in performance and reliability. Heaters have been tested at fluxes up to 17 MW/m2. The heaters generally fail by fracture once the thermal stresses in the system exceed a level that depends on the process by which the films have been deposited. These heaters are useful for the experimental development of high-heat-flux cooling systems.
</description>
<pubDate>Thu, 01 Oct 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161620</guid>
<dc:date>1998-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A design approach for layer-by-layer surface-mediated siRNA delivery</title>
<link>https://hdl.handle.net/1721.1/161228</link>
<description>A design approach for layer-by-layer surface-mediated siRNA delivery
Chou, Jonathan J; Berger, Adam G; Jalili-Firoozinezhad, Sasan; Hammond, Paula T
The ability to coat scaffolds and wound dressings with therapeutic short interfering RNA (siRNA) holds much potential for applications in wound healing, cancer treatment, and regenerative medicine. Layer-by-layer (LbL) technology is an effective method to formulate polyelectrolyte thin films for local delivery of siRNA; however, the formation and efficacy of LbL coatings as drug delivery systems are highly contingent on the assembly conditions. Here, we investigate the effects of LbL assembly parameters on film composition and consequent siRNA-mediated gene knockdown efficiency in vitro. Films comprising poly(β-amino ester) (PBAE) and siRNA were built on polyglactin 910 (Vicryl) sutures consisting of poly(10% L-lactide, 90% glycolide). A fractional factorial design was employed, varying the following LbL assembly conditions: pH, ionic strength, PBAE concentration, and siRNA concentration. Effects of these parameters on PBAE loading, siRNA loading, their respective weight ratios, and in vitro siRNA-mediated knockdown were elucidated. The parameter effects were leveraged to create a rationally designed set of solution conditions that was predicted to give effective siRNA-mediated knockdown, but not included in any of the original experimental conditions. This level of knockdown with our rationally designed loading conditions (47%) is comparable to previous formulations from our lab while being simpler in construction and requiring fewer film layers, which could save time and cost in manufacturing. This study highlights the importance of LbL solution conditions in the preparation of surface-mediated siRNA delivery systems and presents an adaptable methodology for extending these electrostatically-assembled coatings to the delivery of other therapeutic nucleic acids. 
STATEMENT OF SIGNIFICANCE: Short interfering RNA (siRNA) therapeutics are powerful tools to silence aberrant gene expression in the diseased state; however, the clinical utility of these therapies relies on effective controlled delivery approaches. Electrostatic self-assembly through the layer-by-layer (LbL) process enables direct siRNA release from surfaces, but this method is highly dependent upon the specific solution conditions used. Here, we use a fractional factorial design to illustrate how these assembly conditions impact composition of siRNA-eluting LbL thin films. We then elucidate how these properties mediate in vitro transfection efficacy. Ultimately, this work presents a significant step towards understanding how optimization of assembly conditions for surface-mediated LbL delivery can promote transfection efficacy while reducing the processing and material required.
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161228</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Power in Numbers: Harnessing Combinatorial and Integrated Screens to Advance Nanomedicine</title>
<link>https://hdl.handle.net/1721.1/161227</link>
<description>Power in Numbers: Harnessing Combinatorial and Integrated Screens to Advance Nanomedicine
Boehnke, Natalie; Hammond, Paula T
Nanocarriers have significant potential to advance personalized medicine through targeted drug delivery. However, to date, efforts to improve nanoparticle accumulation at target disease sites have largely failed to translate clinically, stemming from an incomplete understanding of nano-bio interactions. While progress has been made to evaluate the effects of specific physical and chemical nanoparticle properties on trafficking and uptake, there is much to be gained from controlling these properties singularly and in combination to determine their interactions with different cell types. We and others have recently begun leveraging library-based nanoparticle screens to study structure-function relationships of lipid- and polymer-based drug delivery systems to guide nanoparticle design. These combinatorial screening efforts are showing promise in leading to the successful identification of critical characteristics that yield improved and specific accumulation at target sites. However, there is a crucial need to equally consider the influence of biological complexity on nanoparticle delivery, particularly in the context of clinical translation. For example, tissue and cellular heterogeneity presents an additional dimension to nanoparticle trafficking, uptake, and accumulation; applying imaging and screening tools as well as bioinformatics may further expand our understanding of how nanoparticles engage with cells and tissues. Given recent advances in the fields of omics and machine learning, there is substantial promise to revolutionize nanocarrier development through the use of integrated screens, harnessing the combinatorial parameter space afforded both by nanoparticle libraries and clinically annotated biological data sets in combination with high throughput in vivo studies.
</description>
<pubDate>Tue, 23 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161227</guid>
<dc:date>2021-11-23T00:00:00Z</dc:date>
</item>
<item>
<title>Modulating Nanoparticle Size to Understand Factors Affecting Hemostatic Efficacy and Maximize Survival in a Lethal Inferior Vena Cava Injury Model</title>
<link>https://hdl.handle.net/1721.1/161226</link>
<description>Modulating Nanoparticle Size to Understand Factors Affecting Hemostatic Efficacy and Maximize Survival in a Lethal Inferior Vena Cava Injury Model
Hong, Celestine; Alser, Osaid; Gebran, Anthony; He, Yanpu; Joo, Wontae; Kokoroskos, Nikolaos; Velmahos, George; Olsen, Bradley D; Hammond, Paula T
Intravenous nanoparticle hemostats offer a potentially attractive approach to promote hemostasis, in particular for inaccessible wounds such as noncompressible torso hemorrhage (NCTH). In this work, particle size was tuned over a range of &lt;100-500 nm, and its effect on nanoparticle-platelet interactions was systematically assessed using in vitro and in vivo experiments. Smaller particles bound a larger percentage of platelets per mass of particle delivered, while larger particles resulted in higher particle accumulation on a surface of platelets and collagen. Intermediate particles led to the greatest platelet content in platelet-nanoparticle aggregates, indicating that they may be able to recruit more platelets to the wound. In biodistribution studies, smaller and intermediate nanoparticles exhibited longer circulation lifetimes, while larger nanoparticles resulted in higher pulmonary accumulation. The particles were then challenged in a 2 h lethal inferior vena cava (IVC) puncture model, where intermediate nanoparticles significantly increased both survival and injury-specific targeting relative to saline and unfunctionalized particle controls. An increase in survival in the second hour was likewise observed in the smaller nanoparticles relative to saline controls, though no significant increase in survival was observed in the larger nanoparticle size. In conjunction with prior in vitro and in vivo experiments, these results suggest that platelet content in aggregates and extended nanoparticle circulation lifetimes are instrumental to enhancing hemostasis. Ultimately, this study elucidates the role of particle size in platelet-particle interactions, which can be a useful tool for engineering the performance of particulate hemostats and improving the design of these materials.
</description>
<pubDate>Fri, 28 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/161226</guid>
<dc:date>2022-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>A predictive microfluidic model of human glioblastoma to assess trafficking of blood–brain barrier-penetrant nanoparticles</title>
<link>https://hdl.handle.net/1721.1/160949</link>
<description>A predictive microfluidic model of human glioblastoma to assess trafficking of blood–brain barrier-penetrant nanoparticles
Straehla, Joelle P; Hajal, Cynthia; Safford, Hannah C; Offeddu, Giovanni S; Boehnke, Natalie; Dacoba, Tamara G; Wyckoff, Jeffrey; Kamm, Roger D; Hammond, Paula T
The blood–brain barrier represents a significant challenge for the treatment of high-grade gliomas, and our understanding of drug transport across this critical biointerface remains limited. To advance preclinical therapeutic development for gliomas, there is an urgent need for predictive in vitro models with realistic blood–brain-barrier vasculature. Here, we report a vascularized human glioblastoma multiforme (GBM) model in a microfluidic device that accurately recapitulates brain tumor vasculature with self-assembled endothelial cells, astrocytes, and pericytes to investigate the transport of targeted nanotherapeutics across the blood–brain barrier and into GBM cells. Using modular layer-by-layer assembly, we functionalized the surface of nanoparticles with GBM-targeting motifs to improve trafficking to tumors. We directly compared nanoparticle transport in our in vitro platform with transport across mouse brain capillaries using intravital imaging, validating the ability of the platform to model in vivo blood–brain-barrier transport. We investigated the therapeutic potential of functionalized nanoparticles by encapsulating cisplatin and showed improved efficacy of these GBM-targeted nanoparticles both in vitro and in an in vivo orthotopic xenograft model. Our vascularized GBM model represents a significant biomaterials advance, enabling in-depth investigation of brain tumor vasculature and accelerating the development of targeted nanotherapeutics.
</description>
<pubDate>Wed, 01 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160949</guid>
<dc:date>2022-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sustained release of BMP-2 using self-assembled layer-by-layer film-coated implants enhances bone regeneration over burst release</title>
<link>https://hdl.handle.net/1721.1/160948</link>
<description>Sustained release of BMP-2 using self-assembled layer-by-layer film-coated implants enhances bone regeneration over burst release
Howard, MayLin T; Wang, Sheryl; Berger, Adam G; Martin, John R; Jalili-Firoozinezhad, Sasan; Padera, Robert F; Hammond, Paula T
Current clinical products delivering the osteogenic growth factor bone morphogenetic protein 2 (BMP-2) for bone regeneration have been plagued by safety concerns due to a high incidence of off-target effects resulting from bolus release and supraphysiological doses. Layer-by-layer (LbL) film deposition offers the opportunity to coat bone defect-relevant substrates with thin films containing proteins and other therapeutics; however, control of release kinetics is often hampered by interlayer diffusion of drugs throughout the film during assembly, which causes burst drug release. In this work, we present the design of different laponite clay diffusional barrier layer architectures in self-assembled LbL films to modulate the release kinetics of BMP-2 from the surface of a biodegradable implant. Release kinetics were tuned by incorporating laponite in different film arrangements and with varying deposition techniques to achieve release of BMP-2 over 2 days, 4 days, 14 days, and 30 days. Delivery of a low dose (0.5 μg) of BMP-2 over 2 days and 30 days using these LbL film architectures was then compared in an in vivo rat critical size calvarial defect model to determine the effect of BMP-2 release kinetics on bone regeneration. After 6 weeks, sustained release of BMP-2 over 30 days induced 3.7 times higher bone volume and 7.4 times higher bone mineral density as compared with 2-day release of BMP-2, which did not induce more bone growth than the uncoated scaffold control. These findings represent a crucial step in the understanding of how BMP-2 release kinetics influence treatment efficacy and underscore the necessity to optimize protein delivery methods in clinical formulations for bone regeneration. This work could be applied to the delivery of other therapeutic proteins for which careful tuning of the release rate is a key optimization parameter.
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160948</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Surface Presentation of Hyaluronic Acid Modulates Nanoparticle–Cell Association</title>
<link>https://hdl.handle.net/1721.1/160947</link>
<description>Surface Presentation of Hyaluronic Acid Modulates Nanoparticle–Cell Association
Deiss-Yehiely, Elad; Brucks, Spencer D; Boehnke, Natalie; Pickering, Andrew J; Kiessling, Laura L; Hammond, Paula T
Nanoparticle (NP) drug carriers have revolutionized medicine and increased patient quality of life. Clinically approved formulations typically succeed because of reduced off-target toxicity of the cargo. However, increasing carrier accumulation at disease sites through precise targeting remains one of the biggest challenges in the field. Novel multivalent ligand presentations and self-assembled constructs can enhance cell association, but an inability to draw direct comparisons across formulations has hindered progress. Furthermore, how nanoparticle structure influences function often is unclear. In this report, we leverage the well-characterized hyaluronic acid (HA)-CD44 binding pair to investigate how the surface architecture of modified NPs impacts their association with ovarian cancer cells that overexpress CD44. We functionalized anionic liposomes with 5 kDa HA by either covalent conjugation via surface coupling or electrostatic self-assembly using the layer-by-layer (LbL) adsorption method. Comparing these two methods, we observed a consistent enhancement of NP-cell association with the self-assembly LbL technique, particularly with higher molecular weight (≥10 kDa) HA. To further optimize association, we increased the surface-available HA. We synthesized a bottlebrush glycopolymer composed of a polynorbornene backbone and pendant 5 kDa HA and layered this macromolecule onto NPs. Flow cytometry revealed that the LbL HA bottlebrush NP outperformed the LbL linear display of HA. Cellular visualization by deconvolution optical microscopy corroborated results from all three constructs. Using exogenous HA to block NP-CD44 interactions, we found the LbL HA bottlebrush NP had a 4-fold higher binding avidity than the best-performing LbL linear HA NP. 
We further observed that decreasing the density of HA bottlebrush side chains to 75% had minimal impact on LbL NP stability or cell association, though we did see a reduction in binding avidity with this side-chain-modified NP. Our studies indicate that LbL surfaces are highly effective for multivalent displays, and the mode in which they present a targeting ligand can be optimized for NP cell targeting.
</description>
<pubDate>Tue, 25 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160947</guid>
<dc:date>2022-10-25T00:00:00Z</dc:date>
</item>
<item>
<title>Synergistic combination therapy delivered via layer‐by‐layer nanoparticles induces solid tumor regression of ovarian cancer</title>
<link>https://hdl.handle.net/1721.1/160946</link>
<description>Synergistic combination therapy delivered via layer‐by‐layer nanoparticles induces solid tumor regression of ovarian cancer
Kong, Stephanie; Moharil, Pearl; Handly‐Santana, Abram; Boehnke, Natalie; Panayiotou, Richard; Gomerdinger, Victoria; Covarrubias, Gil; Pires, Ivan S; Zervantonakis, Ioannis; Brugge, Joan; Hammond, Paula T
The majority of patients with high grade serous ovarian cancer (HGSOC) develop recurrent disease and chemotherapy resistance. To identify drug combinations that would be effective in treatment of chemotherapy resistant disease, we examined the efficacy of drug combinations that target the three antiapoptotic proteins most commonly expressed in HGSOC—BCL2, BCL‐XL, and MCL1. Co‐inhibition of BCL2 and BCL‐XL (ABT‐263) with inhibition of MCL1 (S63845) induces potent synergistic cytotoxicity in multiple HGSOC models. Since this drug combination is predicted to be toxic to patients due to the known clinical morbidities of each drug, we developed layer‐by‐layer nanoparticles (LbL NPs) that co‐encapsulate these inhibitors in order to target HGSOC tumor cells and reduce systemic toxicities. We show that the LbL NPs can be designed to have high association with specific ovarian tumor cell types targeted in these studies, thus enabling a more selective uptake when delivered via intraperitoneal injection. Treatment with these LbL NPs displayed better potency than free drugs in vitro and resulted in near‐complete elimination of solid tumor metastases of ovarian cancer xenografts. Thus, these results support the exploration of LbL NPs as a strategy to deliver potent drug combinations to recurrent HGSOC. While these findings are described for co‐encapsulation of a BCL2/XL and a MCL1 inhibitor, the modular nature of LbL assembly provides flexibility in the range of therapies that can be incorporated, making LbL NPs an adaptable vehicle for delivery of additional combinations of pathway inhibitors and other oncology drugs.
</description>
<pubDate>Tue, 08 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160946</guid>
<dc:date>2022-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-layer interleukin-12 nanoparticles drive a safe and effective response in ovarian tumors</title>
<link>https://hdl.handle.net/1721.1/160945</link>
<description>Layer-by-layer interleukin-12 nanoparticles drive a safe and effective response in ovarian tumors
Barberio, Antonio E; Smith, Sean G; Pires, Ivan S; Iyer, Sonia; Reinhardt, Ferenc; Melo, Mariane B; Suh, Heikyung; Weinberg, Robert A; Irvine, Darrell J; Hammond, Paula T
Ovarian cancer is especially deadly, challenging to treat, and has proven refractory to known immunotherapies. Cytokine therapy is an attractive strategy to drive a proinflammatory immune response in immunologically cold tumors such as many high grade ovarian cancers; however, this strategy has been limited in the past due to severe toxicity. We previously demonstrated the use of a layer‐by‐layer (LbL) nanoparticle (NP) delivery vehicle in subcutaneous flank tumors to reduce the toxicity of interleukin‐12 (IL‐12) therapy upon intratumoral injection. However, ovarian cancer cannot be treated by local injection as it presents as dispersed metastases. Herein, we demonstrate the use of systemically delivered LbL NPs using a cancer cell membrane‐binding outer layer to effectively target and engage the adaptive immune system as a treatment in multiple orthotopic ovarian tumor models, including immunologically cold tumors. IL‐12 therapy from systemically delivered LbL NPs shows reduced severe toxicity and maintained anti‐tumor efficacy compared to carrier‐free IL‐12 or layer‐free liposomal NPs leading to a 30% complete survival rate.
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160945</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Controlled lipid self-assembly for scalable manufacturing of next-generation immune stimulating complexes</title>
<link>https://hdl.handle.net/1721.1/160944</link>
<description>Controlled lipid self-assembly for scalable manufacturing of next-generation immune stimulating complexes
Pires, Ivan S; Ni, Kaiyuan; Melo, Mariane Bandeira; Li, Na; Ben-Akiva, Elana; Maiorino, Laura; Dye, Jonathan; Rodrigues, Kristen A; Yun, DongSoo; Kim, Byungji; Hosn, Ryan R; Hammond, Paula T; Irvine, Darrell J
Immune stimulating complexes (ISCOMs) are safe and effective saponin-based adjuvants formed by the self-assembly of saponin, cholesterol, and phospholipids in water to form cage-like 30-40 nm diameter particles. Inclusion of the Toll-like receptor 4 agonist monophosphoryl lipid A (MPLA) in ISCOM particles yields a promising next-generation adjuvant termed Saponin-MPLA NanoParticles (SMNP). In this work, we detail protocols to produce ISCOMs or SMNP via a tangential flow filtration (TFF) process suitable for scalable synthesis and Good Manufacturing Practice (GMP) production of clinical-grade adjuvants. SMNP or ISCOM components were solubilized in micelles of the surfactant MEGA-10, then diluted below the critical micelle concentration (CMC) of the surfactant to drive ISCOM self-assembly. Assembly of ISCOM/SMNP particles using the purified saponin QS-21 used in clinical-grade saponin adjuvants was found to require controlled stepwise dilution of the initial micellar solution, to prevent formation of undesirable kinetically-trapped aggregate species. An optimized protocol gave yields of ~77% based on the initial feed of QS-21 and the final SMNP particle composition mirrored the feed ratios of the components. Further, samples were highly homogeneous with comparable quality to that of material prepared at lab scale by dialysis and purified via size-exclusion chromatography. This protocol may be useful for clinical preparation of ISCOM-based vaccine adjuvants and therapeutics.
</description>
<pubDate>Mon, 15 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160944</guid>
<dc:date>2023-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering a Two‐Component Hemostat for the Treatment of Internal Bleeding through Wound‐Targeted Crosslinking</title>
<link>https://hdl.handle.net/1721.1/160943</link>
<description>Engineering a Two‐Component Hemostat for the Treatment of Internal Bleeding through Wound‐Targeted Crosslinking
Hong, Celestine; He, Yanpu; Bowen, Porter A; Belcher, Angela M; Olsen, Bradley D; Hammond, Paula T
Primary hemostasis (platelet plug formation) and secondary hemostasis (fibrin clot formation) are intertwined processes that occur upon vascular injury. Researchers have sought to target wounds by leveraging cues specific to these processes, such as using peptides that bind activated platelets or fibrin. While these materials have shown success in various injury models, they are commonly designed for the purpose of treating solely primary or secondary hemostasis. In this work, a two‐component system consisting of a targeting component (azide/GRGDS PEG‐PLGA nanoparticles) and a crosslinking component (multifunctional DBCO) is developed to treat internal bleeding. The system leverages increased injury accumulation to achieve crosslinking above a critical concentration, addressing both primary and secondary hemostasis by amplifying platelet recruitment and mitigating plasminolysis for greater clot stability. Nanoparticle aggregation is measured to validate concentration‐dependent crosslinking, while a 1:3 azide/GRGDS ratio is found to increase platelet recruitment, decrease clot degradation in hemodiluted environments, and decrease complement activation. Finally, this approach significantly increases survival relative to the particle‐only control in a liver resection model. In light of prior successes with the particle‐only system, these results emphasize the potential of this technology in aiding hemostasis and the importance of a holistic approach in engineering new treatments for hemorrhage.
</description>
<pubDate>Wed, 05 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160943</guid>
<dc:date>2023-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>Electrostatically assembled wound dressings deliver pro-angiogenic anti-miRs preferentially to endothelial cells</title>
<link>https://hdl.handle.net/1721.1/160942</link>
<description>Electrostatically assembled wound dressings deliver pro-angiogenic anti-miRs preferentially to endothelial cells
Berger, Adam G; Deiss-Yehiely, Elad; Vo, Chau; McCoy, Michael G; Almofty, Sarah; Feinberg, Mark W; Hammond, Paula T
Chronic non-healing wounds occur frequently in individuals affected by diabetes, yet standard-of-care treatment leaves many patients inadequately treated or with recurring wounds. MicroRNA (miR) expression is dysregulated in diabetic wounds and drives an anti-angiogenic phenotype, but miRs can be inhibited with short, chemically-modified RNA oligonucleotides (anti-miRs). Clinical translation of anti-miRs is hindered by delivery challenges such as rapid clearance and uptake by off-target cells, requiring repeated injections, excessively large doses, and bolus dosing mismatched to the dynamics of the wound healing process. To address these limitations, we engineered electrostatically assembled wound dressings that locally release anti-miR-92a, as miR-92a is implicated in angiogenesis and wound repair. In vitro, anti-miR-92a released from these dressings was taken up by cells and inhibited its target. An in vivo cellular biodistribution study in murine diabetic wounds revealed that endothelial cells, which play a critical role in angiogenesis, exhibit higher uptake of anti-miR eluted from coated dressings than other cell types involved in the wound healing process. In a proof-of-concept efficacy study in the same wound model, anti-miR targeting anti-angiogenic miR-92a de-repressed target genes, increased gross wound closure, and induced a sex-dependent increase in vascularization. Overall, this proof-of-concept study demonstrates a facile, translational materials approach for modulating gene expression in ulcer endothelial cells to promote angiogenesis and wound healing. Furthermore, we highlight the importance of probing cellular interactions between the drug delivery system and the target cells to drive therapeutic efficacy.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160942</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Piloting batch reverse osmosis with a flexible bladder for water recovery from scaling-prone brine</title>
<link>https://hdl.handle.net/1721.1/160819</link>
<description>Piloting batch reverse osmosis with a flexible bladder for water recovery from scaling-prone brine
Tow, Emily W.; Wei, Quantum J.; Abraham, Audrey R.; Chua, Kei L.; Plumley, Michael J.; Lienhard, John H
A pilot-scale batch reverse osmosis (RO) system with a flexible bladder was designed to recover additional water from RO concentrate. The sulfate-rich, ~6400-ppm concentrate was sourced from the Yuma Desalting Plant (Arizona, USA), which desalinates agricultural drainage water. The pilot produced 4.4 m3/day of permeate with 150 ppm total dissolved solids from the facility’s concentrate stream with a recovery ratio of 82.6%. Despite producing supersaturated brine, there was no performance deterioration due to scaling. Using a bladder for retentate pressurization limited average power to 633 W and the specific energy consumption to 3.3 kWh/m3. The pilot’s energy data informed a model of large-scale batch RO, which has the potential to desalinate the same water for less than 1 kWh/m3. Additionally, a model was developed to predict scaling likelihood in batch RO. This investigation demonstrates that batch RO is a viable technology for low-energy brine concentration beyond saturation limits.
</description>
<pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160819</guid>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>pH-Responsive, Charge-Reversing Layer-by-Layer Nanoparticle Surfaces Enhance Biofilm Penetration and Eradication</title>
<link>https://hdl.handle.net/1721.1/160691</link>
<description>pH-Responsive, Charge-Reversing Layer-by-Layer Nanoparticle Surfaces Enhance Biofilm Penetration and Eradication
Deiss-Yehiely, Elad; Cárcamo-Oyarce, Gerardo; Berger, Adam G; Ribbeck, Katharina; Hammond, Paula T
Microbes entrenched within biofilms can withstand 1000-fold higher concentrations of antibiotics, in part due to the viscous extracellular matrix that sequesters and attenuates antimicrobial activity. Nanoparticle (NP)-based therapeutics can aid in delivering higher local concentrations throughout biofilms as compared to free drugs alone, thereby enhancing the efficacy. Canonical design criteria dictate that positively charged nanoparticles can multivalently bind to anionic biofilm components and increase biofilm penetration. However, cationic particles are toxic and are rapidly cleared from circulation in vivo, limiting their use. Therefore, we sought to design pH-responsive NPs that change their surface charge from negative to positive in response to the reduced biofilm pH microenvironment. We synthesized a family of pH-dependent, hydrolyzable polymers and employed the layer-by-layer (LbL) electrostatic assembly method to fabricate biocompatible NPs with these polymers as the outermost surface. The NP charge conversion rate, dictated by polymer hydrophilicity and the side-chain structure, ranged from hours to undetectable within the experimental timeframe. LbL NPs with an increasingly fast charge conversion rate more effectively penetrated through, and accumulated throughout, wildtype (PAO1) and mutant overexpressing biomass (ΔwspF) Pseudomonas aeruginosa biofilms. Finally, tobramycin, an antibiotic known to be trapped by anionic biofilm components, was loaded into the final layer of the LbL NP. There was a 3.2-fold reduction in ΔwspF colony forming units for the fastest charge-converting NP as compared to both the slowest charge converter and free tobramycin. These studies provide a framework for the design of biofilm-penetrating NPs that respond to matrix interactions, ultimately increasing the efficacious delivery of antimicrobials.
</description>
<pubDate>Fri, 30 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160691</guid>
<dc:date>2023-06-30T00:00:00Z</dc:date>
</item>
<item>
<title>STING Protein-Based In Situ Vaccine Synergizes CD4+ T, CD8+ T, and NK Cells for Tumor Eradication</title>
<link>https://hdl.handle.net/1721.1/160690</link>
<description>STING Protein-Based In Situ Vaccine Synergizes CD4+ T, CD8+ T, and NK Cells for Tumor Eradication
He, Yanpu; Hong, Celestine; Huang, Shengnan; Kaskow, Justin A; Covarrubias, Gil; Pires, Ivan S; Sacane, James C; Hammond, Paula T; Belcher, Angela M
Stimulator of interferon genes (STING) signaling is a promising target in cancer immunotherapy, with many ongoing clinical studies in combination with immune checkpoint blockade (ICB). Existing STING-based therapies largely focus on activating CD8+ T cell or NK cell-mediated cytotoxicity, while the role of CD4+ T cells in STING signaling has yet to be extensively studied in vivo. Here, a distinct CD4-mediated, protein-based combination therapy of STING and ICB as an in situ vaccine is reported. The treatment eliminates subcutaneous MC38 and YUMM1.7 tumors in 70–100% of mice and protects all cured mice against rechallenge. Mechanistic studies reveal a robust TH1 polarization and suppression of Treg of CD4+ T cells, followed by an effective collaboration of CD4+ T, CD8+ T, and NK cells to eliminate tumors. Finally, the potential to overcome host STING deficiency by significantly decreasing MC38 tumor burden in STING KO mice is demonstrated, addressing the translational challenge for the 19% of the human population with loss-of-function STING variants.
</description>
<pubDate>Tue, 04 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160690</guid>
<dc:date>2023-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Transforming ovarian cancer care by targeting minimal residual disease</title>
<link>https://hdl.handle.net/1721.1/160689</link>
<description>Transforming ovarian cancer care by targeting minimal residual disease
Jazaeri, Amir A; Grisham, Rachel; Knisely, Anne; Spranger, Stefani; Zamarin, Dmitriy; Hillman, R Tyler; Lawson, Barrett C; Burns, Kathleen H; Lee, Sanghoon; Westin, Shannon N; Moiso, Enrico; Williams, Marc J; Bardhan, Neelkanth M; Pisanic, Thomas; Matulonis, Ursula; Weigelt, Britta; Shih, IeMing; Konstantinopoulos, Panagiotis A; Gaillard, Stephanie; Wang, Linghua; Aghajanian, Carol; D’Andrea, Alan D; Hammond, Paula; Shah, Sohrab; Wucherpfennig, Kai W; Lu, Karen H
Frontline treatment and resultant cure rates in patients with advanced ovarian cancer have changed little over the past several decades. Here, we outline a multidisciplinary approach aimed at gaining novel therapeutic insights by focusing on the poorly understood minimal residual disease phase of ovarian cancer that leads to eventual incurable recurrences.
</description>
<pubDate>Fri, 10 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160689</guid>
<dc:date>2023-11-10T00:00:00Z</dc:date>
</item>
<item>
<title>Trafficking through the blood–brain barrier is directed by core and outer surface components of layer‐by‐layer nanoparticles</title>
<link>https://hdl.handle.net/1721.1/160688</link>
<description>Trafficking through the blood–brain barrier is directed by core and outer surface components of layer‐by‐layer nanoparticles
Lamson, Nicholas G; Pickering, Andrew J; Wyckoff, Jeffrey; Ganesh, Priya; Calle, Elizabeth A; Straehla, Joelle P; Hammond, Paula T
Drug‐carrying nanoparticles are a promising strategy to deliver therapeutics into the brain, but their translation requires better characterization of interactions between nanomaterials and endothelial cells of the blood–brain barrier (BBB). Here, we use a library of 18 layer‐by‐layer electrostatically assembled nanoparticles (NPs) to independently assess the impact of NP core and surface materials on in vitro uptake, transport, and intracellular trafficking in brain endothelial cells. We demonstrate that NP core stiffness determines the magnitude of transport, while surface chemistry directs intracellular trafficking. Finally, we demonstrate that these factors similarly dictate in vivo BBB transport using intravital imaging through cranial windows in mice. We identify that hyaluronic acid surface chemistry increases transport across the BBB in vivo, and flow conditions are necessary to replicate this finding in vitro. Taken together, these findings highlight the importance of assay geometry, cell biology, and fluid flow in developing nanocarriers for delivery to the brain.
</description>
<pubDate>Thu, 28 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160688</guid>
<dc:date>2023-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>DJESTHESIA: Tangible Multimedia for DJs</title>
<link>https://hdl.handle.net/1721.1/160686</link>
<description>DJESTHESIA: Tangible Multimedia for DJs
Castelló Ferrer, Eduardo
DJESTHESIA uses tangible interaction to craft real-time audiovisual multimedia, blending sound, visuals, and gestures into a unified live performance. The project supports four interaction modes: I) Knob changes music, where standard DJing is performed. II) Music changes visuals, where changes in the audio parameters made through the mixer have a direct impact on the visualizations representing the music (e.g., color palette). III) Gesture changes visuals, where gestures and body movements make it possible to interact physically with the visual representation of the music (e.g., grab, release, throw). IV) Gesture changes music, where gestures can convey information to an audio composition software to alter aspects of the music being played (e.g., EQs). The aim of DJESTHESIA is to transform the DJ into both a performer and a performance.
SIGGRAPH Real-Time Live! ’25, Vancouver, BC, Canada
</description>
<pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160686</guid>
<dc:date>2025-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogel dressings with intrinsic antibiofilm and antioxidative dual functionalities accelerate infected diabetic wound healing</title>
<link>https://hdl.handle.net/1721.1/160562</link>
<description>Hydrogel dressings with intrinsic antibiofilm and antioxidative dual functionalities accelerate infected diabetic wound healing
Pranantyo, Dicky; Yeo, Chun Kiat; Wu, Yang; Fan, Chen; Xu, Xiaofei; Yip, Yun Sheng; Vos, Marcus Ivan Gerard; Mahadevegowda, Surendra H; Lim, Priscilla Lay Keng; Yang, Liang; Hammond, Paula T; Leavesley, David Ian; Tan, Nguan Soon; Chan-Park, Mary B
Chronic wounds are often infected with biofilm bacteria and characterized by high oxidative stress. Current dressings that promote chronic wound healing either require additional processes such as photothermal irradiation or leave behind gross amounts of undesirable residues. We report a dual-functionality hydrogel dressing with intrinsic antibiofilm and antioxidative properties that are synergistic and low-leaching. The hydrogel is a crosslinked network with tethered antibacterial cationic polyimidazolium and antioxidative N-acetylcysteine. In a murine diabetic wound model, the hydrogel accelerates the closure of wounds infected with methicillin-resistant Staphylococcus aureus or carbapenem-resistant Pseudomonas aeruginosa biofilm. Furthermore, a three-dimensional ex vivo human skin equivalent model shows that N-acetylcysteine promotes the keratinocyte differentiation and accelerates the re-epithelialization process. Our hydrogel dressing can be made into different formats for the healing of both flat and deep infected chronic wounds without contamination of the wound or needing other modalities such as photothermal irradiation.
</description>
<pubDate>Thu, 01 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160562</guid>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Layer-by-Layer Polymer Functionalization Improves Nanoparticle Penetration and Glioblastoma Targeting in the Brain</title>
<link>https://hdl.handle.net/1721.1/160561</link>
<description>Layer-by-Layer Polymer Functionalization Improves Nanoparticle Penetration and Glioblastoma Targeting in the Brain
Pickering, Andrew J; Lamson, Nicholas G; Marand, Michael H; Straehla, Joelle P; Hammond, Paula T; Huang, Wei
Glioblastoma is characterized by diffuse infiltration into surrounding healthy brain tissues, which makes it challenging to treat. Complete surgical resection is often impossible, and systemically delivered drugs cannot achieve adequate tumor exposure to prevent local recurrence. Convection-enhanced delivery (CED) offers a method for administering therapeutics directly into brain tumor tissue, but its impact has been limited by rapid clearance and off-target cellular uptake. Nanoparticle (NP) encapsulation presents a promising strategy for extending the retention time of locally delivered therapies while specifically targeting glioblastoma cells. However, the brain's extracellular structure poses challenges for NP distribution due to its narrow, tortuous pores and a harsh ionic environment. In this study, we investigated the impact of NP surface chemistry using layer-by-layer (LbL) assembly to design drug carriers for broad spatial distribution in brain tissue and specific glioblastoma cell targeting. We found that poly-l-glutamate and hyaluronate were effective surface chemistries for targeting glioblastoma cells in vitro. Coadsorbing either polymer with a small fraction of PEGylated polyelectrolytes improved the colloidal stability without sacrificing cancer cell selectivity. Following CED in vivo, gadolinium-functionalized LbL NPs enabled MRI visualization and exhibited a distribution volume up to three times larger than liposomes and doubled the retention half-time up to 13.5 days. Flow cytometric analysis of CED-treated murine orthotopic brain tumors indicated greater cancer cell uptake and reduced healthy cell uptake for LbL NPs compared to nonfunctionalized liposomes. The distinct cellular outcomes for different colayered LbL NPs provide opportunities to tailor this modular delivery system for various therapeutic applications.
</description>
<pubDate>Wed, 22 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160561</guid>
<dc:date>2023-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Electrostatic adsorption of polyanions onto lipid nanoparticles controls uptake, trafficking, and transfection of RNA and DNA therapies</title>
<link>https://hdl.handle.net/1721.1/160560</link>
<description>Electrostatic adsorption of polyanions onto lipid nanoparticles controls uptake, trafficking, and transfection of RNA and DNA therapies
Nabar, Namita; Dacoba, Tamara G; Covarrubias, Gil; Romero-Cruz, Denisse; Hammond, Paula T
Rapid advances in nucleic acid therapies highlight the immense therapeutic potential of genetic therapeutics. Lipid nanoparticles (LNPs) are highly potent nonviral transfection agents that can encapsulate and deliver various nucleic acid therapeutics, including but not limited to messenger RNA (mRNA), silencing RNA (siRNA), and plasmid DNA (pDNA). However, a major challenge of targeted LNP-mediated systemic delivery is the nanoparticles’ nonspecific uptake by the liver and the mononuclear phagocytic system, due partly to the adsorption of endogenous serum proteins onto LNP surfaces. Tunable LNP surface chemistries may enable efficacious delivery across a range of organs and cell types. Here, we describe a method to electrostatically adsorb bioactive polyelectrolytes onto LNPs to create layered LNPs (LLNPs). LNP cores varying in nucleic acid cargo and component lipids were stably layered with four biologically relevant polyanions: hyaluronate (HA), poly-L-aspartate (PLD), poly-L-glutamate (PLE), and polyacrylate (PAA). We further investigated the impact of the four surface polyanions on the transfection and uptake of mRNA- and pDNA-loaded LNPs in cell cultures. PLD- and PLE-LLNPs increased mRNA transfection twofold over unlayered LNPs in immune cells. HA-LLNPs increased pDNA transfection rates by more than twofold in epithelial and immune cells. In a healthy C57BL/6 murine model, PLE- and HA-LLNPs increased transfection by 1.8-fold to 2.5-fold over unlayered LNPs in the liver and spleen. These results suggest that LbL assembly is a generalizable, highly tunable platform to modify the targeting specificity, stability, and transfection efficacy of LNPs, as well as incorporate other charged targeting and therapeutic molecules into these systems.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160560</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Targeting and monitoring ovarian cancer invasion with an RNAi and peptide delivery system</title>
<link>https://hdl.handle.net/1721.1/160559</link>
<description>Targeting and monitoring ovarian cancer invasion with an RNAi and peptide delivery system
Hao, Liangliang; Boehnke, Natalie; Elledge, Susanna K; Harzallah, Nour-Saïda; Zhao, Renee T; Cai, Eva; Feng, Yu-Xiong; Neaher, Sofia; Fleming, Heather E; Gupta, Piyush B; Hammond, Paula T; Bhatia, Sangeeta N
RNA interference (RNAi) therapeutics are an emerging class of medicines that selectively target mRNA transcripts to silence protein production and combat disease. Despite the recent progress, a generalizable approach for monitoring the efficacy of RNAi therapeutics without invasive biopsy remains a challenge. Here, we describe the development of a self-reporting, theranostic nanoparticle that delivers siRNA to silence a protein that drives cancer progression while also monitoring the functional activity of its downstream targets. Our therapeutic target is the transcription factor SMARCE1, which was previously identified as a key driver of invasion in early-stage breast cancer. Using a doxycycline-inducible shRNA knockdown in OVCAR8 ovarian cancer cells both in vitro and in vivo, we demonstrate that SMARCE1 is a master regulator of genes encoding proinvasive proteases in a model of human ovarian cancer. We additionally map the peptide cleavage profiles of SMARCE1-regulated proteases so as to design a readout for downstream enzymatic activity. To demonstrate the therapeutic and diagnostic potential of our approach, we engineered self-assembled layer-by-layer nanoparticles that can encapsulate nucleic acid cargo and be decorated with peptide substrates that release a urinary reporter upon exposure to SMARCE1-related proteases. In an orthotopic ovarian cancer xenograft model, theranostic nanoparticles were able to knock down SMARCE1, which was in turn reported through a reduction in protease-activated urinary reporters. These LbL nanoparticles both silence gene products by delivering siRNA and noninvasively report on downstream target activity by delivering synthetic biomarkers to sites of disease, enabling dose-finding studies as well as longitudinal assessments of efficacy.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160559</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Carboxylated Nanoparticle Surfaces Enhance Association with Mucoid Pseudomonas aeruginosa Biofilms</title>
<link>https://hdl.handle.net/1721.1/160558</link>
<description>Carboxylated Nanoparticle Surfaces Enhance Association with Mucoid Pseudomonas aeruginosa Biofilms
Deiss-Yehiely, Elad; Dzordzorme, Abigail E; Loiselle, Maggie Elizabeth; Yonker, Lael M; Hammond, Paula T
Pseudomonas aeruginosa biofilms comprise three main polysaccharides: alginate, psl, and pel, which all imbue tolerance against exogenous antimicrobials. Nanoparticles (NPs) are an exciting new strategy to overcome the biofilm matrix for therapeutic delivery applications; however, the absence of any FDA-approved biofilm-specific NP formulation can be attributed to the complex interplay of physicochemical forces at the biofilm-NP interface. Here, we leverage a set of inducible, polysaccharide-specific, expressing isogenic P. aeruginosa mutants coupled with an assembled layer-by-layer NP (LbL NP) panel to characterize biofilm-NP interactions. When investigating these interactions using confocal microscopy, alginate-layered NPs associated more than dextran-sulfate-layered NPs with biofilms that had increased alginate production, including biofilms produced by mucoid P. aeruginosa isolates from people with cystic fibrosis. These differences were further confirmed in LbL NPs layered with polysaccharide- or hydrocarbon-based polymers with pendent carboxylate or sulfate functional groups. These data suggest carboxylated NP surfaces have enhanced interactions specifically with mucoid biofilms as compared to sulfated surfaces and lay the foundation for their inclusion as a design element for increasing biofilm-NP interactions and efficacious drug delivery.
</description>
<pubDate>Thu, 14 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160558</guid>
<dc:date>2024-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Poly(β-aminoester) Physicochemical Properties Govern the Delivery of siRNA from Electrostatically Assembled Coatings</title>
<link>https://hdl.handle.net/1721.1/160557</link>
<description>Poly(β-aminoester) Physicochemical Properties Govern the Delivery of siRNA from Electrostatically Assembled Coatings
Berger, Adam G; DeLorenzo, Charles; Vo, Chau; Kaskow, Justin A; Nabar, Namita; Hammond, Paula T
Localized short interfering RNA (siRNA) therapy has the potential to drive high-specificity molecular-level treatment of a variety of disease states. Unfortunately, effective siRNA therapy suffers from several barriers to its intracellular delivery. Thus, drug delivery systems that package and control the release of therapeutic siRNAs are necessary to overcome these obstacles to clinical translation. Layer-by-layer (LbL) electrostatic assembly of thin film coatings containing siRNA and protonatable, hydrolyzable poly(β-aminoester) (PBAE) polymers is one such drug delivery strategy. However, the impact of PBAE physicochemical properties on the transfection efficacy of siRNA released from LbL thin film coatings has not been systematically characterized. In this study, we investigate the siRNA transfection efficacy of four structurally similar PBAEs in vitro. We demonstrate that small changes in structure yield large changes in physicochemical properties, such as hydrophobicity, pKa, and amine chemical structure, driving differences in the interactions between PBAEs and siRNA in polyplexes and in LbL thin film coatings for wound dressings. In our polymer set, Poly3 forms the most stable interactions with siRNA (Keff,w/w = 0.298) to slow release kinetics and enhance transfection of reporter cells in both colloidal and thin film coating approaches. This is due to its unique physicochemical properties: high hydrophobicity (clog P = 7.86), effective pKa closest to endosomal pH (pKa = 6.21), and high cooperativity in buffering (nhill = 7.2). These properties bestow Poly3 with enhanced endosomal buffering and escape properties. Taken together, this work elucidates the connections between small changes in polymer structure, emergent properties, and polyelectrolyte theory to better understand PBAE transfection efficacy.
</description>
<pubDate>Tue, 30 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160557</guid>
<dc:date>2024-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Surfactant-Mediated Assembly of Precision-Size Liposomes</title>
<link>https://hdl.handle.net/1721.1/160556</link>
<description>Surfactant-Mediated Assembly of Precision-Size Liposomes
Pires, Ivan S; Suggs, Jack R; Carlo, Isabella S; Yun, DongSoo; Hammond, Paula T; Irvine, Darrell J
Liposomes can greatly improve the pharmacokinetics of therapeutic agents due to their ability to encapsulate drugs and accumulate in target tissues. Considerable effort has been focused on methods to synthesize these nanocarriers in the past decades. However, most methods fail to controllably generate lipid vesicles at specific sizes and with low polydispersity, especially via scalable approaches suitable for clinical product manufacturing. Here, we report a surfactant-assisted liposome assembly method enabling the precise production of monodisperse liposomes with diameters ranging from 50 nm to 1 μm. To overcome scalability limitations, we used tangential flow filtration, a scalable size-based separation technique, to readily concentrate and purify the liposomal samples from more than 99.9% of detergent. Further, we propose two modes of liposome self-assembly following detergent dilution to explain the wide range of liposome size control, one in which phase separation into lipid-rich and detergent-rich phases drives the formation of large bilayer liposomes and a second where the rate of detergent monomer partitioning into solution controls bilayer leaflet imbalances that promote fusion into larger vesicles. We demonstrate the utility of controlled size assembly of liposomes by evaluating nanoparticle uptake in macrophages, where we observe a clear linear relationship between vesicle size and total nanoparticle uptake.
</description>
<pubDate>Tue, 13 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160556</guid>
<dc:date>2024-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Carbene formation as a mechanism for efficient intracellular uptake of cationic antimicrobial carbon acid polymers</title>
<link>https://hdl.handle.net/1721.1/160555</link>
<description>Carbene formation as a mechanism for efficient intracellular uptake of cationic antimicrobial carbon acid polymers
Koh, Chong Hui; Lambu, Mallikharjuna Rao; Tan, Chongyun; Wei, Guangmin; Kok, Zhi Yuan; Zhang, Kaixi; Vu, Quang Huy Nhat; Panneerselvam, Muthuvel; Ooi, Ying Jie; Tan, Shiow Han; Wang, Zheng; Tatina, Madhu Babu; Ng, Justin Tze Yang; Guo, Aoxin; Tonanon, Panyawut; Dang, Tram T; Gan, Yunn-Hwen; Mu, Yuguang; Hammond, Paula T; Chi, Yonggui Robin; Webster, Richard D; Pullarkat, Sumod A; Li, Qingjie; Greenberg, E Peter; Gründling, Angelika; Pethe, Kevin; Chan-Park, Mary B
Cationic polymers have emerged as promising next-generation antimicrobial agents, albeit with inherent limitations such as low potency and limited biocompatibility. Classical cationic polymers kill bacteria via physical membrane disruption. We propose a non-classical mechanism of crossing the bacterial plasma membrane barrier, a step required for subsequent inhibition of intracellular targets, by cationic polymers which are carbon acids. Oligoimidazolium (OIM) carbon acids, instead of lysing bacteria, transiently deprotonate in water to form hydrophobic N-heterocyclic carbenes (NHCs) and exhibit efficient plasma membrane translocation. Only OIMs that are carbon acids have potent antibacterial activities against even colistin- and multidrug-resistant bacteria. OIM amide derivatives exhibit excellent antibacterial efficacy in murine sepsis and thigh infection models, while a polymeric version acts as a prophylactic agent against bovine mastitis, which is a global agricultural problem. This study unveils a promising path for the development of an alternative class of potent antimicrobial agents.
</description>
<pubDate>Sat, 12 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160555</guid>
<dc:date>2025-07-12T00:00:00Z</dc:date>
</item>
<item>
<title>Vane rheometry of viscoelastic liquids and yield stress fluids</title>
<link>https://hdl.handle.net/1721.1/160432</link>
<description>Vane rheometry of viscoelastic liquids and yield stress fluids
Vadillo, Damien C.; Owens, Crystal E.; Perego, Alessandro; McKinley, Gareth H.
An inter-laboratory comparison was performed to set a baseline for how the properties of difficult materials vary based on location and measurement tool. These tests focused on the rheology of a Newtonian fluid, a viscous silicone oil, and two colloidal gels with yield stress behavior: a commercially available milk-based cream and an aging aluminum oxide hydroxide gel. Rheological data were collected on these materials using an array of rheometric test geometries including a cone and plate, parallel plates, a cup and bob, a 4-arm vane, and 12- and 24-arm vanes having fractal cross sections that were fabricated independently by each lab and for which accurate torque and rotation conversion factors have been established. Characterization by the 3D-printed fractal vanes agrees between the two laboratories and with reference data obtained with cone-and-plate, parallel-plate, and cup-and-bob measurement tools. The viscous oil exhibited predominantly Newtonian behavior in shear, while the weak viscoelastic effects that emerged at high frequency can be accurately described by a fractional Maxwell model. The colloidal gels exhibited a more intricate thixo-elastoviscoplastic (TEVP) rheological behavior, including thixotropy, as well as distinct dynamic and static yield stresses. To explore the elastoviscoplastic character of these systems, we show how the fractal vane geometry can be readily utilized with such materials to measure creep and partial elastic recoil without concern about slip or shear banding.
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160432</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Building confidence in models for complex barrier systems for radionuclides</title>
<link>https://hdl.handle.net/1721.1/160431</link>
<description>Building confidence in models for complex barrier systems for radionuclides
Sarsenbayev, Dauren; Tournassat, Christophe; Steefel, Carl I.; Wainwright, Haruko M.
The modeling and simulation of the Cement–clay Interaction–Diffusion field (CI-D) experiment at the Mont Terri site in Switzerland presented here demonstrates that it is possible to capture the multiscale physical and chemical features of natural and engineered barrier systems for radionuclides. The simulations are successfully carried out with the newly developed CrunchODiTi high-performance computing software that accounts for multiple continua, including a continuum representing the electrical double layer (EDL) developed along negatively charged clay particles in clay rock. The simulation also accounts for both the complex three-dimensional (3D) geometry, expected as the norm in a geological waste repository, and the anisotropy of the geological formation. In addition, the high resolution of the model makes it possible to include “skin effects” developed at the interface between highly reactive materials, in this case between the high pH cement and the circumneutral but electrostatically charged Opalinus Clay. The successful history matching with the field experiment demonstrates that the distinct geochemical and physical properties of the cement and the Opalinus Clay in the CI-D experiment can be accounted for. Such analyses are essential for developing a defensible safety case for the underground storage of radioactive waste.
</description>
<pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160431</guid>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing Resource Circularity in Aluminum Production through Nanofiltration of Waste Cryolite</title>
<link>https://hdl.handle.net/1721.1/160325</link>
<description>Enhancing Resource Circularity in Aluminum Production through Nanofiltration of Waste Cryolite
Lee, Trent R.; Foo, Zi Hao; Nguyen, Vinn; Lienhard, John H
This study presents a novel approach to the selective separation of aluminum from waste cryolite electrolyte with two nanofiltration membranes: a conventional polyamide membrane and a membrane coated with a polyelectrolyte layer. Utilizing transmission electron microscopy and Fourier transform infrared spectroscopy, we find that the polyelectrolyte coating significantly increases the density of positively charged ammonium groups on the membrane surface, thereby enhancing the Donnan exclusion of aluminum ions. Notably, the polyelectrolyte coating enhances the sodium/aluminum separation factor by 55%. Our experimental results demonstrate that the coated membrane sustains high aluminum rejection rates, averaging 99.1%, while permitting substantial permeation of sodium, lithium, and potassium ions. This selective permeability is pronounced at lower pH levels, where the sodium/aluminum separation factor peaks at 102.02 for chloride-rich waste cryolite. Our process modeling using the Donnan steric pore model with dielectric exclusion substantiates the practical viability of Donnan-enhanced nanofiltration for processing waste cryolite. Our module-scale analysis indicates that the efficient aluminum concentration in the retentate, achieving a sodium/aluminum ratio of approximately 2.6, is viable for upcycling cryolite electrolyte and promoting a circular aluminum economy. Furthermore, the aluminum-depleted permeate, with aluminum cationic composition as low as 0.00194%, makes ample progress toward a benignly disposable effluent, reducing the aluminum industry’s environmental footprint.
</description>
<pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160325</guid>
<dc:date>2025-01-06T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a Circular Lithium Economy with Electrodialysis: Upcycling Spent Battery Leachates with Selective and Bipolar Ion-Exchange Membranes</title>
<link>https://hdl.handle.net/1721.1/160305</link>
<description>Toward a Circular Lithium Economy with Electrodialysis: Upcycling Spent Battery Leachates with Selective and Bipolar Ion-Exchange Membranes
Foo, Zi Hao; Lee, Trent R.; Wegmueller, Jakob M.; Heath, Samuel M.; Lienhard, John H
Recycling spent lithium-ion batteries offers a sustainable solution to reduce ecological degradation from mining and mitigate raw material shortages and price volatility. This study investigates using electrodialysis with selective and bipolar ion-exchange membranes to establish a circular economy for lithium-ion batteries. An experimental data set of over 1700 ion concentration measurements across five current densities, two solution compositions, and three pH levels supports the techno-economic analysis. Selective electrodialysis (SED) isolates lithium ions from battery leachates, yielding a 99% Li-pure retentate with 68.8% lithium retention, achieving relative ionic fluxes up to 2.41 for Li+ over transition metal cations and a selectivity of 5.64 over monovalent cations. Bipolar membrane electrodialysis (BMED) converts LiCl into high-purity LiOH and HCl, essential for battery remanufacturing and reducing acid consumption via acid recycling. High current densities reduce ion leakage, achieving lithium leakage as low as 0.03%, though hydronium and hydroxide leakage in BMED remains high at 11–20%. Our analysis projects LiOH production costs between USD 1.1 and 3.6 per kilogram, significantly lower than current prices. Optimal SED and BMED conditions are identified, emphasizing the need to control proton transport in BMED and improve cobalt–lithium separation in SED to enhance cost efficiency.
</description>
<pubDate>Fri, 18 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160305</guid>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Sub‐Nanoscale “Water Fingers” in Interfacial Polymerization</title>
<link>https://hdl.handle.net/1721.1/160304</link>
<description>Dynamic Sub‐Nanoscale “Water Fingers” in Interfacial Polymerization
Mai, Zhaohuan; Yoshioka, Tomohisa; Deshmukh, Akshay; Yuan, Tianmu; Zhu, Junyong; Yuan, Jinkai; Gonzales, Ralph Rolly; Yamamoto, Ayano; Shi, Yongxuan; Fu, Wenming; Guan, Kecheng; Li, Zhan; Zhang, Pengfei; Lienhard, John H; Matsuyama, Hideto
Interfacial polymerization (IP) is widely used to fabricate high‐performance membranes, yet the molecular‐level dynamics that govern monomer transport across liquid–liquid interfaces remain poorly understood. Here it is reported that sub‐nanoscale “water fingers”—transient chains of water molecules—modulate the interfacial behavior of amine monomers during IP, dictating the structure and performance of the resulting polyamide films. Using molecular dynamics simulations of archetypal membrane‐forming systems (m‐phenylenediamine (MPD)–trimesoyl chloride (TMC) for reverse osmosis and piperazine (PIP)–TMC for nanofiltration), it is revealed that water fingers differentially stabilize monomer transport across the aqueous‐organic interface, correlating with experimentally observed disparities in film density and permeability. These findings offer a new physical picture of interfacial reactivity, establishing water fingers as critical, tunable elements of monomer transport. This work provides mechanistic insights into a century‐old reaction and opens new design strategies for ultrathin films and interfacial materials.
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160304</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>User-centered evaluation of visual generative AI for city design: an exploratory technology acceptance model analysis</title>
<link>https://hdl.handle.net/1721.1/160303</link>
<description>User-centered evaluation of visual generative AI for city design: an exploratory technology acceptance model analysis
Haddad, Fadi Ghassan; Jang, Kee Moon; Duarte, Fábio; Rajaonson, Juste; Ratti, Carlo
This study explores the potential of visual generative artificial intelligence (visual GenAI) in augmenting city design workflows. Using customized DALL-E 3 interfaces, we facilitated engagement sessions with members of an academic planning community to assess their perceptions of AI-generated imagery before and after its use, with a focus on main street revitalization (n = 24 qualitative, n = 17 quantitative). Drawing on the Technology Acceptance Model, we assessed cognitive, operational, and participatory dimensions influencing user attitudes toward AI-assisted urban design. Perceived usefulness in cognitive and participatory tasks emerged as the strongest predictors of attitudes toward visual GenAI use, explaining up to 71% and 44% of the variance, respectively. While participants valued the ability to generate visuals and stimulate dialogue rapidly, challenges with prompt precision, output predictability, and interface usability limited broader accessibility. User expertise moderated perceptions, with higher-proficiency participants generally expressing more positive attitudes toward its use. Our preliminary findings suggest that while visual GenAI may offer new opportunities to augment cognitive and co-design processes, its integration into city design workflows may also depend on diverse training datasets to address biases; human-centered design with clearer affordances and support for non-expert users; and validation processes that maintain human oversight. This study contributes to the emerging research on human-AI work integration by providing initial empirical evidence on the opportunities and constraints of visual GenAI tools in city design contexts, while establishing a foundation for future research.
</description>
<pubDate>Sun, 13 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160303</guid>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Utility of Plasma Microbial Cell-Free DNA Surveillance in Neutropenic Patients with Acute Myeloid Leukemia Undergoing Outpatient Chemotherapy: A Case Series</title>
<link>https://hdl.handle.net/1721.1/160036</link>
<description>Clinical Utility of Plasma Microbial Cell-Free DNA Surveillance in Neutropenic Patients with Acute Myeloid Leukemia Undergoing Outpatient Chemotherapy: A Case Series
Lampou, Maria; Trull, Elizabeth C.; Warren, Hailey M.; Ghebremichael, Musie S.; Nakka, Raja; Floyd, Daniel J.; Fathi, Amir T.; Brunner, Andrew M.; Mansour, Michael K.
Background/Objectives: The main objective of the study is to assess the clinical utility of microbial cell-free DNA (mcfDNA) in neutropenic patients diagnosed with acute myeloid leukemia (AML) undergoing chemotherapy in the outpatient setting. Neutropenia is a common complication in this patient cohort and enhances the risk of fatal opportunistic bacterial and fungal infections. Accurate and timely diagnosis of these infections in outpatient asymptomatic individuals is critical. Methods: Fourteen patients were studied in this prospective observational case series. Traditional blood cultures (BCs) were obtained when clinically indicated, and blood samples were collected for plasma mcfDNA metagenomic sequencing up to two times a week at outpatient oncology appointments. Results were compared to identify potential infectious agents. Results: BCs identified pathogens in only two patients, despite several cases where infection was suspected. In contrast, mcfDNA testing detected pathogens in 11 of the 14 patients, including bacteria, such as Staphylococcus aureus, and invasive fungi, such as Candida and Aspergillus species, and Pneumocystis jirovecii. Conclusions: In the outpatient setting, mcfDNA surveillance offers a more reliable method for detecting pathogens. This approach identified actionable microbiologic results in immunocompromised individuals who did not meet standard clinical criteria for suspicion of infection. Further research is required to confirm the potential of mcfDNA surveillance in an outpatient setting to guide more accurate treatment decisions, reduce extensive clinical investigations, and improve neutropenic patient outcomes.
</description>
<pubDate>Sat, 05 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160036</guid>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Unlocking the Potential of MBenes in Li/Na-Ion Batteries</title>
<link>https://hdl.handle.net/1721.1/160035</link>
<description>Unlocking the Potential of MBenes in Li/Na-Ion Batteries
Li, Zixin; Hu, Yao; Lan, Haihui; Xia, Huicong
MBenes, an emerging family of two-dimensional transition metal boride materials, are gaining prominence in alkali metal-ion battery research owing to their distinctive stratified architecture, enhanced charge transport properties, and exceptional electrochemical durability. This analysis provides a comprehensive examination of morphological characteristics and fabrication protocols for MBenes, with particular focus on strategies for optimizing energy storage metrics through controlled adjustment of interlayer distance and tailored surface modifications. The discussion highlights these materials’ unique capability to host substantial alkali metal ions, translating to exceptional longevity during charge–discharge cycling and remarkable high-current performance in both lithium and sodium battery systems. Current obstacles to materials development are critically evaluated, encompassing precision control in nanoscale synthesis, reproducibility in large-scale production, enhancement of thermodynamic stability, and eco-friendly processing requirements. Prospective research pathways are proposed, including sustainable manufacturing innovations, atomic-level structural tailoring through computational modeling, and expansion into hybrid energy storage-conversion platforms. By integrating fundamental material science principles with practical engineering considerations, this work seeks to establish actionable frameworks for advancing MBene-based technologies toward next-generation electrochemical storage solutions with enhanced energy density and operational reliability.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160035</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Composition-Dependent Structural, Phonon, and Thermodynamical Characteristics of Zinc-Blende BeZnO</title>
<link>https://hdl.handle.net/1721.1/160034</link>
<description>Composition-Dependent Structural, Phonon, and Thermodynamical Characteristics of Zinc-Blende BeZnO
Talwar, Devki N.; Becla, Piotr
Both ZnO and BeO semiconductors crystallize in the hexagonal wurtzite (wz), cubic rock salt (rs), and zinc-blende (zb) phases, depending upon their growth conditions. Low-dimensional ZnO/BexZn1-xO heterostructures and BexZn1-xO ternary alloy-based devices have recently attracted substantial interest for designing and improving the operation of highly efficient and flexible nano- and micro-electronics. Attempts are being made to engineer different electronic devices to cover light emission over a wide range of wavelengths to meet the growing industrial needs in photonics, energy harvesting, and biomedical applications. For zb materials, both experimental and theoretical studies of the lattice dynamics ωj(q→) have played crucial roles in understanding their optical and electronic properties. Except for zb ZnO, inelastic neutron scattering measurement of ωj(q→) for BeO is still lacking. For the BexZn1-xO ternary alloys, no experimental and/or theoretical studies exist for comprehending their structural, vibrational, and thermodynamical traits (e.g., Debye temperature ΘD(T); specific heat Cv(T)). By adopting a realistic rigid-ion model, we have meticulously simulated the lattice dynamics and thermodynamic properties of both the binary zb ZnO and BeO and the ternary BexZn1-xO alloys. The theoretical results are compared/contrasted against the limited experimental data and/or ab initio calculations. We strongly feel that the phonon/thermodynamic features reported here will encourage spectroscopists to perform similar measurements and check our theoretical conjectures.
</description>
<pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160034</guid>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Potential Expansion of Low-Carbon Liquid Fuel Production Using Hydrogen-Enhanced Biomass/Municipal Solid Waste Gasification</title>
<link>https://hdl.handle.net/1721.1/160033</link>
<description>Potential Expansion of Low-Carbon Liquid Fuel Production Using Hydrogen-Enhanced Biomass/Municipal Solid Waste Gasification
Ostadi, Mohammad; Cohn, Daniel R.; Zang, Guiyan; Bromberg, Leslie
Low-carbon liquid fuels are needed for decarbonization of hard-to-decarbonize segments of the transportation sector. This decarbonization can be limited by the amount of renewable carbon. Thermochemical conversion of biomass/municipal solid waste (MSW) through gasification is a promising route for producing low-carbon fuels. There are two major opportunities for increasing the amount of low-carbon liquid fuel that can be produced from gasification in any region. One is to increase the amount of liquid fuel from a given amount of biomass/MSW, particularly by hydrogen-enhancement of gasification synthesis gas. The second is the potential for a large expansion of biomass feedstock use beyond its present level. Such biomass feedstocks include agricultural waste, forestry waste, MSW, and specially grown biomass that does not interfere with food production. The use of MSW may provide the advantages of an established network for pickup and transportation of feedstock to disposal sites and the avoidance of methane produced from landfilling of MSW. As a case study, we looked at the potential expansion of US low-carbon fuel production, considering the recent projections of the 2024 USDOE report, which estimated potential production of a billion tons/yr of biomass/MSW feedstocks in the US. This report included an estimated potential for liquid biofuel production of 60 billion gallons/yr of diesel energy equivalent fuel without the use of hydrogen enhancement. With hydrogen-enhanced biomass/MSW gasification, this projection could be doubled to 120 billion gallons/yr of diesel energy equivalent fuel. Furthermore, the co-location potential of biomass/MSW resources with potential renewable energy generation sites is explored. In the US, this overlap of hydrogen and biomass production potential is concentrated in regions such as the Midwest, Texas, and California. This co-location strategy enhances logistical feasibility, reducing transport costs and optimizing energy system integration, and it can be applied to other geographical locations. Hydrogen-enhanced biomass/MSW gasification offers a promising route to substantially increase low-carbon liquid fuel production (e.g., methanol) and support greenhouse gas reduction goals.
</description>
<pubDate>Sat, 21 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160033</guid>
<dc:date>2025-06-21T00:00:00Z</dc:date>
</item>
<item>
<title>Digital Transformation of Healthcare Enterprises in the Era of Disruptions—A Structured Literature Review</title>
<link>https://hdl.handle.net/1721.1/160032</link>
<description>Digital Transformation of Healthcare Enterprises in the Era of Disruptions—A Structured Literature Review
Hundal, Gaganpreet Singh; Rhodes, Donna; Laux, Chad
Digital transformation is the process of using digital technologies to create or modify existing business processes and customer experiences, leveraging cutting-edge technology to meet changing market needs. Disruptions like the COVID-19 pandemic, regional wars, and climate-driven natural disasters create consequential scenarios, e.g., global supply chain disruption creating further demand–supply mismatch for healthcare enterprises. According to KPMG’s 2021 Healthcare CEO Future Pulse, 97% of healthcare leaders reported that COVID-19 significantly accelerated the digital transformation agenda. Successfully implemented digital transformation initiatives, for example, digital twins for supply chains, augmented reality, the IoT, and cybersecurity technologies, have significantly enhanced resiliency in supply chain and manufacturing operations. However, according to another study conducted by McKinsey &amp; Company, 70% of digital transformation efforts in healthcare enterprises fail to meet their goals. Healthcare enterprises face unique challenges, such as complex regulatory environments, cultural resistance, workforce IT skills, and the need for data interoperability, which make digital transformation a challenging undertaking. Therefore, this study explored potential barriers, enablers, disruption scenarios, and digital transformation use cases for healthcare enterprises. A structured literature review (SLR), followed by thematic content analysis, was conducted to inform the research objectives. A sample of sixty (n = 60) peer-reviewed journal articles was analyzed using research screening criteria and keywords aligned with the research objectives. The key themes for digital transformation use cases identified in this study included information processing capability, workforce enablement, operational efficiency, and supply chain resilience. Collaborative leadership as a change agent, collaboration between information technology (IT) and operational technology (OT), and effective change management were identified as the key enablers for the digital transformation of healthcare enterprises. This study will inform digital transformation leaders, researchers, and healthcare enterprises in the development of enterprise-level proactive strategies, business use cases, and roadmaps for digital transformation.
</description>
<pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/160032</guid>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Charge‐Stabilized Nanodiscs as a New Class of Lipid Nanoparticles</title>
<link>https://hdl.handle.net/1721.1/159988</link>
<description>Charge‐Stabilized Nanodiscs as a New Class of Lipid Nanoparticles
Pires, Ivan S; Hostetler, Alexander; Covarrubias, Gil; Carlo, Isabella S; Suggs, Jack R; Kim, BJ; Pickering, Andrew J; Gordon, Ezra; Irvine, Darrell J; Hammond, Paula T
Nanoparticles have the potential to improve disease treatment and diagnosis due to their ability to incorporate drugs, alter pharmacokinetics, and enable tissue targeting. While considerable effort is placed on developing spherical lipid‐based nanocarriers, recent evidence suggests that high aspect ratio lipid nanocarriers can exhibit enhanced disease site targeting and altered cellular interactions. However, the assembly of lipid‐based nanoparticles into non‐spherical morphologies has typically required incorporating additional agents such as synthetic polymers, proteins, lipid‐polymer conjugates, or detergents. Here, charged lipid headgroups are used to generate stable discoidal lipid nanoparticles from mixed micelles, which are termed charge‐stabilized nanodiscs (CNDs). The ability to generate CNDs in buffers with physiological ionic strength is restricted to lipids with more than one anionic group, whereas monovalent lipids only generate small nanoliposomal assemblies. In mice, the smaller size and anisotropic shape of CNDs promote higher accumulation in subcutaneous tumors than spherical liposomes. Further, the surface chemistry of CNDs can be modified via layer‐by‐layer (LbL) assembly to improve their tumor‐targeting properties over state‐of‐the‐art LbL‐liposomes when tested using a metastatic model of ovarian cancer. The application of charge‐mediated anisotropy in lipid‐based assemblies can aid in the future design of biomaterials and cell‐membrane mimetic structures.
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159988</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Pooled Nanoparticle Screening Using a Chemical Barcoding Approach</title>
<link>https://hdl.handle.net/1721.1/159987</link>
<description>Pooled Nanoparticle Screening Using a Chemical Barcoding Approach
Vaidya, Katherine; Regan, Michael S; Lin, James; Houle, Jenna; Gupta, Aanchal; Stopka, Sylwia A; Agar, Nathalie YR; Hammond, Paula T; Boehnke, Natalie
We report the development of a small molecule‐based barcoding platform for pooled screening of nanoparticle delivery. Using aryl halide‐based tags (halocodes), we achieve high‐sensitivity detection via gas chromatography coupled with mass spectrometry or electron capture. This enables barcoding and tracking of nanoparticles with minimal halocode concentrations and without altering their physicochemical properties. To demonstrate the utility of our platform for pooled screening, we synthesized a halocoded library of polylactide‐co‐glycolide (PLGA) nanoparticles and quantified uptake in ovarian cancer cells in a pooled manner. Our findings correlate with conventional fluorescence‐based assays. Additionally, we demonstrate the potential of halocodes for spatial mapping of nanoparticles using mass spectrometry imaging (MSI). Halocoding presents an accessible and modular nanoparticle screening platform capable of quantifying delivery of pooled nanocarrier libraries in a range of biological settings.
</description>
<pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159987</guid>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>High‐Throughput Microfluidic‐Mediated Assembly of Layer‐By‐Layer Nanoparticles</title>
<link>https://hdl.handle.net/1721.1/159986</link>
<description>High‐Throughput Microfluidic‐Mediated Assembly of Layer‐By‐Layer Nanoparticles
Pires, Ivan S; Gordon, Ezra; Suh, Heikyung; Irvine, Darrell J; Hammond, Paula T
Surface modification of nanoparticles (NPs) via the layer‐by‐layer (LbL) technique is a promising approach to generate targeted drug delivery vehicles. LbL‐NPs have been successfully used in preclinical models for controlled drug release, tumor and immune cell targeting, improved pharmacokinetics and biodistribution, and controlling cellular trafficking and uptake mechanisms. A simple and scalable synthesis method for LbL‐NPs that can be adapted for clinical translation is of great interest. Here, a new method of polymer deposition onto NPs, enabled through microfluidic (MCF) mixing, is presented. NPs are mixed with polyelectrolytes using commercially available bifurcating mixer MCF cartridges. In addition to increased process robustness, MCF allows for LbL electrostatic assembly using titrated polymer‐to‐NP weight equivalent ratios where no excess polymer is required to achieve a given LbL layering. Under such conditions, no time‐consuming purification is needed, greatly increasing LbL‐NP throughput and avoiding the loss of NPs during purification. The utility of this system is demonstrated using interleukin‐12‐loaded liposomal NPs, which show equivalent efficacy in vitro and in vivo to LbL‐NPs generated via traditional lab‐scale batch‐wise polymer adsorption and tangential flow filtration purification. Moreover, it is shown that MCF can assemble LbL films of various chemistries and on various NP core substrates.
</description>
<pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159986</guid>
<dc:date>2025-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Reaction Pathways, Thermodynamics, and Kinetics of Cyclopentanone Oxidation Intermediates: A Theoretical Approach</title>
<link>https://hdl.handle.net/1721.1/159985</link>
<description>Reaction Pathways, Thermodynamics, and Kinetics of Cyclopentanone Oxidation Intermediates: A Theoretical Approach
Khanniche, Sarah; Green, William H
Despite the promising role of cyclopentanone as a bioderived fuel, thermodynamic and kinetic data are lacking for low-temperature oxidation regimes. In this study, ab initio calculations at the CBS-QB3 level explore the subsequent reactivity that results from O2-addition to 2- and 3-oxo cyclopentyl radicals, including expected reaction classes such as intra-H migration, HO2-elimination, cyclic ether formation, and β-scission along with their thermodynamic parameters. Some of the rates are similar to the analogous reactions of cyclopentane, but some other reactions of cyclopentanone are very different. The carbonyl group hinders H-migration from the α′ position but promotes HO2-elimination. Enol peroxy formation from some hydroperoxy alkyl radicals of cyclopentanone is unexpectedly important, and so is HO2-elimination by β-scission. Our calculations also indicated that at engine-relevant conditions the α-RO2 prefers to go back to the reactants, 2-oxo cyclopentyl radical and O2. Therefore, the reactions resulting from HO2-addition to 2-oxo cyclopentyl are also provided. The lowest barrier channel identified on the singlet surface corresponds to an unexpected intra OH-migration path concerted with ring opening. This valuable information will advance the construction of improved kinetic models for the oxidation of cyclopentanone.
</description>
<pubDate>Wed, 18 Sep 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159985</guid>
<dc:date>2019-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of surface species and homogeneous reactions on rates and selectivity in ethane oxidation on oxide catalysts</title>
<link>https://hdl.handle.net/1721.1/159983</link>
<description>Effects of surface species and homogeneous reactions on rates and selectivity in ethane oxidation on oxide catalysts
Liu, Yilang; McGill, Charles J; Green, William H; Deshlahra, Prashant
Selective alkane oxidations on metal oxide catalysts involve complex mechanisms with multiple reactions in series and parallel, different types of reduced and oxidized surface species, and potential contributions from gas-phase reactions. Here, kinetics and thermodynamics of elementary steps involved in C2H6-O2 reactions on SiO2-supported small vanadium oxide domains are determined using density functional theory. These surface reactions together with gas-phase mechanisms are incorporated in kinetic simulations to determine how surface and gaseous reactions interact and contribute to rates and selectivity. The results show that gas-phase reactions within pore volumes in contact with the catalyst contribute significantly to C2H6 activation rates, even at conditions where gas-phase reactions in empty volumes without catalyst are negligible. The majority of C2H6 activations occur on the surface, via H abstraction by vanadium oxo species present at terminal lattice oxygens. The gas-phase activations via H-abstraction by OH radicals also exhibit significant contributions. The reduced centers formed by reactions at vanadium oxo species are re-oxidized rapidly and, therefore, are present in very small concentrations at reaction conditions. The re-oxidation steps lead to the formation of HO2 radicals and surface peroxo species that are also rapidly consumed and are present in small concentrations. The peroxo species preferentially convert C2H4 to its epoxide product and influence selectivity even at low concentrations. The gas-phase reactions decrease the concentrations of peroxo species and improve selectivity slightly. The effects of reaction conditions and catalyst site density provide further insights into how factors beyond conversions at lattice oxygens influence rates and selectivity in alkane oxidation reactions of significant industrial importance.
</description>
<pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159983</guid>
<dc:date>2021-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Curation and Storage of Astromaterials: A Magnetic Perspective</title>
<link>https://hdl.handle.net/1721.1/159982</link>
<description>Curation and Storage of Astromaterials: A Magnetic Perspective
Gattacceca, Jérôme; Maurel, Clara; Hutzler, Aurore; Rochette, Pierre; Weiss, Benjamin P.
In this exciting new era of sample return space missions and improved detection of meteorite falls, the curation of pristine astromaterials has become a central concern. Well-preserved meteorites and returned extraterrestrial samples are absolutely unique sources of information regarding the formation and evolution of our solar system. Among their numerous informative properties, the paleomagnetic records of meteorites and returned samples (i.e., their natural remanent magnetization, NRM) provide invaluable insight into the physical characteristics of the first planetesimals, the Moon, and Mars, and into the environment in which these objects formed, as well as their long-term evolution. Unfortunately, the NRM of a rock is probably one of the easiest and fastest properties to alter, and today, very little action is taken in most curation facilities and collections to avoid magnetic contamination. In particular, contact with magnets and other unidentified, yet omnipresent, sources of magnetic fields is responsible for erasing the 4.5-billion-year-old paleomagnetic records of &gt;60% of meteorites in collections. In this paper, we describe the principal sources of magnetic contamination that are found in curation or storage facilities and propose simple preventive solutions and monitoring strategies. We recommend that strict measures be taken at the very least for the most precious samples (i.e., returned samples, meteorite falls, and rare meteorite finds). We also encourage curators to raise awareness among their regular providers regarding the unfortunate and widespread use of magnets for meteorite testing.
</description>
<pubDate>Tue, 08 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159982</guid>
<dc:date>2025-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Detailed Kinetic Modeling for the Pyrolysis of a Jet A Surrogate</title>
<link>https://hdl.handle.net/1721.1/159981</link>
<description>Detailed Kinetic Modeling for the Pyrolysis of a Jet A Surrogate
Vermeire, Florence H; Aravindakshan, Syam Ukkandath; Jocher, Agnes; Liu, Mengjie; Chu, Te-Chun; Hawtof, Ryan E; Van de Vijver, Ruben; Prendergast, Matthew B; Van Geem, Kevin M; Green, William H
Fuel microchannels for regenerative cooling are receiving increased attention in advanced aviation technologies. Those microchannels allow heat integration between the endothermic cracking of the jet fuels and their subsequent combustion. In this work, a detailed elementary-step kinetic model is developed to gain insight into the cracking chemistry of a Jet A surrogate (n-dodecane, isooctane, n-propyl benzene, and 1,3,5-trimethylbenzene), which allows for further optimization of those aviation technologies. A dedicated procedure is described for the automated generation of kinetic models for multi-component mixtures with the open-source Reaction Mechanism Generator (RMG) software. The full kinetic model is validated against experimental measurements in multiple reactor geometries, at various experimental conditions, including both a surrogate mixture and commercial Jet A. The experimental data include new measurements for the pyrolysis of a Jet A surrogate in a tubular reactor with detailed product analysis using comprehensive 2D GC. The good performance of the kinetic model for data from a broad range of experimental conditions demonstrates the advantage of a kinetic model with detailed chemistry over empirical kinetic models that are limited in their applicability range. Further analysis of the important chemistry in the kinetic model shows that it is essential to account for cross-reactions between the different surrogate components.
</description>
<pubDate>Thu, 20 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159981</guid>
<dc:date>2022-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>EnzymeMap: curation, validation and data-driven prediction of enzymatic reactions</title>
<link>https://hdl.handle.net/1721.1/159980</link>
<description>EnzymeMap: curation, validation and data-driven prediction of enzymatic reactions
Heid, Esther; Probst, Daniel; Green, William H; Madsen, Georg KH
Enzymatic reactions are an ecofriendly, selective, and versatile addition to, and sometimes even an alternative to, organic reactions for the synthesis of chemical compounds such as pharmaceuticals or fine chemicals. To identify suitable reactions, computational models to predict the activity of enzymes on non-native substrates, to perform retrosynthetic pathway searches, or to predict the outcomes of reactions including regio- and stereoselectivity are becoming increasingly important. However, current approaches are substantially hindered by the limited amount of available data, especially if balanced and atom-mapped reactions are needed and if the models feature machine learning components. We therefore constructed a high-quality dataset (EnzymeMap) by developing a large set of correction and validation algorithms for recorded reactions in the literature and showcase its significant positive impact on machine learning models of retrosynthesis, forward prediction, and regioselectivity prediction, outperforming previous approaches by a large margin. Our dataset allows for deep learning models of enzymatic reactions with unprecedented accuracy, and is freely available online.
</description>
<pubDate>Wed, 22 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159980</guid>
<dc:date>2023-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Automated reaction kinetics and network exploration (Arkane): A statistical mechanics, thermodynamics, transition state theory, and master equation software</title>
<link>https://hdl.handle.net/1721.1/159979</link>
<description>Automated reaction kinetics and network exploration (Arkane): A statistical mechanics, thermodynamics, transition state theory, and master equation software
Dana, Alon Grinberg; Johnson, Matthew S; Allen, Joshua W; Sharma, Sandeep; Raman, Sumathy; Liu, Mengjie; Gao, Connie W; Grambow, Colin A; Goldman, Mark J; Ranasinghe, Duminda S; Gillis, Ryan J; Payne, A Mark; Li, Yi‐Pei; Dong, Xiaorui; Spiekermann, Kevin A; Wu, Haoyang; Dames, Enoch E; Buras, Zachary J; Vandewiele, Nick M; Yee, Nathan W; Merchant, Shamel S; Buesser, Beat; Class, Caleb A; Goldsmith, Franklin; West, Richard H; Green, William H
The open-source statistical mechanics software described here, Arkane–Automated Reaction Kinetics and Network Exploration–facilitates computations of thermodynamic properties of chemical species, high-pressure limit reaction rate coefficients, and pressure-dependent rate coefficients over multi-well molecular potential energy surfaces (PES), including the effects of collisional energy transfer on phenomenological kinetics. Arkane can use estimates to fill in information for molecules or reactions where quantum chemistry information is missing. The software solves the internal energy master equation for complex unimolecular reaction systems. Inputs to the software include converged electronic structure computations performed by the user using a variety of supported software packages (Gaussian, Molpro, Orca, TeraChem, Q-Chem, Psi4). The software outputs high-pressure limit rate coefficients and pressure-dependent phenomenological rate coefficients, as well as computed thermodynamic properties (enthalpy, entropy, and constant pressure heat capacity) with added energy corrections. Some of the key features of Arkane include treatment of 1D, 2D, or ND hindered internal rotation modes, treatment of free internal rotation modes, quantum tunneling effect consideration, transition state theory (TST) and Rice-Ramsperger-Kassel-Marcus (RRKM) rate coefficient computations, master equation solution with four implemented methods, inverse-Laplace transform of high-pressure limit rate coefficients into the energy domain, energy corrections based on bond-additivity or isodesmic reactions, automated and efficient PES exploration, and PES sensitivity analysis. The present work describes the design of Arkane, how it should be used, and refers to the theory that it employs. Arkane is distributed via the RMG-Py software suite (https://github.com/ReactionMechanismGenerator/RMG-Py).
</description>
<pubDate>Mon, 03 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159979</guid>
<dc:date>2023-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>A wide range experimental and kinetic modeling study of the oxidation of 2,3-dimethyl-2-butene: Part 1</title>
<link>https://hdl.handle.net/1721.1/159978</link>
<description>A wide range experimental and kinetic modeling study of the oxidation of 2,3-dimethyl-2-butene: Part 1
Liang, Jinhu; He, Ruining; Nagaraja, Shashank S; Mohamed, A Abd El-Sabor; Lu, Haitao; Almarzooq, Yousef M; Dong, Xiaorui; Mathieu, Olivier; Green, William H; Petersen, Eric L; Sarathy, S Mani; Curran, Henry J
2,3-Dimethyl-2-butene (TME) is a potential fuel additive with high research octane number (RON) and octane sensitivity (S), which can improve internal combustion engine performance and efficiency. However, the combustion characteristics of TME have not been comprehensively investigated. Thus, it is essential to study the combustion characteristics of TME and construct a detailed chemical kinetic model to describe its combustion. In this paper, two high-pressure shock tubes and a constant-volume reactor are used to measure ignition delay times and laminar flame speeds of TME oxidation. The ignition delay times were measured at equivalence ratios of 0.5, 1.0, and 2.0 in “air”, at pressures of 5 and 10 bar, in the temperature range of 950–1500 K. Flame speeds of the TME/“air” mixtures were measured at atmospheric pressure, at a temperature of 325 K, for equivalence ratios ranging from 0.78 to 1.31. Two detailed kinetic mechanisms were constructed independently using different methodologies; the KAUST TME mechanism was constructed based on NUIGMech1.1, and the MIT TME mechanism was built using the Reaction Mechanism Generator (RMG). Both mechanisms were used to simulate the experimental results using Chemkin Pro. In the present work, reaction flux and sensitivity analyses were performed using the KAUST mechanism to determine the critical reactions controlling TME oxidation at the conditions studied.
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159978</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing Uncertainty in Machine Learning for Chemistry</title>
<link>https://hdl.handle.net/1721.1/159977</link>
<description>Characterizing Uncertainty in Machine Learning for Chemistry
Heid, Esther; McGill, Charles J; Vermeire, Florence H; Green, William H
Characterizing uncertainty in machine learning models has recently gained interest in the context of machine learning reliability, robustness, safety, and active learning. Here, we separate the total uncertainty into contributions from noise in the data (aleatoric) and shortcomings of the model (epistemic), further dividing epistemic uncertainty into model bias and variance contributions. We systematically address the influence of noise, model bias, and model variance in the context of chemical property predictions, where the diverse nature of target properties and the vast chemical space give rise to many distinct sources of prediction error. We demonstrate that different sources of error can each be significant in different contexts and must be individually addressed during model development. Through controlled experiments on data sets of molecular properties, we show important trends in model performance associated with the level of noise in the data set, size of the data set, model architecture, molecule representation, ensemble size, and data set splitting. In particular, we show that 1) noise in the test set can limit a model's observed performance when the actual performance is much better, 2) using size-extensive model aggregation structures is crucial for extensive property prediction, and 3) ensembling is a reliable tool for uncertainty quantification and improvement specifically for the contribution of model variance. We develop general guidelines on how to improve an underperforming model when falling into different uncertainty contexts.
</description>
<pubDate>Tue, 20 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159977</guid>
<dc:date>2023-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Comment on ‘Physics-based representations for machine learning properties of chemical reactions’</title>
<link>https://hdl.handle.net/1721.1/159976</link>
<description>Comment on ‘Physics-based representations for machine learning properties of chemical reactions’
Spiekermann, Kevin A; Stuyver, Thijs; Pattanaik, Lagnajit; Green, William H
In a recent article in this journal, van Gerwen et al (2022 Mach. Learn.: Sci. Technol. 3 045005) presented a kernel ridge regression model to predict reaction barrier heights. Here, we comment on the utility of that model and present references and results that contradict several statements made in that article. Our primary interest is to offer a broader perspective by presenting three aspects that are essential for researchers to consider when creating models for chemical kinetics: (1) are the model’s prediction targets and associated errors sufficient for practical applications? (2) Does the model prioritize user-friendly inputs so it is practical for others to integrate into prediction workflows? (3) Does the analysis report performance on both interpolative and more challenging extrapolative data splits so users have a realistic idea of the likely errors in the model’s predictions?
</description>
<pubDate>Fri, 06 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159976</guid>
<dc:date>2023-10-06T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Validation via Rational Dataset Sampling with astartes</title>
<link>https://hdl.handle.net/1721.1/159975</link>
<description>Machine Learning Validation via Rational Dataset Sampling with astartes
Burns, Jackson W; Spiekermann, Kevin A; Bhattacharjee, Himaghna; Vlachos, Dionisios G; Green, William H
Machine Learning (ML) has become an increasingly popular tool to accelerate traditional workflows. Critical to the use of ML is the process of splitting datasets into training, validation, and testing subsets that are used to develop and evaluate models. Common practice in the literature is to assign these subsets randomly. Although this approach is fast and efficient, it only measures a model’s capacity to interpolate. Testing errors from random splits may be overly optimistic if given new data that is dissimilar to the scope of the training set; thus, there is a growing need to easily measure performance for extrapolation tasks. To address this issue, we report astartes, an open-source Python package that implements many similarity- and distance-based algorithms to partition data into more challenging splits. Separate from astartes, users can then use these splits to better assess out-of-sample performance with any ML model of choice. This publication focuses on use-cases within cheminformatics. However, astartes operates on arbitrary vector inputs, so its principles and workflow are generalizable to other ML domains as well. astartes is available via the Python package managers pip and conda and is publicly hosted on GitHub (github.com/JacksonBurns/astartes).
</description>
<pubDate>Sun, 05 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159975</guid>
<dc:date>2023-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Chemprop: A Machine Learning Package for Chemical Property Prediction</title>
<link>https://hdl.handle.net/1721.1/159974</link>
<description>Chemprop: A Machine Learning Package for Chemical Property Prediction
Heid, Esther; Greenman, Kevin P; Chung, Yunsie; Li, Shih-Cheng; Graff, David E; Vermeire, Florence H; Wu, Haoyang; Green, William H; McGill, Charles J
Deep learning has become a powerful and frequently employed tool for the prediction of molecular properties, thus creating a need for open-source and versatile software solutions that can be operated by nonexperts. Among the current approaches, directed message-passing neural networks (D-MPNNs) have proven to perform well on a variety of property prediction tasks. The software package Chemprop implements the D-MPNN architecture and offers simple, easy, and fast access to machine-learned molecular properties. Compared to its initial version, we present a multitude of new Chemprop functionalities such as the support of multimolecule properties, reactions, atom/bond-level properties, and spectra. Further, we incorporate various uncertainty quantification and calibration methods along with related metrics as well as pretraining and transfer learning workflows, improved hyperparameter optimization, and other customization options concerning loss functions or atom/bond features. We benchmark D-MPNN models trained using Chemprop with the new reaction, atom-level, and spectra functionality on a variety of property prediction data sets, including MoleculeNet and SAMPL, and observe state-of-the-art performance on the prediction of water-octanol partition coefficients, reaction barrier heights, atomic partial charges, and absorption spectra. Chemprop enables out-of-the-box training of D-MPNN models for a variety of problem settings in fast, user-friendly, and open-source software.
</description>
<pubDate>Tue, 26 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159974</guid>
<dc:date>2023-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Perspective on automated predictive kinetics using estimates derived from large datasets</title>
<link>https://hdl.handle.net/1721.1/159973</link>
<description>Perspective on automated predictive kinetics using estimates derived from large datasets
Green, William H
A longstanding project of the chemical kinetics community is to predict reaction rates and the behavior of reacting systems, even for systems where there are no experimental data. Many important reacting systems (atmosphere, combustion, pyrolysis, partial oxidations) involve a large number of reactions occurring simultaneously, and reaction intermediates that have never been observed, making this goal even more challenging. Improvements in our ability to compute rate coefficients and other important parameters accurately from first principles, and improvements in automated kinetic modeling software, have partially overcome many challenges. Indeed, in some cases quite complicated kinetic models have been constructed which accurately predicted the results of independent experiments. However, the process of constructing the models, and deciding which reactions to measure or compute ab initio, relies on accurate estimates (and indeed most of the numerical rate parameters in most large kinetic models are estimates). Machine‐learned models trained on large datasets can improve the accuracy of these estimates, and allow a better integration of quantum chemistry and experimental data. The need for continued development of shared (perhaps open‐source) software and databases, and some directions for improvement, are highlighted. As we model more complicated systems, many of the weaknesses of the traditional ways of doing chemical kinetic modeling, and of testing kinetic models, have been exposed, identifying several challenges for future research by the community.
</description>
<pubDate>Tue, 25 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159973</guid>
<dc:date>2024-06-25T00:00:00Z</dc:date>
</item>
<item>
<title>ReactionMechanismSimulator.jl: A modern approach to chemical kinetic mechanism simulation and analysis</title>
<link>https://hdl.handle.net/1721.1/159972</link>
<description>ReactionMechanismSimulator.jl: A modern approach to chemical kinetic mechanism simulation and analysis
Johnson, Matthew S; Pang, Hao‐Wei; Payne, Allen Mark; Green, William H
We present ReactionMechanismSimulator.jl (RMS), a modern differentiable software for the simulation and analysis of chemical kinetic mechanisms, including multiphase systems. RMS has already been applied to problems in combustion, pyrolysis, polymers, pharmaceuticals, catalysis, and electrocatalysis. RMS is written in Julia, making it easy to develop and allowing it to take advantage of Julia's extensive numerical computing ecosystem. In addition to its extensive library of optimized analytic Jacobians, RMS can generate and use Jacobians computed using automatic differentiation and symbolically generated analytic Jacobians. RMS is demonstrated to be faster than Cantera and Chemkin in several benchmarks. RMS also implements an extensive set of features for analyzing chemical mechanisms, including a library of easy-to-call plotting functions, molecular structure resolved flux diagram generation, crash analysis, traditional sensitivity analysis, transitory sensitivity analysis, and an automatic mechanism analysis toolkit. RMS implements efficient adjoint and parallel forward sensitivity analyses. We also demonstrate the ease of adding new features to RMS.
</description>
<pubDate>Fri, 05 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159972</guid>
<dc:date>2024-07-05T00:00:00Z</dc:date>
</item>
<item>
<title>Decarbonizing of power plants by ammonia co-firing: design, techno-economic, and life-cycle analyses</title>
<link>https://hdl.handle.net/1721.1/159971</link>
<description>Decarbonizing of power plants by ammonia co-firing: design, techno-economic, and life-cycle analyses
Deng, Lingyan; Lai, Haoxiang; Zang, Guiyan; Menon, Angiras; Farnsworth, Amanda M; Gencer, Emre; Ghoniem, Ahmed; Green, William H; Stoner, Robert J
This research investigates the decarbonization of India’s electricity grid using ammonia in power plants. It focuses on ammonia produced in Western Australia and transported to India, co-fired with high rank coal, and compared with power plants utilizing carbon capture and sequestration (CCS). The study assesses the overall costs and the life cycle greenhouse gas (LC GHG) emissions for both new plants and retrofits. For 20% gray, blue, and green ammonia, the levelized cost of electricity is 86, 89, and 125 $/MWh, with corresponding LC GHG emissions of 1,234, 1,079, and 1,062 kg CO2e/MWh. Co-firing with green ammonia, though more expensive than blue ammonia, yields lower CO2 emissions. Conversely, reducing the same amount of direct CO2 emissions via CCS costs $84/MWh with LC GHG emissions of 1,227 kg CO2e/MWh. While CCS is cheaper, it results in higher LC GHG emissions. There is a trade-off between cost and emissions across the strategies. Under scenarios with low capacity factors or reduced ammonia production costs, coal-ammonia co-firing could become more economical and greener than CCS. This study provides quantitative insights for policymakers and project developers. However, it is crucial for decision-makers to consider several factors: (1) the potential impact of social resistance to CCS; (2) the time required for large-scale commercialization of CCS technology, which is expected to be significantly longer than the implementation time for a coal-ammonia co-firing decarbonization strategy; (3) the potential of either a CCS or ammonia-coal co-firing strategy to enhance India’s electricity mix, thus contributing to energy security.
</description>
<pubDate>Sun, 18 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159971</guid>
<dc:date>2024-08-18T00:00:00Z</dc:date>
</item>
<item>
<title>Transitory sensitivity in automatic chemical kinetic mechanism analysis</title>
<link>https://hdl.handle.net/1721.1/159964</link>
<description>Transitory sensitivity in automatic chemical kinetic mechanism analysis
Johnson, Matthew S; McGill, Charles J; Green, William H
Detailed chemical kinetic mechanisms are necessary for resolving many important chemical processes. As the chemistry of smaller molecules has become better grounded and quantum chemistry calculations have become cheaper, kineticists have become interested in constructing progressively larger kinetic mechanisms to model increasingly complex chemical processes. These large kinetic mechanisms prove incredibly difficult to refine and time‐consuming to interpret. Traditional sensitivity analysis on a large mechanism can range from inconvenient to practically impossible without special techniques to reduce the computational cost. We first present a new time‐local sensitivity analysis we term transitory sensitivity analysis. Transitory sensitivity analysis is demonstrated in an example to accurately identify traditionally sensitive reactions at an 18,000x speedup over traditional sensitivities. By fusing transitory sensitivity analysis with more traditional time‐local branching, pathway, and cluster analyses, we develop an algorithm for efficient automatic mechanism analysis. This automatic mechanism analysis at a time point is able to identify the reactions a target is most sensitive to using transitory sensitivity analysis and then propose hypotheses for why the reaction might be sensitive using branching, pathway, and cluster analyses. We implement these algorithms within the reaction mechanism simulator (RMS) package, which enables us to report the automatic mechanism analysis results in highly readable text formats and in molecular flux diagrams.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159964</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>pKa prediction in non-aqueous solvents</title>
<link>https://hdl.handle.net/1721.1/159963</link>
<description>pKa prediction in non-aqueous solvents
Zheng, Jonathan W; Al Ibrahim, Emad; Kaljurand, Ivari; Leito, Ivo; Green, William H
Acid dissociation constants (pKa) are widely measured and studied, most typically in water. Comparatively few datasets and models for non-aqueous pKa values exist. In this work, we demonstrate how the pKa in one solvent can be accurately determined using reference data in another solvent, corrected by solvation energy calculations from the COSMO-RS method. We benchmark this approach in 10 different solvents, and find that pKa values calculated in six solvents deviate from experimental data on average by less than 1 pKa unit. We observe comparable performance on a more diverse test set including amino acids and drug molecules, with higher error for large molecules. The model performance in four other solvents is worse, with one MAE exceeding 3 pKa units; we discuss how such errors arise due to both model error and inconsistency in obtaining experimental data. Finally, we demonstrate how this technique can be used to estimate the proton transfer energy between different solvents, and use this to report a value of the proton’s solvation energy in formamide, a quantity that does not have a consensus value in the literature.
</description>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159963</guid>
<dc:date>2024-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Machine Learning and Large Language Models to Advance Exploration of Electrochemical Reactions</title>
<link>https://hdl.handle.net/1721.1/159962</link>
<description>Integrating Machine Learning and Large Language Models to Advance Exploration of Electrochemical Reactions
Zheng, Zhiling; Florit, Federico; Jin, Brooke; Wu, Haoyang; Li, Shih‐Cheng; Nandiwale, Kakasaheb Y; Salazar, Chase A; Mustakis, Jason G; Green, William H; Jensen, Klavs F
Electrochemical C−H oxidation reactions offer a sustainable route to functionalize hydrocarbons, yet identifying suitable substrates and optimizing synthesis remain challenging. Here, we report an integrated approach combining machine learning and large language models to streamline the exploration of electrochemical C−H oxidation reactions. Utilizing a batch rapid screening electrochemical platform, we evaluated a wide range of reactions, initially classifying substrates by their reactivity, while LLMs text‐mined literature data to augment the training set. The resulting ML models for reactivity prediction achieved high accuracy (&amp;gt;90 %) and enabled virtual screening of a large set of commercially available molecules. To optimize reaction conditions for selected substrates, LLMs were prompted to generate code that iteratively improved yields. This human‐AI collaboration proved effective, efficiently identifying high‐yield conditions for 8 drug‐like substances or intermediates. Notably, we benchmarked the accuracy and reliability of 12 different LLMs (including the LLaMA series, Claude series, OpenAI o1, and GPT‐4) on code generation and function calling related to ML, based on natural language prompts given by chemists, to showcase their potential for accelerating research across four diverse tasks. In addition, we collected an experimental benchmark dataset comprising 1071 reaction conditions and yields for electrochemical C−H oxidation reactions.
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159962</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamic and Chemical Kinetic Parameters in Ammonia Oxidation: A Comparison of Recent Studies and Parameter Recommendations</title>
<link>https://hdl.handle.net/1721.1/159961</link>
<description>Thermodynamic and Chemical Kinetic Parameters in Ammonia Oxidation: A Comparison of Recent Studies and Parameter Recommendations
Grinberg Dana, Alon; Kaplan, Kfir; Keslin, Michal; Cao, Chuangchuang; Green, William H
Ammonia is a promising energy storage vector for renewable hydrogen and could become an essential part of the future energy mix. To efficiently utilize ammonia as a fuel, overcome its relatively low reactivity and high tailpipe emissions, and optimize the ratio of different additives, it is crucial to develop accurate chemical kinetic predictive abilities for this system. In this study, we review and compare thermo-kinetic parameters from 19 chemical kinetic reaction mechanisms published in the five-year period of 2018–2023. This comparison reveals a concerning inconsistency in the thermodynamic parameters used for many species in the H/N and H/N/O subsets. One species was even double-counted (having two distinct labels with two similar sets of reactions) in six of these recent mechanisms. Twelve reactions were identified for which the reported rate coefficient deviates by 3 orders of magnitude or more at 1000 K among the reviewed sources. Not all parameters in the literature mechanisms are trackable and properly cited. Ammonia modeling suffers from the “many-model” problem, with significant inconsistency in the parameter values used by different studies. The present work highlights some of these concerns and attempts to advance the current state of NH3 oxidation modeling by suggesting a coherent set of parameters justified by other work in the literature or by quantum chemistry calculations reported here. It is recommended that future studies of ammonia modeling properly justify the thermokinetic parameters they use (especially deviations from established values), make all parameter sources trackable, and provide a glossary of chemical structures for all species in the suggested reaction mechanism.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159961</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Formation of Sandwich Complexes between Lanthanides and Chlorophylls Recovers Photosynthetic Activity and Imparts Crop Resistance to UV Stress via Single-Nanodose Seed Treatment</title>
<link>https://hdl.handle.net/1721.1/159877</link>
<description>Formation of Sandwich Complexes between Lanthanides and Chlorophylls Recovers Photosynthetic Activity and Imparts Crop Resistance to UV Stress via Single-Nanodose Seed Treatment
Rizzo, Giorgio; Marelli, Benedetto
Lanthanides (Lns) have been used as fertilizers in several countries over the past 50 years, yet their interaction with plant metabolism remains largely unknown, with only a few confirmed biological roles, such as cofactors in specific dehydrogenases found in methylotrophic organisms. This study investigates the interplay between Ln ions and chlorophylls (Chls), showing that, throughout the Ln series, Ln ions replace Mg2+ at the center of the pigment structures through the formation of sandwich-like Ln-porphyrin complexes. When applied as seed treatment to four staple crops (i.e., barley, chickpea, corn, and soybean), Lns enhance seedlings’ growth. Mobility of lanthanum from the rhizosphere to plants is favored when the metal is in neutral or weakly charged forms, as studied by changing the salt used (i.e., LaCl3, La(NO3)3, LaPO4, La2(CO3)3, [LaEDTA(H2O)3]K, or [LaDOTA(H2O)]K). These results highlight the role of the Casparian strip in modulating the apoplastic movement of Lns in the stele and suggest that plants possess an uptake system with binding strengths comparable to or stronger than La-DOTA complexes. Additionally, when complexed with trehalose in seed treatments, Ln3+ protected pigments from UV stress, enhancing the resilience of the four staple crops to abiotic stressors.
</description>
<pubDate>Mon, 07 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159877</guid>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</item>
<item>
<title>Use of Large Language Models for Rapid Quantitative Feedback in Case-Based Learning: A Pilot Study</title>
<link>https://hdl.handle.net/1721.1/159876</link>
<description>Use of Large Language Models for Rapid Quantitative Feedback in Case-Based Learning: A Pilot Study
Qian, Carolyn; Gao, Christina; Park, Sang-O.; Gim, Haelynn; Hou, Kelly; Cook, Benjamin; Le, Jasmin; Stretton, Brandon; Maddison, John; McCoy, Liam; Goh, Rudy; Arnold, Matthew; Reda, Haatem; Kaplan, Tamara; Gheihman, Galina
Abstract Large language models (LLMs) may be able to deliver interactive case-based content and score student interactions with such cases. In this study, GPT-4o demonstrated a high correlation with expert scorers in the evaluation of medical students’ interactions with cases. A difference between LLM scores and expert scorers was corrected through calibration.
</description>
<pubDate>Fri, 28 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159876</guid>
<dc:date>2025-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>Generalizable, fast, and accurate DeepQSPR with fastprop</title>
<link>https://hdl.handle.net/1721.1/159875</link>
<description>Generalizable, fast, and accurate DeepQSPR with fastprop
Burns, Jackson W; Green, William H
Quantitative Structure–Property Relationship studies (QSPR), often referred to interchangeably as QSAR, seek to establish a mapping between molecular structure and an arbitrary target property. Historically this was done on a target-by-target basis with new descriptors being devised to specifically map to a given target. Today software packages exist that calculate thousands of these descriptors, enabling general modeling typically with classical and machine learning methods. Also present today are learned representation methods in which deep learning models generate a target-specific representation during training. The former requires less training data and offers improved speed and interpretability, while the latter offers excellent generality; the intersection of the two remains under-explored. This paper introduces fastprop, a software package and general Deep-QSPR framework that combines a cogent set of molecular descriptors with deep learning to achieve state-of-the-art performance on datasets ranging from tens to tens of thousands of molecules. fastprop provides both a user-friendly Command Line Interface and a highly interoperable set of Python modules for the training and deployment of feedforward neural networks for property prediction. This approach yields improvements in speed and interpretability over existing methods while statistically equaling or exceeding their performance across most of the tested benchmarks. fastprop is designed with Research Software Engineering best practices and is free and open source, hosted at github.com/jacksonburns/fastprop.
</description>
<pubDate>Tue, 13 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159875</guid>
<dc:date>2025-05-13T00:00:00Z</dc:date>
</item>
<item>
<title>Supercoiling-mediated feedback rapidly couples and tunes transcription</title>
<link>https://hdl.handle.net/1721.1/159874</link>
<description>Supercoiling-mediated feedback rapidly couples and tunes transcription
Johnstone, Christopher P; Galloway, Kate E
Transcription induces a wave of DNA supercoiling, altering the binding affinity of RNA polymerases and reshaping the biochemical landscape of gene regulation. As supercoiling rapidly diffuses, transcription dynamically reshapes the regulation of proximal genes, forming a complex feedback loop. However, a theoretical framework is needed to integrate biophysical regulation with biochemical transcriptional regulation. To investigate the role of supercoiling-mediated feedback within multi-gene systems, we model transcriptional regulation under the influence of supercoiling-mediated polymerase dynamics, allowing us to identify patterns of expression that result from physical inter-gene coupling. We find that gene syntax (the relative ordering and orientation of genes) defines the expression profiles, variance, burst dynamics, and inter-gene correlation of two-gene systems. Furthermore, supercoiling can enhance or weaken biochemical regulation. Our results suggest that supercoiling couples behavior between neighboring genes, providing a regulatory mechanism that tunes transcriptional variance in engineered gene networks and explains the behavior of co-localized native circuits.
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159874</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Differential Equation Solvers: Limitations and Fast-Forwarding</title>
<link>https://hdl.handle.net/1721.1/159873</link>
<description>Quantum Differential Equation Solvers: Limitations and Fast-Forwarding
An, Dong; Liu, Jin-Peng; Wang, Daochen; Zhao, Qi
We study the limitations and fast-forwarding of quantum algorithms for linear ordinary differential equation (ODE) systems with a particular focus on non-quantum dynamics, where the coefficient matrix in the ODE is not anti-Hermitian or the ODE is inhomogeneous. On the one hand, for generic linear ODEs, by proving worst-case lower bounds, we show that quantum algorithms suffer from computational overheads due to two types of “non-quantumness”: real part gap and non-normality of the coefficient matrix. We then show that homogeneous ODEs in the absence of both types of “non-quantumness” are equivalent to quantum dynamics, and reach the conclusion that quantum algorithms for quantum dynamics work best. To obtain these lower bounds, we propose a general framework for proving lower bounds on quantum algorithms that are amplifiers, meaning that they amplify the difference between a pair of input quantum states. On the other hand, we show how to fast-forward quantum algorithms for solving special classes of ODEs which leads to improved efficiency. More specifically, we obtain exponential improvements in both T and the spectral norm of the coefficient matrix for inhomogeneous ODEs with efficiently implementable eigensystems, including various spatially discretized linear evolutionary partial differential equations. We give fast-forwarding algorithms that are conceptually different from existing ones in the sense that they neither require time discretization nor solving high-dimensional linear systems.
</description>
<pubDate>Wed, 02 Jul 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159873</guid>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Membrane Charge Effects on Solute Transport in Nanofiltration: Experiments and Molecular Dynamics Simulations</title>
<link>https://hdl.handle.net/1721.1/159872</link>
<description>Membrane Charge Effects on Solute Transport in Nanofiltration: Experiments and Molecular Dynamics Simulations
Liu, Suwei; Foo, Zihao; Lienhard, John H; Keten, Sinan; Lueptow, Richard M.
Polyamide membranes, such as nanofiltration (NF) membranes, are widely used for water purification. However, the mechanisms of solute transport and solute rejection due to solute charge interactions with the membrane remain unclear at the molecular level. Here, we use molecular dynamics simulations to examine the transport of single-solute feeds through charged nanofiltration membranes with different membrane charge concentrations of COO− and NH2+, resulting from the deprotonation or protonation of polymeric end groups according to the pH level that the membrane experiences. The results show that Na+ and Cl− solute ions are better rejected when the membrane has a higher concentration of negatively charged groups, corresponding to a higher pH, whereas CaCl2 is well rejected at all pH levels studied. These results are consistent with those of experiments performed at the same pH conditions as the simulation setup. Moreover, solute transport behavior depends on the membrane functional group distribution. When COO− functional groups are concentrated at the membrane feed surface, ion permeation into the membrane is reduced. Counter-ions tend to associate with charged functional groups, while co-ions seem to pass by the charged groups more easily. In addition, steric effects play a role when ions of opposite charge cluster in pores of the membrane. This study reveals solute transport and rejection mechanisms related to membrane charge and provides insights into how membranes might be designed to achieve specific desired solute rejection.
</description>
<pubDate>Wed, 18 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159872</guid>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Food Grade Synthesis of Hetero-Coupled Biflavones and 3D-Quantitative Structure–Activity Relationship (QSAR) Modeling of Antioxidant Activity</title>
<link>https://hdl.handle.net/1721.1/159871</link>
<description>Food Grade Synthesis of Hetero-Coupled Biflavones and 3D-Quantitative Structure–Activity Relationship (QSAR) Modeling of Antioxidant Activity
Zheng, Hongling; Yang, Xin; Zhang, Qiuyu; Toy, Joanne Yi Hui; Huang, Dejian
Biflavonoids are a unique subclass of dietary polyphenolic compounds known for their diverse bioactivities. Despite these benefits, biflavonoids remain largely underexplored due to their limited natural availability and the harsh conditions required for their synthesis, which restricts broader research and application in functional foods and nutraceuticals. To address this gap, we synthesized a library of rare biflavonoids using a radical–nucleophile coupling reaction previously reported by our group. The food grade coupling reaction in weakly alkaline water at room temperature led to isolation of 28 heterocoupled biflavones from 11 monomers, namely 3′,4′-dihydroxyflavone, 5,3′,4′-trihydroxyflavone, 6,3′,4′-trihydroxyflavone, 7,3′,4′-trihydroxyflavone, diosmetin, chrysin, acacetin, genistein, biochanin A, and wogonin. The structures of the dimers are characterized by nuclear magnetic resonance spectroscopy (NMR) and high-resolution mass spectrometry (HRMS). In addition, we evaluated the antioxidant potential of these biflavones using a DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging assay; the DPPH value ranges from 0.75 to 1.82 mM of Trolox/mM of sample across the 28 synthesized dimers. Additionally, a three-dimensional quantitative structure–activity relationship (3D-QSAR) analysis was conducted to identify structural features associated with enhanced antioxidant activity. The partial least squares (PLS) regression QSAR model showed acceptable r2 = 0.936 and q2 = 0.869. Additionally, the average local ionization energy (ALIE), electrostatic potential (ESP), Fukui index (F-), and electron density (ED) were determined to identify the key structural moiety that was capable of donating electrons to neutralize reactive oxygen species.
</description>
<pubDate>Mon, 16 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159871</guid>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Project-Based Learning at Universities: A Sustainable Approach to Renewable Energy in Latin America—A Case Study</title>
<link>https://hdl.handle.net/1721.1/159870</link>
<description>Project-Based Learning at Universities: A Sustainable Approach to Renewable Energy in Latin America—A Case Study
Pastor, Miguel Antonio Soplin; Cervantes-Marreros, Melany Dayana; Cubas-Pérez, José Dilmer; Reategui-Apagueño, Luis Alfredo; Tito-Pezo, David; Piña-Rimarachi, Jhim Max; Vasquez-Perez, Cesar Adolfo; Correa-Vasquez, Claudio Leandro; Soplin Rios, Jose Antonio; del Pino, Lisveth Flores; Botelho Junior, Amilton Barbosa
New teaching methods are essential to prepare 21st-century engineers for sustainable challenges. This study used project-based learning to evaluate the energy potential of water channels in fish farms in Loreto, Peru. Chemical engineering students applied theory to practice, enhancing skills like field data collection and technical assessment. The results show a practical potential of 18.37 kW and a theoretical potential of 84.19 kW, enough to power 37–244 households. This approach not only highlights renewable energy opportunities but also demonstrates the effectiveness of connecting theory and practice in real-world contexts. Despite simplified calculations, this project significantly impacts engineering education in Latin America, serving as an example of successful learning and inspiring innovative teaching techniques. All of the students (100%) agreed that the project helped develop practical skills and problem-solving capabilities, increased their motivation, and provided training relevant to their professional lives.
</description>
<pubDate>Sat, 14 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159870</guid>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetotransport Measurements in Overdoped Mn:Bi2Te3 Thin Films</title>
<link>https://hdl.handle.net/1721.1/159869</link>
<description>Magnetotransport Measurements in Overdoped Mn:Bi2Te3 Thin Films
Singh, Angadjit; Kamboj, Varun S.; Barnes, Crispin H. W.; Hesjedal, Thorsten
Introducing magnetic dopants into topological insulators (TIs) provides a pathway to realizing novel quantum phenomena, including the quantum anomalous Hall effect (QAHE) and axionic states. One of the most commonly used 3d transition metal dopants is Mn, despite its known tendency to be highly mobile and to cause phase segregation. In this study, we present a detailed magnetotransport investigation of Mn-overdoped Bi2Te3 thin films using field-effect transistor architectures. Building on our previous structural investigations of these samples, we examine how high Mn content influences their electronic transport properties. From our earlier studies, we know that high Mn doping concentrations lead to the formation of secondary phases, which significantly alter weak antilocalization behavior and suppress topological surface transport. To probe the gate response of these doped films over extended areas, we fabricate field-effect transistor structures, and we observe uniform electrostatic control of conduction across the magnetic phase. Inspired by recent developments in intrinsic topological systems such as the MnTe-Bi2Te3 septuple-layer compounds, we explore the influence of embedded ferromagnetic chalcogenide inclusions as an alternative route to engineer magnetic topological states and potentially expand the operational temperature range of QAHE-enabled devices.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159869</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Ensemble-Based Biometric Verification: Defending Against Multi-Strategy Deepfake Image Generation</title>
<link>https://hdl.handle.net/1721.1/159868</link>
<description>Ensemble-Based Biometric Verification: Defending Against Multi-Strategy Deepfake Image Generation
Zen, Hilary; Wagh, Rohan; Wanderley, Miguel; Bicalho, Gustavo; Park, Rachel; Sun, Megan; Palacios, Rafael; Carvalho, Lucas; Rinaldo, Guilherme; Gupta, Amar
Deepfake images, synthetic images created using digital software, continue to present a serious threat to online platforms. This is especially relevant for biometric verification systems, as deepfakes that attempt to bypass such measures increase the risk of impersonation, identity theft and scams. Although research on deepfake image detection has provided many high-performing classifiers, many of these commonly used detection models lack generalizability across different methods of deepfake generation. For companies and governments fighting identity fraud, a lack of generalization is challenging, as malicious actors may use a variety of deepfake image-generation methods available through online wrappers. This work explores whether combining multiple classifiers into an ensemble model can improve generalization without losing performance across different generation methods. It also considers current methods of deepfake image generation, with a focus on publicly available and easily accessible methods. We compare our framework against its underlying models to show how companies can better respond to emerging deepfake generation methods.
</description>
<pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159868</guid>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Are We Satisfied with the Achievements of New Eco-City Construction in China? A Case Study of the Sino-Singapore Tianjin Eco-City</title>
<link>https://hdl.handle.net/1721.1/159867</link>
<description>Are We Satisfied with the Achievements of New Eco-City Construction in China? A Case Study of the Sino-Singapore Tianjin Eco-City
Sun, Xuan; Sun, Tao; Hou, Jingchuan; Yue, Zhuoruo; Li, Xiaomeng
With the goal of sustainable urbanization, eco-cities have garnered significant global attention in recent decades. Unlike eco-city renovation or renewal, the construction of a new eco-city represents a comprehensive urbanization process that integrates environmental sustainability with livability. To evaluate the outcomes of new eco-city construction in China, this study employs a dual approach combining objective achievements and residents’ subjective satisfaction to systematically examine the Sino-Singapore Tianjin Eco-City. The analysis encompasses five dimensions: environmental amenity, life safety, residential functionality, traffic capability, and economic well-being, with the relative weights of specific indicators determined through the entropy method, expert scoring, and analytic hierarchy process. The findings reveal that based on objective indicators, the eco-city’s overall performance nearly doubled during its first phase of development, with life safety showing the most notable improvements. However, subjective assessments revealed that overall resident satisfaction remained below 70%, with residential functionality receiving the highest rating. The annual progress of the eco-city did not consistently align with residents’ needs, and no clear correlation was found between the eco-city’s current state and public sentiment. For sustainable development, the eco-city must address its shortcomings and better cater to residents’ demands across various dimensions through targeted and effective strategies.
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159867</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Smoothing Techniques for Improving COVID-19 Time Series Forecasting Across Countries</title>
<link>https://hdl.handle.net/1721.1/159866</link>
<description>Smoothing Techniques for Improving COVID-19 Time Series Forecasting Across Countries
Zbezhkhovska, Uliana; Chumachenko, Dmytro
Accurate forecasting of COVID-19 case numbers is critical for timely and effective public health interventions. However, epidemiological data’s irregular and noisy nature often undermines the predictive performance. This study examines the influence of four smoothing techniques—the rolling mean, the exponentially weighted moving average, a Kalman filter, and seasonal–trend decomposition using Loess (STL)—on the forecasting accuracy of four models: LSTM, the Temporal Fusion Transformer (TFT), XGBoost, and LightGBM. Weekly case data from Ukraine, Bulgaria, Slovenia, and Greece were used to assess the models’ performance over short- (3-month) and medium-term (6-month) horizons. The results demonstrate that smoothing enhanced the models’ stability, particularly for neural architectures, and the model selection emerged as the primary driver of predictive accuracy. The LSTM and TFT models, when paired with STL or the rolling mean, outperformed the others in their short-term forecasts, while XGBoost exhibited greater robustness over longer horizons in selected countries. An ANOVA confirmed the statistically significant influence of the model type on the MAPE (p = 0.008), whereas the smoothing method alone showed no significant effect. These findings offer practical guidance for designing context-specific forecasting pipelines adapted to epidemic dynamics and variations in data quality.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159866</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/159863</link>
<description>Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning
Seurin, Paul; Shirvan, Koroush
Optimizing the fuel cycle cost through the optimization of nuclear reactor core loading patterns (LPs) involves multiple objectives and constraints, leading to a vast number of candidate solutions that cannot be explicitly solved. To advance the state of the art in core reload patterns, we have developed methods based on deep Reinforcement Learning (RL) for both single- and multi-objective optimization. Our previous research laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. On the other hand, Stochastic Optimization (SO) approaches are commonly used in the literature, but there is no rigorous explanation that shows which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO) against the most commonly used SO-based methods: Genetic Algorithm, Parallel Simulated Annealing with mixing of states, and Tabu Search, as well as an ensemble-based method, i.e. the Prioritized replay Evolutionary and Swarm Algorithm. We found that the LP scenarios derived in this paper are amenable to a global search to identify promising research directions rapidly but then need to transition into a local search to exploit these directions efficiently and prevent getting stuck in local optima. PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global search method and a local search method. Subsequently, we compared all algorithms against PPO in long runs, which exacerbated the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO compared to the other considered algorithms.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159863</guid>
</item>
<item>
<title>Raman Hyperspectroscopy and Chemometric Analysis of Blood Serum for Diagnosing Celiac Disease in Adults</title>
<link>https://hdl.handle.net/1721.1/159861</link>
<description>Raman Hyperspectroscopy and Chemometric Analysis of Blood Serum for Diagnosing Celiac Disease in Adults
Al-Hetlani, Entesar; Almehmadi, Lamyaa M.; Lednev, Igor K.
Celiac disease (CD) is a chronic autoimmune disorder triggered by an abnormal immune response to gluten, a protein found in wheat, barley, and rye. Current diagnostic methods, including serological assessments and biopsies, can be challenging due to the disease’s heterogeneous nature, creating a need for a reliable, noninvasive diagnostic approach. In this study, we aimed to extend the Raman peak area ratios approach to the adult population. However, our findings indicate no significant differences in Raman peak area ratios between healthy and diseased adults based on blood serum samples. Nevertheless, a genetic algorithm combined with partial least squares discriminant analysis (GA-PLS-DA) allowed differentiation with 92% sensitivity and 96% specificity at the spectral level in external validation. Receiver operating characteristic (ROC) analysis showed 100% classification at the donor level in external validation. These results demonstrate further that Raman spectroscopy, combined with chemometrics, is a promising, noninvasive tool for CD diagnosis.
</description>
<pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159861</guid>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</item>
<item>
<title>Brachial Plexopathy in Head and Neck Cancer Potentially Related to LET-Dependent RBE</title>
<link>https://hdl.handle.net/1721.1/159859</link>
<description>Brachial Plexopathy in Head and Neck Cancer Potentially Related to LET-Dependent RBE
Hanna, Abanob; Casper, Anthony; Dagan, Roi; Grewal, Hardev S.; Park, Jiyeon; Brooks, Eric D.; Traneus, Erik; Glimelius, Lars; Johnson, Perry B.; Saki, Mohammad; Zhang, Yawei; Willoughby, Twyla R.; Bradley, Julie A.; Browne, Jackson; Artz, Mark E.
Proton beam therapy for head and neck cancers traditionally employs a fixed relative biological effectiveness (RBE) of 1.1, which may underestimate actual biological effects in critical structures. This study evaluates how Linear Energy Transfer (LET) optimization could potentially prevent radiation-induced brachial plexopathy (RIBP). (1) Case presentation: A 65-year-old male with stage IVA p16-positive oropharyngeal squamous cell carcinoma received pencil-beam-scanning intensity-modulated proton therapy with concurrent cisplatin. Due to a right level 4 neck node, the high-risk target volume overlapped with the brachial plexus, resulting in a D0.1cc of 70.3 Gy (RBE = 1.1). Four years post-treatment, the patient developed progressive right upper extremity paresthesia, weakness, and dysesthesia. Electromyography revealed myokymia consistent with brachial plexopathy, while MRI showed hyperintensity of the right brachial plexus corresponding to the radiation field. Conservative treatment with pentoxifylline, gabapentin, and physical therapy improved his symptoms. (2) Methods: The original treatment plan was retrospectively analyzed using Monte Carlo dose algorithms and LET-dependent RBE models from McMahon and McNamara. An LET-optimized plan was created to limit LETd to 2.0 keV/µm in the brachial plexus. (3) Results: The relative biological equivalent (RBE) dose to 0.1cc of the brachial plexus was 77.8 Gy (CGE RBE), exceeding tolerance. The LET-optimized plan reduced the brachial plexus D0.1cc to 59.4 Gy (RBE = 1.1) and 63.2 Gy (CGE RBE), an 18.8% decrease, while maintaining target coverage. LETd within the brachial plexus enhancement decreased from 5.3 to 2.6 keV/μm. (4) Conclusion: This case highlights the potential clinical importance of LET optimization in proton therapy planning, particularly when organs-at-risk overlap with target volumes. By reducing LETd from 5.3 to 2.6 keV/μm and the biological equivalent dose by 18.8%, LET optimization could potentially prevent late toxicities, like RIBP, while maintaining target coverage.
</description>
<pubDate>Thu, 29 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159859</guid>
<dc:date>2025-05-29T00:00:00Z</dc:date>
</item>
<item>
<title>Advanced Machine Learning Methods for the Prediction of the Optical Parameters of Tellurite Glasses</title>
<link>https://hdl.handle.net/1721.1/159858</link>
<description>Advanced Machine Learning Methods for the Prediction of the Optical Parameters of Tellurite Glasses
Ahmadi, Fahimeh; Hajihassani, Mohsen; Sivenas, Tryfon; Papanikolaou, Stefanos; Asteris, Panagiotis G.
This study evaluates the predictive performance of advanced machine learning models, including DeepBoost, XGBoost, CatBoost, RF, and MLP, in estimating the Ω2, Ω4, and Ω6 parameters based on a comprehensive set of input variables. Among the models, DeepBoost consistently demonstrated the best performance across the training and testing phases. For the Ω2 prediction, DeepBoost achieved an R2 of 0.974 and accuracy of 99.895% in the training phase, with corresponding values of 0.971 and 99.902% in the testing phase. In comparison, XGBoost ranked second with an R2 of 0.929 and accuracy of 99.870% during testing. For Ω4, DeepBoost achieved a training phase R2 of 0.955 and accuracy of 99.846%, while the testing phase results included an R2 of 0.945 and accuracy of 99.951%. Similar trends were observed for Ω6, where DeepBoost obtained near-perfect training phase results (R2 = 0.997, accuracy = 99.968%) and testing phase performance (R2 = 0.994, accuracy = 99.946%). These findings are further supported by violin plots and correlation analyses, underscoring DeepBoost’s superior predictive reliability and generalization capabilities. This work highlights the importance of model selection in predictive tasks and demonstrates the potential of machine learning for capturing complex relationships in data.
</description>
<pubDate>Sun, 25 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159858</guid>
<dc:date>2025-05-25T00:00:00Z</dc:date>
</item>
<item>
<title>Improving Tandem Fluency Through Utilization of Deep Learning to Predict Human Motion in Exoskeleton</title>
<link>https://hdl.handle.net/1721.1/159857</link>
<description>Improving Tandem Fluency Through Utilization of Deep Learning to Predict Human Motion in Exoskeleton
Koo, Bon Ho; Siu, Ho Chit; Apostolides, Luke; Kim, Sangbae; Petersen, Lonnie G.
Actuators 2025, 14(6), 260; https://doi.org/10.3390/act14060260
Submission received: 8 April 2025 / Revised: 13 May 2025 / Accepted: 20 May 2025 / Published: 23 May 2025
(This article belongs to the Special Issue Recent Advances in Soft Actuators, Robotics and Intelligence)
Today’s exoskeletons face challenges with low fluency (a quantifiable alternative to “seamlessness”), hypothesized to be caused by a lag in active control innate in many leader–follower paradigms seen in contemporary systems, leading to inefficiencies and discomfort. Furthermore, tandem fluency, a variation of fluency specific to tandem robot systems such as exoskeletons, is yet to be rigorously tested in practice. This study aims to utilize metrics of tandem fluency in order to demonstrate improved human–robot interaction (HRI) in exoskeletons through human subject testing of a prototype 1 degree of freedom (DoF) exoskeleton using a motion-prediction bidirectional long short-term memory (bi-LSTM) deep learning network. Subjects were recruited to conduct various upper body exercises about the elbow joint, and the collected sEMG, goniometer, and gas exchange data were used to design, test, optimize, and assess the performance of the 1 DoF exoskeleton using tandem fluency metrics. We found that the correlation between I-ACT, a metric of tandem fluency, the subjective survey responses, and metabolic data suggests that the use of a predictive bi-LSTM network to control a 1 DoF exoskeleton about the elbow results in an overall positive trend, which may correlate to high tandem fluency.
</description>
<pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159857</guid>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>A Potent Fluorescent Derivative of 8-Hydroxyquinoline Suggests Cell Wall Damage as a Possible Cellular Action of the 5-Triazole 8-Hydroxyquinoline Class</title>
<link>https://hdl.handle.net/1721.1/159856</link>
<description>A Potent Fluorescent Derivative of 8-Hydroxyquinoline Suggests Cell Wall Damage as a Possible Cellular Action of the 5-Triazole 8-Hydroxyquinoline Class
Gentz, Caroline de Bem; Lopes, Marcela Silva; Quatrin, Priscilla Maciel; Gionbelli, Mariana Pies; de Cesare, Maycon Antonio; Perin, Ana Paula; Lopes, William; Fuentefria, Alexandre Meneghello; Vainstein, Marilene Henning; Andrade, Saulo Fernandes de
Fungal infections are a major but often neglected global health challenge, affecting both human health and agricultural productivity. Current treatments are limited by few drug classes and increasing multidrug resistance, exacerbated by the widespread use of antifungal agents in clinical and agricultural settings. This study investigates the antifungal potential of a novel 8-hydroxyquinoline derivative with a triazole core at the 5-position, synthesized to improve both efficacy and mechanistic understanding as a fluorescent chemical probe. Biological assays demonstrated significant antifungal activity of compound &lt;b&gt;10&lt;/b&gt;, which was active against all &lt;i&gt;Candida&lt;/i&gt; species, dermatophytes, and &lt;i&gt;Fusarium solani&lt;/i&gt;, with MIC values ranging from 0.5 to 4 &amp;micro;g/mL. Confocal fluorescence microscopy of treated fungal cells showed a high accumulation of compound &lt;b&gt;10&lt;/b&gt; at the cell edge. To further investigate the mode of action, results from a sorbitol protection assay suggested a possible cell wall action, and scanning electron microscopy (SEM) revealed cell wall disruption, such as cell shrinkage and surface roughness, in treated fungal cells. These findings highlight the 8-hydroxyquinoline–triazole scaffold as a promising antifungal agent with cell-wall-damaging properties, providing a basis for future therapeutic development against human and plant fungal pathogens.
</description>
<pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159856</guid>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum power flows: from theory to practice</title>
<link>https://hdl.handle.net/1721.1/159855</link>
<description>Quantum power flows: from theory to practice
Liu, Junyu; Zheng, Han; Hanada, Masanori; Setia, Kanav; Wu, Dan
The high-level integration of spatially dispersed renewable energies can greatly enlarge future smart grid size and complicate system operations. Existing numerical methods based on classical computational oracles may be challenged to fulfill efficiency requirements for future smart grid evaluations, where modern advanced computational technologies, specifically quantum computing, have significant potential to help. In this paper, we discuss applications of quantum computing algorithms to state-of-the-art smart grid problems. We suggest a potential exponential quantum speedup from the use of the Harrow-Hassidim-Lloyd (HHL) algorithm for solving the sparse linear systems of equations that arise in Newton’s method for power-flow problems. However, practical implementations of the algorithm are limited by the noise of quantum circuits, the hardness of realizing quantum random access memories (QRAM), and the depth of the required quantum circuits. We benchmark the hardware and software requirements of state-of-the-art power-flow algorithms, including QRAM requirements from hybrid phonon-transmon systems, and explicit gate counts used in realizations of HHL. We also develop near-term power-flow algorithms based on variational quantum circuits and implement physical experiments on 6 qubits with a truncated version of power flows.
</description>
<pubDate>Thu, 05 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159855</guid>
<dc:date>2024-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>Accelerating Diverse Cell-Based Therapies Through Scalable Design</title>
<link>https://hdl.handle.net/1721.1/159844</link>
<description>Accelerating Diverse Cell-Based Therapies Through Scalable Design
Peterman, Emma L; Ploessl, Deon S; Galloway, Kate E
Augmenting cells with novel, genetically encoded functions will support therapies that expand beyond natural capacity for immune surveillance and tissue regeneration. However, engineering cells at scale with transgenic cargoes remains a challenge in realizing the potential of cell-based therapies. In this review, we introduce a range of applications for engineering primary cells and stem cells for cell-based therapies. We highlight tools and advances that have launched mammalian cell engineering from bioproduction to precision editing of therapeutically relevant cells. Additionally, we examine how transgenesis methods and genetic cargo designs can be tailored for performance. Altogether, we offer a vision for accelerating the translation of innovative cell-based therapies by harnessing diverse cell types, integrating the expanding array of synthetic biology tools, and building cellular tools through advanced genome writing techniques.
</description>
<pubDate>Wed, 24 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159844</guid>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Guaranteeing System-level Properties in Genetic Circuits Subject to Context Effects</title>
<link>https://hdl.handle.net/1721.1/159843</link>
<description>Guaranteeing System-level Properties in Genetic Circuits Subject to Context Effects
Incer, Inigo; Pandey, Ayush; Nolan, Nicholas; Peterman, Emma L; Galloway, Kate E; Sontag, Eduardo D; Del Vecchio, Domitilla
The identification of constraints on system parameters that will ensure that a system achieves desired requirements remains a challenge in synthetic biology, where components unintentionally affect one another by perturbing the cellular environment in which they operate. This paper shows how to solve this problem optimally for a class of input/output system-level specifications, and for unintended interactions due to resource sharing. Specifically, we show how to solve the problem based on the input/output properties of the subsystems and on the unintended interaction map. Our approach is based on the elimination of quantifiers in monotone properties of the system. We illustrate applications of this methodology to guaranteeing system-level performance of multiplexed and sequential biosensing and of bistable genetic circuits.
2024 IEEE 63rd Conference on Decision and Control (CDC), December 16-19, 2024. MiCo, Milan, Italy
</description>
<pubDate>Mon, 16 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159843</guid>
<dc:date>2024-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Engineered transcription factor-binding arrays for DNA-based gene expression control in mammalian cells</title>
<link>https://hdl.handle.net/1721.1/159842</link>
<description>Engineered transcription factor-binding arrays for DNA-based gene expression control in mammalian cells
Zouein, Annalise; Lende-Dorn, Brittany; Galloway, Kate E; Ellis, Tom; Ceroni, Francesca
Tools that manipulate gene expression in mammalian cells without any additional expression are critical for cell engineering applications. Here, we demonstrate the use of arrays of transcription factor (TF) recognition elements (REs) as DNA tools for controlling gene expression. We first demonstrate that TetR-based RE arrays can alter synthetic gene circuit performance. We then open the approach to any TF with a known binding site by developing a new technique called Cloning Troublesome Repeats in Loops (CTRL), which can assemble plasmids with up to 256 RE repeats. Transfection of custom RE array plasmids assembled by CTRL into mammalian cells modifies host cell gene regulation by sequestration of TFs of interest and can sequester both synthetic and native TFs, offering applications in the control of gene circuits and for directing cell fate. This work advances our ability to assemble repetitive DNA arrays and shows how TF-binding RE arrays expand possibilities in mammalian cell engineering.
</description>
<pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159842</guid>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-resolution profiling reveals coupled transcriptional and translational regulation of transgenes</title>
<link>https://hdl.handle.net/1721.1/159841</link>
<description>High-resolution profiling reveals coupled transcriptional and translational regulation of transgenes
Peterman, Emma L; Ploessl, Deon S; Love, Kasey S; Sanabria, Valeria; Daniels, Rachel F; Johnstone, Christopher P; Godavarti, Diya R; Kabaria, Sneha R; Oakes, Conrad G; Pai, Athma A; Galloway, Kate E
Concentrations of RNAs and proteins provide important determinants of cell fate. Robust gene circuit design requires an understanding of how the combined actions of individual genetic components influence both messenger RNA (mRNA) and protein levels. Here, we simultaneously measure mRNA and protein levels in single cells using hybridization chain reaction Flow-FISH (HCR Flow-FISH) for a set of commonly used synthetic promoters. We find that promoters generate differences in both the mRNA abundance and the effective translation rate of these transcripts. Stronger promoters not only transcribe more RNA but also show higher effective translation rates. While the strength of the promoter is largely preserved upon genome integration with identical elements, the choice of polyadenylation signal and coding sequence can generate large differences in the profiles of the mRNAs and proteins. We used long-read direct RNA sequencing to define the transcription start and splice sites of common synthetic promoters and to independently vary the defined promoter and 5′ UTR sequences in HCR Flow-FISH. Together, our high-resolution profiling of transgenic mRNAs and proteins offers insight into the impact of common synthetic genetic components on transcriptional and translational mechanisms. By developing a novel framework for quantifying expression profiles of transgenes, we have established a system for building more robust transgenic systems.
</description>
<pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159841</guid>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</item>
<item>
<title>Adenomyosis and Hematometra in a Non-communicating Rudimentary Horn of a Unicornuate Uterus: A Case Report</title>
<link>https://hdl.handle.net/1721.1/159840</link>
<description>Adenomyosis and Hematometra in a Non-communicating Rudimentary Horn of a Unicornuate Uterus: A Case Report
Wali Jebran, Farzana; Jebran, Ahmad M.; Jalalzai, Rana; Saadaat, Ramin; Alizai, Huma; Sadat, Karima
Congenital uterine anomalies with outflow tract obstruction caused by abnormal Müllerian duct system development are rare conditions that can lead to painful pelvic emergencies such as hematometra. A unicornuate uterus with a rudimentary horn occurs in about 2.4–13% of all Müllerian duct anomalies, with a prevalence of 1 in 100,000 fertile women. Clinical symptoms such as dysmenorrhea, dyspareunia, and acute and chronic pelvic pain usually develop at and after menarche. Accurate diagnosis and timely surgical intervention are crucial to prevent complications such as adenomyosis, endometriosis, and infertility. A 21-year-old nulligravida Afghan female presented with severe chronic pelvic pain that worsened during menstruation and coitus. Ultrasonography and magnetic resonance imaging (MRI) revealed two separate uterine bodies. The left body was a unicornuate uterus with an endometrial lining connected to the cervix and the vaginal canal. The right body showed a functioning non-communicating rudimentary horn containing hematometra without a cervical opening. The patient underwent a hemi-hysterectomy, and the rudimentary horn was removed. The histopathological findings revealed adenomyosis with cystic changes. The patient recovered after the operation and her severe pain subsided. This case underscores the importance of timely diagnosis and management of rare non-communicating functioning rudimentary horn anomalies that occur in less than 0.06% of reproductive-age women. Notably, this report adds to the limited literature on this anomaly in that the rudimentary horn contained extensive adenomyosis, highlighting the potential for significant histopathological changes in such anomalies. Precise diagnosis and surgical excision are critical to prevent complications.
</description>
<pubDate>Mon, 10 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159840</guid>
<dc:date>2025-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>The Problem-Ladenness of Theory</title>
<link>https://hdl.handle.net/1721.1/159839</link>
<description>The Problem-Ladenness of Theory
Levenstein, Daniel; De Santo, Aniello; Heijnen, Saskia; Narayan, Manjari; Oude Maatman, Freek J. W.; Rawski, Jonathan; Wright, Cory
The cognitive sciences are facing questions of how to select from competing theories or develop those that suit their current needs. However, traditional accounts of theoretical virtues have not yet proven informative to theory development in these fields. We advance a pragmatic account by which theoretical virtues are heuristics we use to estimate a theory’s contribution to a field’s body of knowledge and the degree to which it increases that knowledge’s ability to solve problems in the field’s domain or problem space. From this perspective, properties that are traditionally considered epistemic virtues, such as a theory’s fit to data or internal coherence, can be couched in terms of problem space coverage, and additional virtues come to light that reflect a theory’s alignment with problem-having agents and the context in a societally embedded scientific system. This approach helps us understand why the needs of different fields result in different kinds of theories and allows us to formulate the challenges facing cognitive science in terms that we hope will facilitate their resolution through further theoretical development.
</description>
<pubDate>Wed, 16 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159839</guid>
<dc:date>2024-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Characteristics of two polarized groups in online social networks’ controversial discourse</title>
<link>https://hdl.handle.net/1721.1/159838</link>
<description>Characteristics of two polarized groups in online social networks’ controversial discourse
Mahmoudi, Amin; Jemielniak, Dariusz; Ciechanowski, Leon
In today’s interconnected world, online social networks play a pivotal role in facilitating global communication. These platforms often host discussions on contentious topics such as climate change, vaccines, and war, leading to the formation of two distinct groups: deniers and believers. Understanding the characteristics of these groups is crucial for predicting information flow and managing the diffusion of information. Moreover, such understanding can enhance machine learning algorithms designed to automatically detect these groups, thereby contributing to the development of strategies to curb the spread of disinformation, including fake news and rumors. In this study, we employ social network analysis measures to extract the characteristics of these groups, conducting experiments on three large-scale datasets of over 22 million tweets. Our findings indicate that, based on network science measures, the denier (anti) group exhibits greater coherence than the believer (pro) group.
</description>
<pubDate>Mon, 30 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159838</guid>
<dc:date>2024-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Research and development of time resolution and time reference adjustment for CMS improved resistive plate chambers (iRPCs)</title>
<link>https://hdl.handle.net/1721.1/159837</link>
<description>Research and development of time resolution and time reference adjustment for CMS improved resistive plate chambers (iRPCs)
Song, J.; Zhao, J.; Hou, Q.; Diao, W.; Cao, P.; Kou, H.; Gong, W.; Wang, N.; Liu, Z.-A.; Samalan, A.; Tytgat, M.; Alves, G. A.; Marujo, F.; Coelho, E. A.
Purpose: Improved resistive plate chambers (iRPCs) will be installed in the challenging forward region of the compact muon solenoid (CMS) during its Phase-2 upgrade. The design target of the iRPC time resolution is 1.5 ns. It will help the Level-1 trigger system distinguish muons from high backgrounds and improve the trigger efficiency. Studying the time resolution after integrating the new backend electronics boards (BEB) is essential for ensuring timing performance. In this system, a time reference (Tref) signal is distributed by the BEB to several frontend electronics boards (FEB) to reset the time-to-digital converters (TDC). In the CMS experiment, the iRPC chambers and on-chamber FEBs are arranged at different positions, resulting in varying Tref arrival times on the FEB side. This paper describes the measures taken to ensure the time resolution of the single path and to adjust the time base for multiple paths. Method: Unique designs were implemented in the chamber, FEB, and BEB to ensure a satisfactory time resolution. Tref adjustments for different paths were performed in bunch-crossing steps (24.950 ns) in the BEB using shift registers, and sub-bunch-crossing adjustment steps were performed in the FEB using the TDC correction module. Finally, the arrival time differences of Tref on different FEBs were less than 1.25 ns after adjustment. Results: The time resolution of the FEB–BEB system was observed to be 32 ps. The time resolution of the chamber FEB–BEB system was measured for the first time and is 554 ps at an iRPC working point of 7200 V. In addition, the Tref arrival time differences of different paths were adjusted from −99.923 (−90.113) ns to 0.073 (−0.141) ns. Conclusion: The test results revealed that the system time resolution and the Tref adjustment performed by the BEB met the Phase-2 upgrade goals.
</description>
<pubDate>Wed, 03 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159837</guid>
<dc:date>2024-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>EEA Presidential Address: The Past, Present and Future of Health Care Reform</title>
<link>https://hdl.handle.net/1721.1/159836</link>
<description>EEA Presidential Address: The Past, Present and Future of Health Care Reform
Gruber, Jonathan
The starting point for my speech is the explosive growth in the field of health economics. In 1990, the American Economic Review published just two articles in health economics; now it publishes about five per year. In the American Economic Journal: Economic Policy and American Economic Journal: Applied Economics, major new general-interest journals in health economics, about one in eight articles published in 2017 was in health economics. And what has made health economics so fascinating is that its impact has been felt not just in the scholarly world but also in the policy world, most notably through the Affordable Care Act (ACA) in 2010.

One of the most frustrating aspects of being a health economist is that expectations for health care suffer from extreme black-and-white thinking. Is the ACA a failure or a success? Are health care costs under control or not under control? Is health care reform over or still going? The answer to all of these is yes! When you have a sector that is 18% of the US economy, there are never simple yes and no answers.

In particular, one of the most frustrating aspects of working on health care reform is the idea that we have ever “done” health care reform. Health care reform is not a single battle; it is an ongoing war that will never be fully resolved. So when thinking about health care reform, it is important to understand where we have been, where we are, and where we need to go next – and that’s what I’ll try to cover in this lecture.
</description>
<pubDate>Tue, 27 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159836</guid>
<dc:date>2024-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Higher rank flag sheaves on surfaces</title>
<link>https://hdl.handle.net/1721.1/159829</link>
<description>Higher rank flag sheaves on surfaces
Sheshmani, Artan; Yau, Shing-Tung
We study the moduli space of holomorphic triples φ: E_1 → E_2, composed of torsion-free sheaves E_i, i = 1, 2, and a holomorphic morphism between them, over a smooth complex projective surface S. The triples are equipped with the Schmitt stability condition (Schmitt in Algebras Represent Theory 6(1):1–32, 2000). We observe that when the Schmitt stability parameter q(m) becomes sufficiently large, the moduli space of triples benefits from having a perfect relative and absolute deformation-obstruction theory in some cases. We further generalize our construction by gluing triple moduli spaces, and extend the earlier work (Gholampour et al. in Nested Hilbert schemes on surfaces: virtual fundamental class, preprint, arXiv:1701.08899), where the obstruction theory of nested Hilbert schemes over the surface was studied. Here we extend the earlier results to the moduli space of chains E_1 → E_2 → ⋯ → E_n, with injective morphisms φ_i: E_i → E_{i+1} and rk(E_i) ⩾ 1 for all i. There is a connection, by wall-crossing in the master space, between the theory of such higher rank flags and the theory of Higgs pairs on the surface, which provides the means to relate the flag invariants to the local DT invariants of the threefold given by a line bundle on the surface, X := Tot(L → S).
</description>
<pubDate>Tue, 16 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159829</guid>
<dc:date>2024-07-16T00:00:00Z</dc:date>
</item>
<item>
<title>Hyperbolic knotoids</title>
<link>https://hdl.handle.net/1721.1/159828</link>
<description>Hyperbolic knotoids
Adams, Colin; Bonat, Alexandra; Chande, Maya; Chen, Joye; Jiang, Maxwell; Romrell, Zachary; Santiago, Daniel; Shapiro, Benjamin; Woodruff, Dora
In 2010, Turaev introduced knotoids as a variation on knots that replaces the embedding of a circle with the embedding of a closed interval with two endpoints. A variety of knot invariants have been extended to knotoids. Here we provide definitions of hyperbolicity for both spherical and planar knotoids. We prove that the product of hyperbolic spherical knotoids is hyperbolic and the volumes add. We also determine the least volume of a rational spherical knotoid and provide various classes of hyperbolic knotoids. Additionally, we include tables of hyperbolic volumes for both spherical and planar knotoids.
</description>
<pubDate>Mon, 15 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159828</guid>
<dc:date>2024-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>An Evaluation of the Genesis and Construct of State-Level Climate Action Plans Within the USA: Envisioning Carbon-Neutrality</title>
<link>https://hdl.handle.net/1721.1/159827</link>
<description>An Evaluation of the Genesis and Construct of State-Level Climate Action Plans Within the USA: Envisioning Carbon-Neutrality
Hernandez, Rafael E.; Zeringue, Chelsea T.; Zappi, Alex M.; Zappi, Mark E.
Purpose of Review: Responsible environmental stewardship and the protection of our energy stability are issues being faced by the modern world, requiring the collaboration of policymakers, researchers, and the entire population. In order to avoid irreversible damage to the planet and a loss of energy sources, many countries are outlining plans with specific goals of reaching carbon neutrality and accelerating the adoption of cleaner energy production. Recent Findings: States within the USA have drafted, and are drafting, climate action plans to reduce greenhouse gas emissions, with some also focusing on alternative energy use. Most of these plans expect a near-complete or complete shift away from practices that release greenhouse gases into the environment, promising a net-zero release of greenhouse gases statewide within a stated timeframe, generally by 2050. Summary: These documents and reports are integral to each state, and to the country as a whole, in accelerating research, industry investment, and the implementation of important sustainable energy practices. This review analyzes the genesis, document design protocols used, plan format, contents of note, and overriding goals of these state-created action plans.
</description>
<pubDate>Sun, 16 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159827</guid>
<dc:date>2025-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>Well-posedness and regularity properties of 2d β-plane stochastic Navier–Stokes equations in a periodic channel</title>
<link>https://hdl.handle.net/1721.1/159826</link>
<description>Well-posedness and regularity properties of 2d β-plane stochastic Navier–Stokes equations in a periodic channel
Cacchió, Yuri; Hannani, Amirali; Staffilani, Gigliola
We consider the 2d β-plane stochastic Navier–Stokes equations in a periodic channel. We prove well-posedness and the existence of the stationary measure, as well as certain regularity estimates concerning the support of the stationary measure. These explicit estimates are crucial for the rigorous study conducted in [8] of cascade phenomena for these equations. Although one may be able to invoke a more abstract theory to claim well-posedness for the associated initial value problem, to the best of our knowledge, this is the first mathematically rigorous and explicit treatment of this problem involving both the stochastic noise and the Coriolis force in a periodic channel.
</description>
<pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159826</guid>
<dc:date>2024-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Medically Important Fungi in Multi-Species Biofilms: Microbial Interactions, Clinical Implications and Therapeutic Strategies</title>
<link>https://hdl.handle.net/1721.1/159825</link>
<description>Medically Important Fungi in Multi-Species Biofilms: Microbial Interactions, Clinical Implications and Therapeutic Strategies
Mace, Manoela A. M.; Krummenauer, Maria E.; Lopes, William; Vainstein, Marilene H.
Purpose of Review This review aims to elucidate clinically important sites where multi-species biofilms are formed. We highlight key in vitro and in vivo studies, discuss the clinical implications of these biofilms, and explore strategies for their prevention and eradication. Recent Findings Multi-species biofilms significantly enhance antimicrobial resistance and pathogenicity. Synergistic interactions, such as those between Candida albicans and Staphylococcus aureus or Pseudomonas aeruginosa, illustrate how fungal biofilms can elevate bacterial drug resistance. Innovative treatments, including combination therapies and targeting specific biofilm components, show promise in disrupting these resilient communities. Summary Understanding the molecular and environmental factors driving multi-species biofilm formation is crucial for developing effective therapies. Future research should emphasize in vivo interactions, host responses, and the potential of natural substances and polymeric devices to improve treatment outcomes and reduce the clinical burden of multi-species biofilm-associated infections.
</description>
<pubDate>Wed, 02 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159825</guid>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</item>
<item>
<title>Developing and Validating the Artificial Intelligence Literacy Concept Inventory: an Instrument to Assess Artificial Intelligence Literacy among Middle School Students</title>
<link>https://hdl.handle.net/1721.1/159823</link>
<description>Developing and Validating the Artificial Intelligence Literacy Concept Inventory: an Instrument to Assess Artificial Intelligence Literacy among Middle School Students
Zhang, Helen; Perry, Anthony; Lee, Irene
The rapid expansion of Artificial Intelligence (AI) in our society makes it urgent and necessary to develop young students’ AI literacy so that they can become informed citizens and critical consumers of AI technology. Over the past decade, many efforts have focused on developing curricular materials that make AI concepts accessible and engaging to young learners; yet limited research has investigated how to assess learners’ AI literacy, which is critically important to inform the teaching and learning of AI. This paper addresses this issue by reporting the development and validation findings of the AI Literacy Concept Inventory Assessment (AI-CI), a set of multiple-choice questions designed to assess understanding of AI literacy concepts among middle school students. The AI-CI consists of 20 multiple-choice questions examining student understanding of four topics: AI general concepts, logic systems, machine learning general concepts, and supervised learning. The content validity of the AI-CI was established through multiple rounds of expert panel reviews with AI educators and experts, observations of student learning of AI, and cognitive validation interviews. The validity of the AI-CI was established with a sample of 981 students, and the pre-posttest reliability was established with a sample of 108 middle school students who learned AI through experiencing the Developing AI Literacy (DAILy) curriculum. The findings show that the AI-CI is a valid and reliable tool to assess AI literacy at the middle school level.
</description>
<pubDate>Sun, 05 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159823</guid>
<dc:date>2024-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical Strategy for Low-Cost Viral Detection</title>
<link>https://hdl.handle.net/1721.1/159821</link>
<description>Electrochemical Strategy for Low-Cost Viral Detection
Zamani, Marjon; Robson, James M; Fan, Andy; Bono, Michael S; Furst, Ariel L; Klapperich, Catherine M
Sexually transmitted infections, including the human immunodeficiency virus (HIV) and the human papillomavirus (HPV), disproportionally impact those in low-resource settings. Early diagnosis is essential for managing HIV. Similarly, HPV causes nearly all cases of cervical cancer, the majority (90%) of which occur in low-resource settings. Importantly, infection with HPV is six times more likely to progress to cervical cancer in women who are HIV-positive. An inexpensive, adaptable point-of-care test for viral infections would make screening for these viruses more accessible to a broader set of the population. Here, we report a novel, cost-effective electrochemical platform using gold leaf electrodes to detect clinically relevant viral loads. We have combined this platform with loop-mediated isothermal amplification and a CRISPR-based recognition assay to detect HPV. Lower limits of detection were demonstrated down to 10^4 total copies of input nucleic acids, which is a clinically relevant viral load for HPV DNA. Further, proof-of-concept experiments with cervical swab samples, extracted using standard extraction protocols, demonstrated that the strategy is extendable to complex human samples. This adaptable technology could be applied to detect any viral infection rapidly and cost-effectively.
</description>
<pubDate>Wed, 23 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159821</guid>
<dc:date>2021-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>DNA Electrochemistry: Charge-Transport Pathways through DNA Films on Gold</title>
<link>https://hdl.handle.net/1721.1/159819</link>
<description>DNA Electrochemistry: Charge-Transport Pathways through DNA Films on Gold
Nano, Adela; Furst, Ariel L; Hill, Michael G; Barton, Jacqueline K
Over the past 25 years, collective evidence has demonstrated that the DNA base-pair stack serves as a medium for charge transport chemistry in solution and on DNA-modified gold surfaces. Since this charge transport depends sensitively upon the integrity of the DNA base-pair stack, perturbations in base stacking, as may occur with DNA base mismatches, lesions, and protein binding, interrupt DNA charge transport (DNA CT). This sensitivity has led to the development of powerful DNA electrochemical sensors. Given the utility of DNA electrochemistry for sensing, and in response to recent literature, we describe critical protocols and characterizations necessary for performing DNA-mediated electrochemistry. We demonstrate DNA electrochemistry with a fully AT DNA sequence using a thiolated preformed DNA duplex and distinguish this DNA-mediated chemistry from the electrochemistry of largely single-stranded DNA adsorbed to the surface. We also demonstrate the dependence of DNA CT on a fully stacked duplex. An increase in the percentage of mismatches within the DNA monolayer leads to a linear decrease in current flow for a DNA-bound intercalator, where the reaction is DNA-mediated; in contrast, for ruthenium hexammine, which binds electrostatically to DNA so that its redox chemistry is not DNA-mediated, mismatches have no effect on current flow. We find that, when DNA is assembled as a well-hybridized duplex, a DNA-mediated pathway facilitates electron transfer between a well-coupled redox probe and the gold surface. Overall, this report highlights critical points to be emphasized when utilizing DNA electrochemistry and offers explanations and controls for analyzing confounding results.
</description>
<pubDate>Wed, 04 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159819</guid>
<dc:date>2021-08-04T00:00:00Z</dc:date>
</item>
<item>
<title>Strategies for Engineering Affordable Technologies for Point-of-Care Diagnostics of Infectious Diseases</title>
<link>https://hdl.handle.net/1721.1/159818</link>
<description>Strategies for Engineering Affordable Technologies for Point-of-Care Diagnostics of Infectious Diseases
Zamani, Marjon; Furst, Ariel L; Klapperich, Catherine M
Disease prevalence is highest in low-resource settings (LRS) due to the lack of funds, infrastructure, and personnel required to carry out laboratory-based molecular tests. In high-resource settings, gold-standard molecular tests for diseases consist of nucleic acid amplification tests (NAATs) due to their excellent sensitivity and specificity. These tests require the extraction, amplification, and detection of nucleic acids from clinical samples. In high-resource settings, all three of these steps require highly specialized, costly, and onerous equipment that cannot be used in LRS. Nucleic acid extraction involves multiple centrifugation steps. Amplification consists of the polymerase chain reaction (PCR), which requires thermal cyclers. The detection of amplified DNA is typically done with specialized thermal cyclers that are capable of fluorescence detection. Traditional methods used to extract, amplify, and detect nucleic acids cannot be used outside of a laboratory in LRS. Thus, there is a need for affordable point-of-care devices to ease the high burden of disease in LRS. The past decade of work on paper-based fluidic devices has resulted in the invention of many paper-based biosensors for disease detection as well as isothermal amplification techniques that replace PCR. However, a challenge still remains in detecting pathogenic biomarkers from complex human samples without specialized laboratory equipment. Our research has focused on the development of affordable technologies to extract and detect nucleic acids in clinical samples with minimal equipment. Here we describe methods for the paper-based extraction, amplification, and detection of nucleic acids. This Account provides an overview of our latest technologies developed to detect an array of diseases in low-resource settings.
We focus on detecting nucleic acids of H1N1, human papillomavirus (HPV), Neisseria gonorrheae (NG), Chlamydia trachomatis (CT), Trichomonas vaginalis (TV), and malaria from a variety of clinical sample types. H1N1 RNA was extracted from nasopharyngeal swabs; HPV, NG, and CT DNA were extracted from either cervical, urethral, or vaginal swabs; TV DNA was extracted from urine; and malaria DNA was extracted from whole blood. Different sample types necessitate different nucleic acid extraction protocols; we provide guidelines for assay design based on the clinical sample type used. We compare the pros and cons of different isothermal amplification techniques, namely, helicase-dependent amplification (HDA), loop-mediated isothermal amplification (LAMP), and a novel isothermal amplification technique that we developed: isothermal-identical multirepeat sequences (iso-IMRS). Finally, we compare various detection mechanisms, including lateral-flow and electrochemical readouts. Electrochemical readouts frequently employ gold electrodes due to strong gold-thiol coupling. However, the high cost of gold precludes their use in LRS. We discuss our development of novel gold leaf electrodes that can be made without specialized equipment for a fraction of the cost of commercially available gold electrodes.
</description>
<pubDate>Tue, 19 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159818</guid>
<dc:date>2021-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Microbial Electrochemical Technology to Detect and Degrade Organophosphate Pesticides</title>
<link>https://hdl.handle.net/1721.1/159817</link>
<description>A Microbial Electrochemical Technology to Detect and Degrade Organophosphate Pesticides
Karbelkar, Amruta A; Reynolds, Erin E; Ahlmark, Rachel; Furst, Ariel L
Organophosphate (OP) pesticides cause hundreds of illnesses and deaths annually. Unfortunately, exposures are often detected by monitoring degradation products in blood and urine, with few effective methods for detection and remediation at the point of dispersal. We have developed an innovative strategy to remediate these compounds: an engineered microbial technology for the targeted detection and destruction of OP pesticides. This system is based upon microbial electrochemistry using two engineered strains. The strains are combined such that the first microbe (E. coli) degrades the pesticide, while the second (S. oneidensis) generates current in response to the degradation product without requiring external electrochemical stimulus or labels. This cellular technology is unique in that the E. coli serves only as an inert scaffold for enzymes to degrade OPs, circumventing a fundamental requirement of coculture design: maintaining the viability of two microbial strains simultaneously. With this platform, we can detect OP degradation products at submicromolar levels, outperforming reported colorimetric and fluorescence sensors. Importantly, this approach affords a modular, adaptable strategy that can be expanded to additional environmental contaminants.
</description>
<pubDate>Wed, 27 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159817</guid>
<dc:date>2021-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Drug Pricing Stewardship from Mark Cuban’s Cost Plus Generic Drug Program</title>
<link>https://hdl.handle.net/1721.1/159810</link>
<description>Drug Pricing Stewardship from Mark Cuban’s Cost Plus Generic Drug Program
Gulati, Snigdha; Gupta, Mohak; Yan, TingTing; Yelamanchili, Sneha; Xu, Lucy Q.; Bharani, Tina; Naji, Ali; Agarwal, Divyansh
Importance The exceedingly high US spending per capita on prescription medications is mediated, at least in part, by the inefficiencies of existing generic pharmaceutical distribution and reimbursement systems; yet the extent of potential savings and areas for targeted interventions for generic drug prescribers remain underexplored. Objective We aimed to analyze 2021 Medicare Part D spending on generic drugs in comparison with pricing of a low-cost generic drug program, the Mark Cuban Cost Plus Drug Company (MCCPDC), to gauge the extent of achievable potential savings. Design, Setting, and Participants In this retrospective, observational study, we performed a systematic analysis of potential Medicare Part D savings when using MCCPDC generic pricing. The 2023 MCCPDC data, as of August 2023, were obtained from the provider’s publicly available database. The 2021 Medicare Part D data and prescriber datasets were obtained from the US Centers for Medicare and Medicaid Services. Main Outcomes and Measures Outcomes included total prescription volume, proportion of drugs with savings, total US dollar Medicare savings, and average weighted price reduction per unit drug. Results were stratified by medical and surgical subspecialties to identify areas for targeted interventions. Subspecialty-wise contribution to total savings versus contribution to total prescription volume was characterized. Results Total estimated Medicare Part D savings were $8.6 billion using 90-day MCCPDC pricing, with surgical drugs accounting for over $900 million. Nearly 80% of the examined drugs were more cost-effective through MCCPDC using a 90-day supply. Commonly prescribed drugs in cardiology, psychiatry, neurology, transplant surgery, and urology demonstrated the highest estimated absolute savings. The most disproportionate savings relative to prescription volume were observed for drugs in oncology, gynecology, infectious disease, transplant surgery, and colorectal surgery.
Conclusions and Relevance This study underscores the significant potential for Medicare Part D savings through strategies that address the systemic overpayment for generic medications. We identified key areas for reform as well as specific medical and surgical subspecialties where targeted interventions could yield substantial savings.
</description>
<pubDate>Wed, 21 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159810</guid>
<dc:date>2024-08-21T00:00:00Z</dc:date>
</item>
<item>
<title>Equivariant Neural Networks for Controlling Dynamic Spatial Light Modulators</title>
<link>https://hdl.handle.net/1721.1/159809</link>
<description>Equivariant Neural Networks for Controlling Dynamic Spatial Light Modulators
Vasisht Shankar, Sumukh; Wang, Rui; D’Souza, Darrel; Singer, Jonathan P.; Walters, Robin
Spatial light modulators (SLMs) are devices that are capable of manipulating incident light by passing it through an array of phase/intensity-altering pixels. A recent alternative design involves creating a phase mask by directing a thin film of fluid with thermocapillary forces generated by a controlled temperature map. However, it is difficult to determine the input temperature signal necessary to induce a given height profile. The relationship between temperature and height is given by the thin film equation, a fourth-order nonlinear PDE, which is difficult to solve numerically. To address this problem, we train deep neural networks to directly solve the inverse problem, mapping from the desired height profiles to the needed temperature patterns. We design novel equivariant networks incorporating the scale and rotation symmetry of the underlying thin film equation. We demonstrate the effectiveness of equivariant models for learning the complex relationship between input temperature signals and the resulting light patterns, showing they are more accurate than non-equivariant baselines and very computationally efficient. This work has implications for a range of applications, including high-power laser systems, and could lead to more efficient and effective ways to modulate light in SLMs.
</description>
<pubDate>Fri, 22 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159809</guid>
<dc:date>2024-11-22T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Computational Finance: Quantum Algorithm for Portfolio Optimization</title>
<link>https://hdl.handle.net/1721.1/159808</link>
<description>Quantum Computational Finance: Quantum Algorithm for Portfolio Optimization
Rebentrost, Patrick; Lloyd, Seth
We present a quantum algorithm for portfolio optimization. We discuss the market data input of asset prices, the processing of such data via quantum operations, and the output of financially relevant results. Given quantum access to a historical record of asset returns, the algorithm determines the optimal risk-return tradeoff curve and allows one to sample from the optimal portfolio. The algorithm can in principle attain a run time of poly(log(N)), where N is the number of assets. Direct classical algorithms for determining the risk-return curve and other properties of the optimal portfolio take time poly(N), and we discuss potential quantum speedups in light of efficient classical sampling approaches.
</description>
<pubDate>Mon, 12 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159808</guid>
<dc:date>2024-08-12T00:00:00Z</dc:date>
</item>
<item>
<title>Smartphone and Wearable Device-Based Digital Phenotyping to Understand Substance use and its Syndemics</title>
<link>https://hdl.handle.net/1721.1/159807</link>
<description>Smartphone and Wearable Device-Based Digital Phenotyping to Understand Substance use and its Syndemics
Lee, Jasper S.; Browning, Emma; Hokayem, Joanne; Albrechta, Hannah; Goodman, Georgia R.; Venkatasubramanian, Krishna; Dumas, Arlen; Carreiro, Stephanie P.; O’Cleirigh, Conall; Chai, Peter R.
Digital phenotyping is a process that allows researchers to leverage smartphone and wearable data to explore how technology use relates to behavioral health outcomes. In this Research Concepts article, we provide background on prior research that has employed digital phenotyping; the fundamentals of how digital phenotyping works, using examples from participant data; the application of digital phenotyping in the context of substance use and its syndemics; and the ethical, legal and social implications of digital phenotyping. We discuss applications for digital phenotyping in medical toxicology, as well as potential uses for digital phenotyping in future research. We also highlight the importance of obtaining ground truth annotation in order to identify and establish digital phenotypes of key behaviors of interest. Finally, there are many potential roles for medical toxicologists to leverage digital phenotyping both in research and in the future as a clinical tool to better understand the contextual features associated with drug poisoning and overdose. This article demonstrates how medical toxicologists and researchers can progress through phases of a research trajectory using digital phenotyping to better understand behavior and its association with smartphone usage.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159807</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>A School-Based Evaluation of the FRIENDS Resilience Programs: Implications for Mental Health Concerns in Rural Students</title>
<link>https://hdl.handle.net/1721.1/159806</link>
<description>A School-Based Evaluation of the FRIENDS Resilience Programs: Implications for Mental Health Concerns in Rural Students
Schwartz-Mette, Rebecca A.; Lawrence, Hannah R.; Fearey, Eliot; Shankman, Jessica; Nichols, Janet; Walters, Joy; Perello, Elena; Smith, Susan
The FRIENDS Resilience programs provide cognitive-behavioral skills across the developmental spectrum and can be applied as a universal or selective prevention program. In the current study, we assessed whether, relative to the schools’ existing counseling curriculum (“guidance”), FRIENDS improved social skills, problem behaviors, and academic competence in a sample of 650 students in kindergarten, 2nd, 5th, and 7th grade in a rural community in the northeastern United States. Student, parent, and teacher reports were obtained pre-intervention, post-intervention, and 4 months later. Analyses examined FRIENDS as a universal prevention program in the general school population and as a selective intervention for at-risk students (those with elevated existing symptoms). Teachers reported improvements in social skills, problem behaviors, and academic competence, and parents reported improved problem behaviors immediately post-intervention for all students receiving FRIENDS and guidance. However, at-risk students who received FRIENDS experienced significantly greater improvements in teacher-reported problem behaviors compared to those who received guidance. When assessing changes over time once all students had received FRIENDS, teacher-rated social skills and academic competence improved, and student- and parent-rated problem behaviors decreased from pre- to post-FRIENDS and 4-month follow-up. Effects were consistent for the overall sample and at-risk students, with stronger effects for those at risk. These small yet significant effects of FRIENDS as universal prevention may be more limited relative to the usual guidance curriculum, but preventative effects may be enhanced for those students in more immediate need of support. Directions for future evaluation of FRIENDS are discussed.
</description>
<pubDate>Sat, 29 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159806</guid>
<dc:date>2024-06-29T00:00:00Z</dc:date>
</item>
<item>
<title>Protection of Anaerobic Microbes from Processing Stressors Using Metal–Phenolic Networks</title>
<link>https://hdl.handle.net/1721.1/159794</link>
<description>Protection of Anaerobic Microbes from Processing Stressors Using Metal–Phenolic Networks
Fan, Gang; Wasuwanich, Pris; Rodriguez-Otero, Mariela R; Furst, Ariel L
The gut microbiome is essential to maintain overall health and prevent disease, which can occur when these microbes are not in homeostasis. Microbial biotherapeutics are important to combat these issues, but they must be alive at the time of delivery for efficacy. Many potentially therapeutic species are anaerobes and thus are difficult to manufacture because of the limited efficacy of existing protective methods, making their production nearly impossible. We have developed a self-assembling cellular coating to improve the viability and stability of the next-generation biotherapeutic Bacteroides thetaiotaomicron. We show protection from both harsh processing conditions and oxygen exposure, even in the absence of canonical cryoprotectants. This advance will increase the range of microbes that can be stably manufactured and facilitate the development of emerging strains of interest by ensuring their postproduction viability.
</description>
<pubDate>Wed, 16 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159794</guid>
<dc:date>2022-02-16T00:00:00Z</dc:date>
</item>
<item>
<title>What’s for Dessert?</title>
<link>https://hdl.handle.net/1721.1/159793</link>
<description>What’s for Dessert?
Khovanova, Tanya; Klain, Daniel A.
Dan and Tanya meet in a coffeehouse and decide to have dessert. Both are watching their calories, so they decide to share. They would like to find a dessert that they will both enjoy, and to do so quickly, with a minimum of negotiation or calculation. How should they choose?
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159793</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>The changing climate of the Mediterranean basin</title>
<link>https://hdl.handle.net/1721.1/159789</link>
<description>The changing climate of the Mediterranean basin
Carli, Bruno; Malanotte-Rizzoli, Paola; Sanso’, Fernando
This short paper introduces the topical collection of Rendiconti Lincei. Scienze Fisiche e Naturali, which includes contributions originating from those presented at the Conference on “The Mediterranean System: a hotspot for climate changes and adaptation”, held 21-22 March 2023 at the Accademia Nazionale dei Lincei in Rome. The physics of the Earth system, particularly the coupled ocean/atmosphere, constitutes the foundation for modelling the processes of climate change; its consequences strongly impact human society, and adaptation measures are required to mitigate its effects. This paper summarizes these factors by focusing on the Mediterranean Basin, which can be considered a laboratory for studying, understanding and modelling global processes.
</description>
<pubDate>Mon, 17 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159789</guid>
<dc:date>2024-06-17T00:00:00Z</dc:date>
</item>
<item>
<title>Differential Effects of Pharmacologic and Mechanical Support on Right-Left Ventricular Coupling</title>
<link>https://hdl.handle.net/1721.1/159788</link>
<description>Differential Effects of Pharmacologic and Mechanical Support on Right-Left Ventricular Coupling
Lamberti, Kimberly K.; Goffer, Efrat M.; Edelman, Elazer R.; Keller, Steven P.
Background Percutaneous ventricular assist devices are increasingly relied on to maintain perfusion for cardiogenic shock patients. Optimal medical management strategies, however, remain uncertain owing to limited understanding of interventricular effects. This study analyzed the effects of pharmacologic and left-sided mechanical support on right ventricular function. Methods A porcine model was developed to assess biventricular function during bolus pharmacologic administration before and after left-sided percutaneous ventricular assist and in cardiogenic shock. Results The presence of mechanical support increased right ventricular load and stress with respect to the left ventricle. This shifted and exaggerated the relative effects of commonly used vasoactive agents. Furthermore, induction of cardiogenic shock led to differential pulmonary vascular and right ventricular responses. Conclusions Left ventricular ischemia and mechanical support altered interventricular coupling. Resulting impacts of pharmacologic agents indicate differential right heart responses and sensitivity to treatments and the need for further study to optimize biventricular function in shock patients.
</description>
<pubDate>Mon, 20 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159788</guid>
<dc:date>2024-05-20T00:00:00Z</dc:date>
</item>
<item>
<title>On Polynomial Carleson Operators Along Quadratic Hypersurfaces</title>
<link>https://hdl.handle.net/1721.1/159787</link>
<description>On Polynomial Carleson Operators Along Quadratic Hypersurfaces
Anderson, Theresa C.; Maldague, Dominique; Pierce, Lillian B.; Yung, Po-Lam
We prove that a maximally modulated singular oscillatory integral operator along a hypersurface defined by (y, Q(y)) ⊆ R^{n+1}, for an arbitrary non-degenerate quadratic form Q, admits an a priori bound on L^p for all 1 &lt; p &lt; ∞, for each n ≥ 2. This operator takes the form of a polynomial Carleson operator of Radon-type, in which the maximally modulated phases lie in the real span of {p_2, …, p_d} for any set of fixed real-valued polynomials p_j such that p_j is homogeneous of degree j, and p_2 is not a multiple of Q(y). The general method developed in this work applies to quadratic forms of arbitrary signature, while previous work considered only the special positive definite case Q(y) = |y|^2.
</description>
<pubDate>Fri, 23 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159787</guid>
<dc:date>2024-08-23T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Reality for Postoperative Pain Management: A Review of Current Evidence</title>
<link>https://hdl.handle.net/1721.1/159786</link>
<description>Virtual Reality for Postoperative Pain Management: A Review of Current Evidence
Malik, Aila; Elshazly, Tarek; Pokuri, Krishna; Apai, Carol; Rothkrug, Alex; Hasoon, Jamal; Chung, Matthew; Ye, Zhewei; Bhayani, Sadiq; Kaye, Alan D.; Liu, Henry; Lang, Min; Yong, R. J.; Donjow, Aleksy R.
Purpose of Review With the ongoing opioid crisis, there is a continued need to develop multimodal pain management strategies inclusive of non-pharmacological treatments. Virtual reality (VR) offers a non-invasive treatment approach for the management of acute and chronic pain including postoperative pain. The aim of this review is to describe the use of VR and its effect on pain-related outcome measures compared to routine care in various types of surgical procedures. Recent Findings Severe postoperative pain is associated with an increased risk of medical complications and may lead to the development of chronic pain. VR-based interventions are a form of distraction therapy that attenuates pain perception and have been shown to reduce activity in central pain-processing regions. In patients undergoing cardiac surgery, VR may reduce postoperative pain and improve physiological parameters such as heart rate and blood pressure. VR technology was found to have a high satisfaction rate in patients undergoing laparoscopic abdominal surgeries. Three-dimensional (3D) VR interventions may be useful for postoperative pain control in patients undergoing head and neck surgery. VR technology has revealed mixed results for postoperative pain control following orthopedic procedures although it has beneficial effects on functional outcomes during postoperative rehabilitation. In the pediatric population, VR is notable for its applicability in postoperative pain control and anxiety. Summary VR technology is a novel, non-pharmacologic adjunct in the management of postoperative pain. Current studies are limited regarding therapy adaptations for the elderly population. High-quality randomized controlled trials are needed to establish the clinical effectiveness of VR-based therapies in the postoperative setting.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159786</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Uniform robot relocation is hard in only two directions even without obstacles</title>
<link>https://hdl.handle.net/1721.1/159784</link>
<description>Uniform robot relocation is hard in only two directions even without obstacles
Caballero, David; Cantu, Angel A.; Gomez, Timothy; Luchsinger, Austin; Schweller, Robert; Wylie, Tim
Given n unit-sized robots contained within a square grid surrounded by four walls, we ask the question of whether it is possible to move a particular robot a to a specific grid location b by performing a sequence of global step operations in which all robots move one grid step in the same cardinal direction (if not blocked by a wall or other blocked robots). We show this problem is NP-complete when restricted to just two directions (south and west). This answers the simplest fundamental problem in uniform global unit tilt swarm robotics. We then consider a relaxed version of this problem called row relocation in which the goal is to move a robot a to a specific row regardless of its horizontal placement. We show that if asking about the first row of the square grid (bottom-most), then this version of the problem is solvable in polynomial time. Finally, we discuss several areas for future research and open problems.
</description>
<pubDate>Fri, 13 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159784</guid>
<dc:date>2024-12-13T00:00:00Z</dc:date>
</item>
<item>
<title>Exterior-Point Optimization for Sparse and Low-Rank Optimization</title>
<link>https://hdl.handle.net/1721.1/159783</link>
<description>Exterior-Point Optimization for Sparse and Low-Rank Optimization
Das Gupta, Shuvomoy; Stellato, Bartolomeo; Van Parys, Bart P. G.
Many problems of substantial current interest in machine learning, statistics, and data science can be formulated as sparse and low-rank optimization problems. In this paper, we present the nonconvex exterior-point optimization solver (NExOS)—a first-order algorithm tailored to sparse and low-rank optimization problems. We consider the problem of minimizing a convex function over a nonconvex constraint set, where the set can be decomposed as the intersection of a compact convex set and a nonconvex set involving sparse or low-rank constraints. Unlike the convex relaxation approaches, NExOS finds a locally optimal point of the original problem by solving a sequence of penalized problems with strictly decreasing penalty parameters by exploiting the nonconvex geometry. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems, empirically demonstrating that our algorithm outperforms specialized methods.
</description>
<pubDate>Sun, 26 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159783</guid>
<dc:date>2024-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>The road to renal denervation for hypertension and beyond (HF): two decades of failed, succeeded, and to be determined</title>
<link>https://hdl.handle.net/1721.1/159671</link>
<description>The road to renal denervation for hypertension and beyond (HF): two decades of failed, succeeded, and to be determined
Jiang, Haoran; Kittipibul, Veraprapas; Mahfoud, Felix; Böhm, Michael; Sobotka, Paul A.; Esler, Murray; Wang, Jie; Fudim, Marat
Activation of the sympathetic nervous system has been implicated in the development of hypertension. Two established approaches for treating hypertension are pharmacotherapy and lifestyle changes. With an improved understanding of renal nerve anatomy and physiology, renal denervation has been proposed as an alternative treatment for hypertension. Specifically, it has been shown that the interruption of sympathetic nerves connecting the kidney and the sympathetic nervous system can reduce blood pressure. Here, we present a review on how renal denervation can help hypertension patients, specifically focusing on our novel understanding of renal nerve anatomy, denervation technique, and subsequent clinical trials, and how it may be used to treat other cardiovascular diseases like heart failure.
</description>
<pubDate>Thu, 07 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159671</guid>
<dc:date>2024-11-07T00:00:00Z</dc:date>
</item>
<item>
<title>Cardiac contractility modulation in patients with heart failure — A review of the literature</title>
<link>https://hdl.handle.net/1721.1/159670</link>
<description>Cardiac contractility modulation in patients with heart failure — A review of the literature
Bazoukis, George; Saplaouras, Athanasios; Efthymiou, Polyxeni; Yiannikourides, Andronicos; Liu, Tong; Letsas, Konstantinos P.; Efremidis, Michael; Lampropoulos, Konstantinos; Xydonas, Sotirios; Tse, Gary; Armoundas, Antonis A.
Experimental in vivo and in vitro studies showed that electric currents applied during the absolute refractory period can modulate cardiac contractility. In preclinical studies, cardiac contractility modulation (CCM) was found to improve calcium handling, reverse the foetal myocyte gene programming associated with heart failure (HF), and facilitate reverse remodeling. Randomized control trials and observational studies have provided evidence about the safety and efficacy of CCM in patients with HF. Clinically, CCM therapy is indicated to improve the 6-min hall walk, quality of life, and functional status of HF patients who remain symptomatic despite guideline-directed medical treatment without an indication for cardiac resynchronization therapy (CRT) and have a left ventricular ejection fraction (LVEF) ranging from 25 to 45%. Although there are promising results about the role of CCM in HF patients with preserved LVEF (HFpEF), further studies are needed to elucidate the role of CCM therapy in this population. Late gadolinium enhancement (LGE) assessment before CCM implantation has been proposed for guiding the lead placement. Furthermore, the optimal duration of CCM application needs further investigation. This review aims to present the existing evidence regarding the role of CCM therapy in HF patients and identify gaps and challenges that require further studies.
</description>
<pubDate>Fri, 23 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159670</guid>
<dc:date>2024-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Assessment of reinforcement learning algorithms for nuclear power plant fuel optimization</title>
<link>https://hdl.handle.net/1721.1/159669</link>
<description>Assessment of reinforcement learning algorithms for nuclear power plant fuel optimization
Seurin, Paul; Shirvan, Koroush
The nuclear fuel loading pattern optimization problem belongs to the class of large-scale combinatorial optimization. It is also characterized by multiple objectives and constraints, which makes it impossible to solve explicitly. Stochastic optimization methodologies including Genetic Algorithms and Simulated Annealing are used by different nuclear utilities and vendors, but hand-designed solutions continue to be the prevalent method in the industry. To improve on the state of the art, Deep Reinforcement Learning (RL), in particular Proximal Policy Optimization, is leveraged. This work presents a first-of-a-kind approach to utilize deep RL to solve the loading pattern problem and could be leveraged for any engineering design optimization. This paper is also, to our knowledge, the first to propose a study of the behavior of several hyper-parameters that influence the RL algorithm. The algorithm is highly dependent on multiple factors, such as the shape of the objective function derived for the core design, which behaves as a fudge factor that affects the stability of the learning, as well as an exploration/exploitation trade-off that manifests through different parameters such as the number of loading patterns seen by the agents per episode, the number of samples collected before a policy update, and an entropy factor that increases the randomness of the policy during training. We found that RL must be applied similarly to a Gaussian Process in which the acquisition function is replaced by a parametrized policy. Then, once an initial set of hyper-parameters is found, reducing these parameters until no more learning is observed will result in the highest sample efficiency, robustly and stably. This resulted in an economic benefit of $535,000–$642,000 per year per plant. Future work must extend this research to multi-objective settings and compare it to state-of-the-art implementations of stochastic optimization methods.
</description>
<pubDate>Mon, 29 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159669</guid>
<dc:date>2024-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging A Digital Pill System to Understand Prevention-Effective Adherence to Oral Hiv Pre-Exposure Prophylaxis Among Men Who Have Sex with Men with Substance Use</title>
<link>https://hdl.handle.net/1721.1/159668</link>
<description>Leveraging A Digital Pill System to Understand Prevention-Effective Adherence to Oral Hiv Pre-Exposure Prophylaxis Among Men Who Have Sex with Men with Substance Use
Chai, Peter R.; Goodman, Georgia R.; Mohamed, Yassir; Bustamante, Maria J.; Albrechta, Hannah; Lee, Jasper S.; Glynn, Tiffany R.; Boland, Kel; Hokayem, Joanne; Boyer, Edward W.; Rosen, Rochelle K.; Mayer, Kenneth H.; O’Cleirigh, Conall
Daily oral pre-exposure prophylaxis (PrEP) is highly effective for HIV prevention, though efficacy depends on adherence. Digital pill systems (DPS) can enable direct, real-time adherence measurement. HIV-negative men who have sex with men (MSM) with substance use (excluding alcohol) utilized a DPS over 90 days and completed weekly surveys reporting sexual activity, condom use, and substance use. Responses indicating (1) any sexual activity and substance use or (2) condomless anal intercourse (CAI) in the prior week were categorized as high risk for HIV acquisition. PrEP adherence data for the 7-day period preceding each response was dichotomized as ≤ 3 and ≥ 4 doses/week, indicating prevention-effective adherence, and compared by HIV risk level. Thirteen MSM were analyzed (median age: 32). Of 113 surveys, 48.7% indicated high HIV risk, with 12.4% reporting CAI alone, 16.8% any sexual activity and substance use, and 19.5% both CAI and substance use. Weekly mean PrEP adherence was 90.3% (6.3 of 7 doses/week), with ≥ 4 doses/week recorded during 92.0% of weeks. The proportion of participants with ≥ 4 recorded doses/week was 88.9% during weeks with CAI alone, 89.5% during weeks with any sexual activity and substance use, 92.0% during weeks with both CAI and substance use, and 92.8% during lower risk weeks. Participants ingested ≥ 4 doses/week during 89.1% of all high-risk weeks and 94.8% of low-risk weeks. Overall, participants maintained high levels of PrEP adherence while engaging in HIV risk behaviors. DPS can be deployed concurrently with data collection tools to assess ingestion patterns during periods of elevated risk.
</description>
<pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159668</guid>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Sexual Networking and HIV/STI Prevention Among Men who have Sex with Men and Identify as Persons of Color in the Era of COVID-19 in Boston, MA: Qualitative Findings from the National HIV Behavioral Surveillance Project</title>
<link>https://hdl.handle.net/1721.1/159651</link>
<description>Sexual Networking and HIV/STI Prevention Among Men who have Sex with Men and Identify as Persons of Color in the Era of COVID-19 in Boston, MA: Qualitative Findings from the National HIV Behavioral Surveillance Project
O’Cleirigh, Conall; Foley, Jacklyn D.; Stanton, Amelia M.; McKetchnie, Samantha M.; Gulbicki, Lauren R.; Muten, Jennifer; Chai, Peter; Fitch, Calvin; Onofrey, Shauna; Klevens, R. M.; Psaros, Christina
Men who have sex with men and identify as persons of color (MSM of color) are significantly impacted by HIV in the United States. The COVID-19 pandemic may have disproportionately exacerbated HIV-related disparities among MSM of color by affecting sexual networking behaviors and disrupting access to sexual health care. The current study explored the impact of COVID-19 on sexual networking and HIV/sexually transmitted infection (STI) prevention behaviors among MSM of color in Boston, MA. Eighteen semi-structured interviews were conducted via the 2020–2021 Boston sample of the National HIV Behavioral Surveillance (NHBS) project. Eligible participants were at least 18 years old, identified as a man or non-binary person assigned male at birth and as a person of color, and endorsed ever having sex with men. Interviews were coded using inductive and deductive approaches, and themes were extracted using thematic analysis. When participants were asked about the impact of COVID-19 on sexual networking and HIV/STI prevention, the following themes emerged: (1) differing interpretations of COVID-19 public health guidance, (2) behavior change to meet social and sexual needs, (3) limited or changed access to HIV/STI prevention services; and (4) avoidance of healthcare appointments. Overall, the pandemic affected sexual networking and HIV/STI prevention behaviors among MSM of color. Though changes in sexual networking varied, most participants decreased in-person networking, increased dating app use, and prioritized longer-term relationships. Despite loosening of restrictions, these impacts may persist and should inform the adaptation of sexual networking guidance and interventions to mitigate HIV-related disparities in communities of color.
</description>
<pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159651</guid>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>The D-equivalence conjecture for hyper-Kähler varieties via hyperholomorphic bundles</title>
<link>https://hdl.handle.net/1721.1/159543</link>
<description>The D-equivalence conjecture for hyper-Kähler varieties via hyperholomorphic bundles
Maulik, Davesh; Shen, Junliang; Yin, Qizheng; Zhang, Ruxuan
We show that birational hyper-Kähler varieties of K3[n]-type are derived equivalent, establishing the D-equivalence conjecture in these cases. The Fourier–Mukai kernels of our derived equivalences are constructed from projectively hyperholomorphic bundles, following ideas of Markman. Our method also proves a stronger version of the D-equivalence conjecture for hyper-Kähler varieties of K3[n]-type with Brauer classes.
</description>
<pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159543</guid>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</item>
<item>
<title>Research priorities for supporting subnational climate policies</title>
<link>https://hdl.handle.net/1721.1/159447</link>
<description>Research priorities for supporting subnational climate policies
Klemun, Magdalena M; Edwards, Morgan R; Trancik, Jessika E
Growing momentum for decentralized climate policy and the falling costs of low‐carbon technologies are creating new climate change mitigation opportunities for subnational actors. Here we discuss how research can best support these subnational efforts to allow limited resources to stretch further. To stimulate this discussion, we identify four research priorities. (1) Innovation mechanisms examines local policy opportunities for technology improvement to achieve high returns on investments. (2) Co‐benefits analyzes the non‐climate benefits of emissions reductions to highlight how local policies can affect communities directly. (3) Emissions monitoring develops rapid, low‐cost, local measurement strategies to allow communities to assess and weigh in on the emissions impacts of local energy systems. (4) Decision levers reframes large‐scale analyses into more targeted and actionable metrics for local policy decisions. This piece was informed and inspired by a set of interviews we conducted with representatives in business, government, NGOs, and educational institutions actively engaged in local climate action, and by our own research.
</description>
<pubDate>Thu, 13 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159447</guid>
<dc:date>2020-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Air pollution: A systematic review of its psychological, economic, and social effects</title>
<link>https://hdl.handle.net/1721.1/159446</link>
<description>Air pollution: A systematic review of its psychological, economic, and social effects
Lu, Jackson G
This review (178 published articles) is the first to systematically examine the psychological (affective, cognitive, behavioral), economic, and social effects of air pollution beyond its physiological and environmental effects. Affectively, air pollution decreases happiness and life satisfaction, and increases annoyance, anxiety, mental disorders, self-harm, and suicide. Cognitively, it impairs cognitive functioning and decision making. Behaviorally, air pollution triggers avoidance behavior, defensive expenditure, and migration as coping strategies. Economically, it hurts work productivity and stock markets. Socially, it exacerbates criminal activities and worsens perception of the government. Importantly, both actual and perceived air pollution levels matter. Limitations of past research and future directions are discussed.
</description>
<pubDate>Wed, 01 Apr 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159446</guid>
<dc:date>2020-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>There’s plenty of room at the Top: What will drive computer performance after Moore’s law?</title>
<link>https://hdl.handle.net/1721.1/159445</link>
<description>There’s plenty of room at the Top: What will drive computer performance after Moore’s law?
Leiserson, Charles E; Thompson, Neil C; Emer, Joel S; Kuszmaul, Bradley C; Lampson, Butler W; Sanchez, Daniel; Schardl, Tao B
The doubling of the number of transistors on a chip every 2 years, a seemingly inevitable trend that has been called Moore's law, has contributed immensely to improvements in computer performance. However, silicon-based transistors cannot get much smaller than they are today, and other approaches should be explored to keep performance growing. Leiserson et al. review recent examples and argue that the most promising place to look is at the top of the computing stack, where improvements in software, algorithms, and hardware architecture can bring the much-needed boost.
</description>
<pubDate>Fri, 05 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159445</guid>
<dc:date>2020-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>A functional approach for studying technological progress: Extension to energy technology</title>
<link>https://hdl.handle.net/1721.1/159444</link>
<description>A functional approach for studying technological progress: Extension to energy technology
Koh, Heebyung; Magee, Christopher L
This paper extends a broad functional category approach for the study of technological capability progress recently developed and applied to information technology to a second key case—that of energy based technologies. The approach is applied to the same three functional operations—storage, transportation and transformation—that were used for information technology by first building a 100 plus year database for each of the three energy-based functional categories. In agreement with the results for information technology in the first paper, the energy technology results indicate that the functional approach offers a stable methodology for assessing longer time technological progress trends. Moreover, similar to what was found with information technology in the first study, the functional capability for energy technology shows continual—if not continuous—improvement that is best quantitatively described as exponential with respect to time. The absence of capability discontinuities—even with large technology displacement—and the lack of clear saturation effects are found with energy as it was with information. However, some key differences between energy and information technology are seen and these include:&#13;
&#13;
*Lower rates of progress for energy technology over the entire period: 19–37% annually for Information Technology and 3–13% for Energy Technology.&#13;
&#13;
*Substantial variability of progress rates is found within given functional categories for energy compared to relatively small variation within any one category for information technology. The strongest variation is found in capability progress across different energy types.&#13;
&#13;
*More challenging data recovery and metric definition for energy as compared to information technology.&#13;
&#13;
These findings are interpreted in terms of fundamental differences between energy and information including the losses and efficiency constraints on energy. We apply Whitney's insight that these fundamental differences lead to naturally modular information technology artifacts. The higher progress rates of information-based as opposed to energy-based technologies follows since decomposable systems can progress more rapidly due to the greater ease of independent as opposed to simultaneous development. In addition, the broad implications of our findings to studies of the relationships between technical and social change are briefly discussed.
</description>
<pubDate>Tue, 01 Jul 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159444</guid>
<dc:date>2008-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Complex behaviors and various soliton profiles of (2+1)-dimensional complex modified Korteweg-de-Vries Equation</title>
<link>https://hdl.handle.net/1721.1/159433</link>
<description>Complex behaviors and various soliton profiles of (2+1)-dimensional complex modified Korteweg-de-Vries Equation
ur Rahman, Mati; Karaca, Yeliz; Sun, Mei; Baleanu, Dumitru; Alfwzan, Wafa F.
Nonlinear dynamical problems, characterized by unpredictable and chaotic changes among variables over time, pose unique challenges in understanding. This paper explores the coupled nonlinear (2+1)-dimensional complex modified Korteweg-de-Vries (cmKdV) equation, a fundamental equation in applied magnetism and nanophysics. The study focuses on dynamic behaviors, specifically examining bifurcations and equilibrium points leading to chaotic phenomena by introducing an external term to the system. Employing chaos theory, we showcase the chaotic tendencies of the perturbed dynamical system. Additionally, a sensitivity analysis using the Runge-Kutta method reveals the solution’s stability under slight variations in initial conditions. Innovatively, the paper utilizes the planar dynamical system technique to construct various solitons within the governing model. This research provides novel insights into the behavior of the (2+1)-dimensional cmKdV equation and its applications in applied magnetism and nanophysics.
</description>
<pubDate>Sat, 06 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159433</guid>
<dc:date>2024-04-06T00:00:00Z</dc:date>
</item>
<item>
<title>Riemannian Trust Region Methods for SC¹ Minimization</title>
<link>https://hdl.handle.net/1721.1/159432</link>
<description>Riemannian Trust Region Methods for SC¹ Minimization
Zhang, Chenyu; Xiao, Rufeng; Huang, Wen; Jiang, Rujun
Manifold optimization has recently gained significant attention due to its wide range of applications in various areas. This paper introduces the first Riemannian trust region method with a convergence guarantee for minimizing an SC¹ function, which is a differentiable function that has a semismooth gradient vector field, on manifolds. We provide proof of both global and local convergence results, along with demonstrating the local superlinear convergence rate of our proposed method. As an application and to demonstrate our motivation, we utilize our trust region method as a subproblem solver within an augmented Lagrangian method for minimizing nonsmooth nonconvex functions over manifolds. This represents the first approach that fully explores the second-order information of the subproblem in the context of augmented Lagrangian methods on manifolds. Numerical experiments confirm that our method outperforms existing methods.
</description>
<pubDate>Fri, 20 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159432</guid>
<dc:date>2024-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>Hybridizable Discontinuous Galerkin Methods for the Two-Dimensional Monge–Ampère Equation</title>
<link>https://hdl.handle.net/1721.1/159431</link>
<description>Hybridizable Discontinuous Galerkin Methods for the Two-Dimensional Monge–Ampère Equation
Nguyen, Ngoc C.; Peraire, Jaime
We introduce two hybridizable discontinuous Galerkin (HDG) methods for numerically solving the two-dimensional Monge–Ampère equation. The first HDG method is devised to solve the nonlinear elliptic Monge–Ampère equation by using Newton’s method. The second HDG method is devised to solve a sequence of the Poisson equation until convergence to a fixed-point solution of the Monge–Ampère equation is reached. Numerical examples are presented to demonstrate the convergence and accuracy of the HDG methods. Furthermore, the HDG methods are applied to r-adaptive mesh generation by redistributing a given scalar density function via the optimal transport theory. This r-adaptivity methodology leads to the Monge–Ampère equation with a nonlinear Neumann boundary condition arising from the optimal transport of the density function to conform the resulting high-order mesh to the boundary. Hence, we extend the HDG methods to treat the nonlinear Neumann boundary condition. Numerical experiments are presented to illustrate the generation of r-adaptive high-order meshes on planar and curved domains.
</description>
<pubDate>Thu, 27 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159431</guid>
<dc:date>2024-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Active velocity estimation using light curtains via self-supervised multi-armed bandits</title>
<link>https://hdl.handle.net/1721.1/159430</link>
<description>Active velocity estimation using light curtains via self-supervised multi-armed bandits
Ancha, Siddharth; Pathak, Gaurav; Zhang, Ji; Narasimhan, Srinivasa; Held, David
To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a full-stack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path-planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments.
</description>
<pubDate>Sat, 10 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159430</guid>
<dc:date>2024-08-10T00:00:00Z</dc:date>
</item>
<item>
<title>Educators’ motivations in massive open online courses for professional development</title>
<link>https://hdl.handle.net/1721.1/159429</link>
<description>Educators’ motivations in massive open online courses for professional development
Anghel, Ella; Littenberg-Tobias, Joshua; von Davier, Matthias
Massive Open Online Courses (MOOCs) are increasingly popular for teachers’ professional development (PD). Understanding why teachers take MOOCs and how this relates to course completion could help identify underserved needs in teachers’ professional learning. In the current study, we explored this question, as well as potential gaps between intention to complete the course and actual completion. Using a sample of 3,212 participants in four PD MOOCs, we applied topic modeling to open-ended and Likert-style data to identify teachers’ motivations. The results show that most participants had intrinsic or professional motivations, but a subgroup of participants had prosocial motivations, namely, they wanted to support their students. In a set of logistic regressions predicting course completion, we found that participants with intrinsic motivations were less likely to complete a course and participants with prosocial motivations were more likely to do so even after controlling for their initial intention. Our study contributes to the field by, first, identifying an underexplored group of learners, the prosocial learners. More research is needed to better understand this group. We also found that among teachers taking MOOCs, intrinsic motivations were associated with lower levels of engagement, contrary to findings in other populations, making a contribution to motivation theory as well as online learning practice. We concluded that the motivation-engagement relationship is more complex than previously thought, and recommend researchers continue examining this association to understand this discrepancy. Finally, we suggest practitioners take learners’ a priori motivations into account when designing MOOCs, as these could be important for course engagement.
</description>
<pubDate>Mon, 18 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159429</guid>
<dc:date>2024-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling and Characterization of TES-Based Detectors for the Ricochet Experiment</title>
<link>https://hdl.handle.net/1721.1/159428</link>
<description>Modeling and Characterization of TES-Based Detectors for the Ricochet Experiment
Chen, R.; Figueroa-Feliciano, E.; Bratrud, G.; Chang, C. L.; Chaplinsky, L.; Cudmore, E.; Van De Pontseele, W.; Formaggio, J. A.; Harrington, P.; Hertel, S. A.; Hong, Z.; Kennard, K. T.; Li, M.; Lisovenko, M.; Mateo, L. O.
Coherent elastic neutrino-nucleus scattering (CEνNS) offers a valuable approach in searching for physics beyond the standard model. The Ricochet experiment aims to perform a precision measurement of the CEνNS spectrum at the Institut Laue–Langevin nuclear reactor with cryogenic solid-state detectors. The experiment plans to employ an array of cryogenic thermal detectors, each with a mass of around 30 g and an energy threshold of below 100 eV. The array includes nine detectors read out by transition-edge sensors (TES). These TES-based detectors will also serve as demonstrators for future neutrino experiments with thousands of detectors. In this article, we present an update on the characterization and modeling of a prototype TES detector.
</description>
<pubDate>Mon, 15 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159428</guid>
<dc:date>2024-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>Properties of Low TC AlMn TES</title>
<link>https://hdl.handle.net/1721.1/159427</link>
<description>Properties of Low TC AlMn TES
Wang, G.; Bratrud, G.; Chang, C. L.; Chaplinsky, L.; Chen, R.; Cudmore, E.; Van De Pontseele, W.; Figueroa-Feliciano, E.; Formaggio, J. A.; Harrington, P.; Hertel, S. A.; Hong, Z.; Kennard, K. T.; Li, M.; Lisovenko, M.; Mateo, L. O.
Low TC AlMn transition-edge sensors (TESs) have been developed as sensitive thermometers for the Q-Array, which will use superconducting targets to measure the coherent elastic neutrino-nucleus scattering spectrum in the RICOCHET experiment. The TESs are made of manganese-doped aluminum with a titanium and gold antioxidation layer. A prototype TES thermometer consists of two TESs in parallel, an input gold pad in metallic contact with the TESs and an output gold pad and gold thermal link meanders, which are each designed to control the flow of heat through the TESs. We have fabricated and measured low TC AlMn TES chips with or without thermal flow control structures. We present TC measurements of the TESs after the initial fabrication and further TC tuning by re-heating and summarize the thermal property studies of the prototype TES thermometer by measuring I-V curves and complex impedance.
</description>
<pubDate>Mon, 15 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159427</guid>
<dc:date>2024-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>FluidFlower: A Meter-Scale Experimental Laboratory for Geological CO2 Storage</title>
<link>https://hdl.handle.net/1721.1/159426</link>
<description>FluidFlower: A Meter-Scale Experimental Laboratory for Geological CO2 Storage
Nordbotten, Jan M.; Fernø, Martin; Flemisch, Bernd; Juanes, Ruben
The original idea behind the FluidFlower was to construct an experimental laboratory well suited to both scientific research and public outreach. Indeed, a core principle was to allow for demonstrating the key physical mechanisms underpinning geological CO2 storage to the public in what can be perceived as a realistic setting. This motivated the design of a relatively large experiment (about 3 by 2 m), with a transparent glass plate, and where a pH-sensitive dye was used to mark the CO2 concentration in the water phase. With these dimensions, some geological complexity could be included in the experiment, and the use of high-permeability unconsolidated sands reduced the timescales to hours and days, as opposed to the years and centuries of relevance at field conditions.&#13;
&#13;
The science part of the FluidFlower study was facilitated by the serendipitous arrival of the Covid-19 pandemic. We realized that the construction of the FluidFlower was of a scale and purpose that was quite unique, and that the travel restrictions imposed by Covid-19 allowed us to limit the insight non-local scientists would have into the experiments we conducted. This motivated the design of, and call for participation in, a forecasting study during spring 2021—and to our great fortune, good colleagues from around the globe agreed to participate.&#13;
&#13;
The main part of the study took place from early fall 2021 through April 2022, and during this process, it quickly became clear that there was much more to be said about this study than what could fit within a single paper. The idea for creating the special issue you are now reading was thus formed.
</description>
<pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159426</guid>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Numerical investigation of a potential landslide-induced tsunami at the Suofengying reservoir in China</title>
<link>https://hdl.handle.net/1721.1/159425</link>
<description>Numerical investigation of a potential landslide-induced tsunami at the Suofengying reservoir in China
Kafle, Laxman; Xu, Wen-Jie; Nagel, Thomas
Landslides are a severe geohazard around the world. When moving soil masses discharge into a large water body, a tsunami can be generated and exacerbate the devastating effects of the landslide as well as extend the affected area. In this study, based on on-site geological investigation and monitoring, a numerical depth-averaged, two-phase model is established for a hypothetical tsunami in the Suofengying reservoir induced by the potential Bianjiazhai landslide in China, which has previously been identified as critical. The analysis of the simulation results shows that the maximum wave amplitude measured at the gauge point closest to the landslide is 31 m, and the tsunami reaches the reservoir’s dam about 66 s after the landslide initiation. The inundation map provides potential risk areas that could be affected if a landslide occurs with the anticipated characteristics. Under similar conditions, the research results will help guide reservoir operation and landslide-tsunami disaster prevention. Simultaneously, examining the sequence of events in the tsunami disaster chain facilitates the analysis of the fundamental physics governing the propagation of tsunamis within reservoirs. This analysis contributes to the prediction and prevention of landslide-induced tsunami disasters occurring along reservoir banks.
</description>
<pubDate>Sat, 10 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159425</guid>
<dc:date>2024-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetic resonance metrics for identification of cuprizone-induced demyelination in the mouse model of neurodegeneration: a review</title>
<link>https://hdl.handle.net/1721.1/159424</link>
<description>Magnetic resonance metrics for identification of cuprizone-induced demyelination in the mouse model of neurodegeneration: a review
Friesen, Emma; Hari, Kamya; Sheft, Maxina; Thiessen, Jonathan D.; Martin, Melanie
Neurodegenerative disorders, including Multiple Sclerosis (MS), are heterogeneous disorders that affect the myelin sheath of the central nervous system (CNS). Magnetic Resonance Imaging (MRI) provides a non-invasive method for studying, diagnosing, and monitoring disease progression. As an emerging research area, many studies have attempted to connect MR metrics to underlying pathophysiological presentations of heterogeneous neurodegeneration. Most commonly, small animal models are used, including Experimental Autoimmune Encephalomyelitis (EAE), Theiler’s Murine Encephalomyelitis (TMEV), and toxin models including cuprizone (CPZ), lysolecithin, and ethidium bromide (EtBr). A contrast and comparison of these models is presented, with focus on the cuprizone model, followed by a review of literature studying neurodegeneration using MRI and the cuprizone model. Conventional MRI methods including T1 Weighted (T1W) and T2 Weighted (T2W) Imaging are mentioned. Quantitative MRI methods which are sensitive to diffusion, magnetization transfer, susceptibility, relaxation, and chemical composition are discussed in relation to studying the CPZ model. Overall, additional studies are needed to improve both the sensitivity and specificity of MRI metrics for the underlying pathophysiology of neurodegeneration and their relationships, in an attempt to resolve the clinico-radiological paradox. We therefore propose a multiparametric approach for the investigation of MR metrics for underlying pathophysiology.
</description>
<pubDate>Thu, 18 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159424</guid>
<dc:date>2024-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>On the Spielman-Teng Conjecture</title>
<link>https://hdl.handle.net/1721.1/159423</link>
<description>On the Spielman-Teng Conjecture
Sah, Ashwin; Sahasrabudhe, Julian; Sawhney, Mehtaab
Let M be an n×n matrix with iid subgaussian entries with mean 0 and variance 1 and let σn(M) denote the least singular value of M. We prove that $$ \mathbb{P}\big( \sigma _{n}(M) \leqslant \varepsilon n^{-1/2} \big) = (1+o(1)) \varepsilon + e^{- \Omega (n)} $$ for all 0⩽ε≪1. This resolves, up to a 1+o(1) factor, a seminal conjecture of Spielman and Teng.
</description>
<pubDate>Thu, 13 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159423</guid>
<dc:date>2025-02-13T00:00:00Z</dc:date>
</item>
<item>
<title>Large Genus Bounds for the Distribution of Triangulated Surfaces in Moduli Space</title>
<link>https://hdl.handle.net/1721.1/159422</link>
<description>Large Genus Bounds for the Distribution of Triangulated Surfaces in Moduli Space
Vasudevan, Sahana
Triangulated surfaces are compact Riemann surfaces equipped with a conformal triangulation by equilateral triangles. In 2004, Brooks and Makover asked how triangulated surfaces are distributed in the moduli space of Riemann surfaces as the genus tends to infinity. Mirzakhani raised this question in her 2010 ICM address. We show that in the large genus case, triangulated surfaces are well distributed in moduli space in a fairly strong sense. We do this by proving upper and lower bounds for the number of triangulated surfaces lying in a Teichmüller ball in moduli space. In particular, we show that the number of triangulated surfaces lying in a Teichmüller unit ball is at most exponential in the number of triangles, independent of the genus.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159422</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Colored line ensembles for stochastic vertex models</title>
<link>https://hdl.handle.net/1721.1/159421</link>
<description>Colored line ensembles for stochastic vertex models
Aggarwal, Amol; Borodin, Alexei
In this paper we assign a family of n coupled line ensembles to any $U_q(\widehat{\mathfrak{sl}}_{n+1})$ colored stochastic fused vertex model, which satisfies two properties. First, the joint law of their top curves coincides with that of the colored height functions for the vertex model. Second, the n line ensembles satisfy an explicit Gibbs property prescribing their laws if all but a few of their curves are conditioned upon. We further describe several examples of such families of line ensembles, including the ones for the colored stochastic six-vertex and q-boson models. The appendices (which may be of independent interest) include an explanation of how the $U_q(\widehat{\mathfrak{sl}}_{n+1})$ colored stochastic fused vertex model degenerates to the log-gamma polymer, and an effective rate of convergence of the colored stochastic six-vertex model to the colored ASEP.
</description>
<pubDate>Thu, 07 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159421</guid>
<dc:date>2024-11-07T00:00:00Z</dc:date>
</item>
<item>
<title>Turán Problems for Oriented Graphs</title>
<link>https://hdl.handle.net/1721.1/159420</link>
<description>Turán Problems for Oriented Graphs
Grzesik, Andrzej; Jaworska, Justyna; Kielak, Bartłomiej; Novik, Aliaksandra; Ślusarczyk, Tomasz
A classical Turán problem asks for the maximum possible number of edges in a graph of a given order that does not contain a particular graph H as a subgraph. It is well-known that the chromatic number of H is the graph parameter which describes the asymptotic behavior of this maximum. Here, we consider an analogous problem for oriented graphs, where compressibility plays the role of the chromatic number. Since any oriented graph having a directed cycle is not contained in any transitive tournament, it makes sense to consider only acyclic oriented graphs as forbidden subgraphs. We provide basic properties of the compressibility, show that the compressibility of acyclic oriented graphs with out-degree at most 2 is polynomial with respect to the maximum length of a directed path, and that the same holds for a larger out-degree bound if the Erdős–Hajnal conjecture is true. Additionally, generalizing previous results on powers of paths and arbitrary orientations of cycles, we determine the compressibility of acyclic oriented graphs with restricted distances of vertices to sinks and sources.
</description>
<pubDate>Thu, 29 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159420</guid>
<dc:date>2024-02-29T00:00:00Z</dc:date>
</item>
<item>
<title>Hampshire Sheep as a Large-Animal Model for Cochlear Implantation</title>
<link>https://hdl.handle.net/1721.1/159419</link>
<description>Hampshire Sheep as a Large-Animal Model for Cochlear Implantation
Waring, Nicholas A.; Chern, Alexander; Vilarello, Brandon J.; Cheng, Yew S.; Zhou, Chaoqun; Lang, Jeffrey H.; Olson, Elizabeth S.; Nakajima, Hideko H.
Background Sheep have been proposed as a large-animal model for studying cochlear implantation. However, prior sheep studies report that the facial nerve (FN) obscures the round window membrane (RWM), requiring FN sacrifice or a retrofacial opening to access the middle-ear cavity posterior to the FN for cochlear implantation. We investigated surgical access to the RWM in Hampshire sheep compared to Suffolk-Dorset sheep and the feasibility of Hampshire sheep for cochlear implantation via a facial recess approach. Methods Sixteen temporal bones from cadaveric sheep heads (ten Hampshire and six Suffolk-Dorset) were dissected to gain surgical access to the RWM via an extended facial recess approach. RWM visibility was graded using St. Thomas’ Hospital (STH) classification. Cochlear implant (CI) electrode array insertion was performed in two Hampshire specimens. Micro-CT scans were obtained for each temporal bone, with confirmation of appropriate electrode array placement and segmentation of the inner ear structures. Results Visibility of the RWM on average was 83% in Hampshire specimens and 59% in Suffolk-Dorset specimens (p = 0.0262). Hampshire RWM visibility was Type I (100% visibility) for three specimens and Type IIa (&gt; 50% visibility) for seven specimens. Suffolk-Dorset RWM visibility was Type IIa for four specimens and Type IIb (&lt; 50% visibility) for two specimens. FN appeared to course more anterolaterally in Suffolk-Dorset specimens. Micro-CT confirmed appropriate CI electrode array placement in the scala tympani without apparent basilar membrane rupture. Conclusions Hampshire sheep appear to be a suitable large-animal model for CI electrode insertion via an extended facial recess approach without sacrificing the FN. In this small sample, Hampshire specimens had improved RWM visibility compared to Suffolk-Dorset. 
Thus, Hampshire sheep may be superior to other breeds for ease of cochlear implantation, with FN and facial recess anatomy more similar to humans.
</description>
<pubDate>Mon, 15 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159419</guid>
<dc:date>2024-04-15T00:00:00Z</dc:date>
</item>
<item>
<title>An Implantable Piezofilm Middle Ear Microphone: Performance in Human Cadaveric Temporal Bones</title>
<link>https://hdl.handle.net/1721.1/159418</link>
<description>An Implantable Piezofilm Middle Ear Microphone: Performance in Human Cadaveric Temporal Bones
Zhang, John Z.; Graf, Lukas; Banerjee, Annesya; Yeiser, Aaron; McHugh, Christopher I.; Kymissis, Ioannis; Lang, Jeffrey H.; Olson, Elizabeth S.; Nakajima, Hideko H.
Purpose One of the major reasons that totally implantable cochlear implants are not readily available is the lack of good implantable microphones. An implantable microphone has the potential to provide a range of benefits over external microphones for cochlear implant users, including the filtering ability of the outer ear, cosmetics, and usability in all situations. This paper presents results from experiments in human cadaveric ears of a piezofilm microphone concept under development as a possible component of a future implantable microphone system for use with cochlear implants. This microphone is referred to here as a drum microphone (DrumMic) that senses the robust and predictable motion of the umbo, the tip of the malleus. Methods Performance was measured for five DrumMics inserted in four different human cadaveric temporal bones. Sensitivity, linearity, bandwidth, and equivalent input noise were measured during these experiments using a sound stimulus and measurement setup. Results The sensitivity of the DrumMics was found to be tightly clustered across different microphones and ears despite differences in umbo and middle ear anatomy. The DrumMics were shown to behave linearly across a large dynamic range (46 dB SPL to 100 dB SPL) across a wide bandwidth (100 Hz to 8 kHz). The equivalent input noise (over a bandwidth of 0.1–10 kHz) of the DrumMic and amplifier referenced to the ear canal was measured to be about 54 dB SPL in the temporal bone experiment and estimated to be 46 dB SPL after accounting for the pressure gain of the outer ear. Conclusion The results demonstrate that the DrumMic behaves robustly across ears and fabrication. The equivalent input noise performance (related to the lowest level of sound measurable) was shown to approach that of commercial hearing aid microphones. 
To advance this demonstration of the DrumMic concept to a future prototype implantable in humans, work on encapsulation, biocompatibility, and connectorization will be required.
</description>
<pubDate>Thu, 18 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159418</guid>
<dc:date>2024-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Explaining deep multi-class time series classifiers</title>
<link>https://hdl.handle.net/1721.1/159417</link>
<description>Explaining deep multi-class time series classifiers
Doddaiah, Ramesh; Parvatharaju, Prathyush S.; Rundensteiner, Elke; Hartvigsen, Thomas
Explainability helps users trust deep learning solutions for time series classification. However, existing explainability methods for multi-class time series classifiers focus on one class at a time, ignoring relationships between the classes. Instead, when a classifier is choosing between many classes, an effective explanation must show what sets the chosen class apart from the rest. We now formalize this notion, studying the open problem of class-specific explainability for deep time series classifiers, a challenging and impactful problem setting. We design a novel explainability method, DEMUX, which learns saliency maps for explaining deep multi-class time series classifiers by adaptively ensuring that its explanation spotlights the regions in an input time series that the model uses specifically for its predicted class. DEMUX adopts a gradient-based approach composed of three interdependent modules that combine to generate consistent, class-specific saliency maps that remain faithful to the classifier’s behavior yet are easily understood by end users. We demonstrate that DEMUX outperforms nine state-of-the-art alternatives on seven popular datasets when explaining two types of deep time series classifiers. We analyze runtime performance, show the impacts of hyperparameter selection, and introduce a detailed study of perturbation methods for time series. Further, through a case study, we demonstrate that DEMUX’s explanations indeed highlight what separates the predicted class from the others in the eyes of the classifier.
</description>
<pubDate>Mon, 04 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159417</guid>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Technology, liberty, and guardrails</title>
<link>https://hdl.handle.net/1721.1/159416</link>
<description>Technology, liberty, and guardrails
Mills, Kevin
Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is not something we should welcome. I argue instead that guardrails should be implemented for only two reasons: to prevent accidental misuse of the technology, and as a proportionate means of preventing people from using the technology to violate other people’s rights. If I’m right, then we may have to get more comfortable with developers releasing technologies that can, and to some extent inevitably will, be misused; people using technologies in ways we disagree with is one of the costs of liberty, but it is a cost we have excellent reasons to bear.
</description>
<pubDate>Sat, 21 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159416</guid>
<dc:date>2024-12-21T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced photocatalytic degradation of rhodamine B dye (RhB), fluorescein dye (Flu), and their mixture using AgI/SnO2 photocatalyst</title>
<link>https://hdl.handle.net/1721.1/159415</link>
<description>Enhanced photocatalytic degradation of rhodamine B dye (RhB), fluorescein dye (Flu), and their mixture using AgI/SnO2 photocatalyst
Alaskary, Saddam A.; El-Shahat, M. F.; Ahmed, M. A.; Elmahgary, Maryam G.
AgI/SnO2 composites, synthesized via the sol–gel method, emerge as highly efficient photocatalysts for degrading rhodamine B (RhB), fluorescein (Flu), and their mixtures, shifting photocatalytic activity into the visible spectrum. Characterized by XRD, FESEM, DRS, EDX, mapping, TEM, BET, and XPS, these nanocomposites, especially with 15% AgI, show a remarkable increase in photocatalytic efficiency for Flu, achieving a rate constant of 0.0189 min−1, which is triple that of pure SnO2 at 0.0061 min−1. The optimal degradation of Flu, RhB, and their mixture occurs with 0.1 g of the 15 wt% AgI/SnO2 composite. This enhancement is attributed to the Z-scheme mechanism facilitated by the small energy gap between AgI and SnO2 conduction bands, effectively minimizing electron–hole recombination and boosting photocatalytic performance through the generation of superoxide and hydroxyl radicals. These findings position AgI/SnO2 composites as promising candidates for treating both cationic and anionic dye pollutants.
</description>
<pubDate>Wed, 12 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159415</guid>
<dc:date>2024-06-12T00:00:00Z</dc:date>
</item>
<item>
<title>Self-healing mechanisms for Ge–Sb–S chalcogenide glasses upon gamma irradiation</title>
<link>https://hdl.handle.net/1721.1/159412</link>
<description>Self-healing mechanisms for Ge–Sb–S chalcogenide glasses upon gamma irradiation
Kang, Myungkoo; Sohn, Byoung-Uk; Du, Qingyang; Ma, Danhao; Pujari, Ruturaj; Sisken, Laura; Blanco, Cesar; Goncalves, Claudia; Arias, Chanelle; Zachariou, Anna; Yadav, Anupama; Lynch, Patrick E.; Lee, Jonathan; Novak, Spencer; Schwarz, Casey M.
We report atomistic mechanisms that directly correlate the time-dependent optical responses of bulk Ge23Sb7S70 chalcogenide glasses to their metastable structural defects created and subsequently annihilated following gamma irradiation. These defects are characterized by an irradiation-induced increase in the concentration of edge-shared GeS4/2 tetrahedra bonding units, which gradually decreases to a pre-irradiation level during recovery, thus illustrating the glass’s metastable behavior. This time-dependent structural change gives rise to the evolution of the glass’s mass density that correspondingly induces a change and subsequent relaxation of linear refractive index and bandgap energy. Concurrent with this evolution in linear optical properties, the glass’s nonlinear response is found to be unaffected, likely due to a counter effect associated with the glass network’s free electrons. Impact statement: Our work is the first study to employ a combined theoretical-experimental approach to the quantitative processing–structure–property relationship correlating the time-dependent structural and linear/nonlinear optical responses of chalcogenide Ge–Sb–S bulk glasses to their metastable topological coordination defects. These defects are created upon gamma-ray exposure and subsequently undergo relaxation at room temperature. The novelty of our study is that multifaceted aspects of such a key infrared chalcogenide glass, including optical, electronic, morphological, chemical, and microstructural properties, were monitored and cross-correlated as a function of time following gamma irradiation in order to identify origins behind the material system’s behavior as compared to the base unirradiated material. This is, to our knowledge, the first-ever integrated approach (summarizing pre- and postexposure properties on the same samples) to the phenomenon. 
The behavior in metastable bulk chalcogenide glasses serves as a key cornerstone that will enable the material system to be deployed as robust, reversible radiation sensors in extreme environments such as space and ground-based radioactive facilities where gamma rays are characteristically abundant. Findings in our paper may shed light on the lingering question of the microscopic origin behind the self-healing process in chalcogenide glasses.
</description>
<pubDate>Wed, 17 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159412</guid>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>The Role of Virtual Reality and Artificial Intelligence in Cognitive Pain Therapy: A Narrative Review</title>
<link>https://hdl.handle.net/1721.1/159410</link>
<description>The Role of Virtual Reality and Artificial Intelligence in Cognitive Pain Therapy: A Narrative Review
Mazzolenis, Maria V.; Mourra, Gabrielle N.; Moreau, Sacha; Mazzolenis, Maria E.; Cerda, Ivo H.; Vega, Julio; Khan, James S.; Thérond, Alexandra
Purpose of Review This review investigates the roles of artificial intelligence (AI) and virtual reality (VR) in enhancing cognitive pain therapy for chronic pain management. The work assesses current research, outlines benefits and limitations and examines their potential integration into existing pain management methods. Recent Findings Advances in VR have shown promise in chronic pain management through immersive cognitive therapy exercises, with evidence supporting VR's effectiveness in symptom reduction. AI's personalization of treatment plans and its support for mental health through AI-driven avatars are emerging trends. The integration of AI in hybrid programs indicates a future with real-time adaptive technology tailored to individual needs in chronic pain management. Summary Incorporating AI and VR into chronic pain cognitive therapy represents a promising approach to enhance management by leveraging VR's immersive experiences and AI's personalized tactics, aiming to improve patient engagement and outcomes. Nonetheless, further empirical studies are needed to standardize methodologies, compare these technologies to traditional therapies, and fully realize their clinical potential.
</description>
<pubDate>Sat, 08 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159410</guid>
<dc:date>2024-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient Automation of Neural Network Design: A Survey on Differentiable Neural Architecture Search</title>
<link>https://hdl.handle.net/1721.1/159409</link>
<description>Efficient Automation of Neural Network Design: A Survey on Differentiable Neural Architecture Search
Heuillet, Alexandre; Nasser, Ahmad; Arioui, Hichem; Tabia, Hedi
In the past few years, Differentiable Neural Architecture Search (DNAS) rapidly imposed itself as the trending approach to automate the discovery of deep neural network architectures. This rise is mainly due to the popularity of DARTS (Differentiable ARchitecTure Search), one of the first major DNAS methods. In contrast with previous works based on Reinforcement Learning or Evolutionary Algorithms, DNAS is faster by several orders of magnitude and uses fewer computational resources. In this comprehensive survey, we focused specifically on DNAS and reviewed recent approaches in this field. Furthermore, we proposed a novel challenge-based taxonomy to classify DNAS methods. We also discussed the contributions brought to DNAS in the past few years and its impact on the global NAS field. Finally, we concluded by giving some insights into future research directions for the DNAS field.
</description>
<pubDate>Fri, 28 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159409</guid>
<dc:date>2024-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>OmniCharger: CNN-based Hand Gesture Interface to Operate an Electric Car Charging Robot through Teleconference</title>
<link>https://hdl.handle.net/1721.1/159407</link>
<description>OmniCharger: CNN-based Hand Gesture Interface to Operate an Electric Car Charging Robot through Teleconference
Altamirano Cabrera, Miguel; Rakhmatulin, Viktor; Fedoseev, Aleksey; Sautenkov, Oleg; Alyounes, Oussama; Puchkov, Andrei; Tsetserukou, Dzmitry
The automation of the car charging process is motivated by the rapid development of technologies for self-driving cars and the increasing importance of ecological transportation units. However, it remains challenging to precisely position the charger plug autonomously due to the sensitivity of computer vision (CV) algorithms to lighting and weather conditions. We introduce a novel robotic teleoperation system based on hand gesture recognition through teleconferencing software. The users, connected by teleconference, use their hand gestures to teleoperate the electric plug located on the collaborative robot end-effector. We conducted a user study to evaluate the system performance and suitability using OmniCharger and two baseline interfaces (a UR10 Teach Pendant and a Logitech F710 Wireless Gamepad). Except for two trials, all the users were able to locate the plug within a 5 cm target using the interfaces. The distance to the target and the orientation error did not present statistically significant differences in the use of the three interfaces. The NASA TLX questionnaire results showed low values in all the sub-classes, the SUS results rated the usability of the proposed interface above average (68%), and the UEQ showed excellent performance of the OmniCharger interface in the attractiveness, stimulation, and novelty attributes.
</description>
<pubDate>Wed, 25 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159407</guid>
<dc:date>2024-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Brain Logger: A Web3 Based Decentralized Social Network</title>
<link>https://hdl.handle.net/1721.1/159406</link>
<description>Brain Logger: A Web3 Based Decentralized Social Network
Plishan, Yaroslav; Madhwal, Yash; Yanovich, Yury
The social network Twitter has a key limitation: it is centralized, allowing Twitter Corporation to control data substitution and set moderation rules. This issue can only be resolved through the complete decentralization of the platform. To achieve this, blockchain technology is crucial as it enables the operation of decentralized applications with smart contracts serving as the foundation for system-user interaction. In this paper, we propose the minimum viable implementation of a similar social network called Brain Logger, which stores all necessary data in the NEAR blockchain network. Introducing a small fee for modifying data on the network ensures a solution to the aforementioned problems. The implementation includes a smart contract for blockchain networks and a web application frontend. The smart contract was developed using the NEAR software development kit in Rust, while the frontend was created in JavaScript using the React framework and Redux for local data storage. The code can be accessed on GitHub.
ICBTA 2023, December 15–17, 2023, Xi’an, China
</description>
<pubDate>Fri, 15 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159406</guid>
<dc:date>2023-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>Web3 Based Digital Rights Management in the Music Industry</title>
<link>https://hdl.handle.net/1721.1/159405</link>
<description>Web3 Based Digital Rights Management in the Music Industry
Cherdakov, Mikhail; Kudashkin, Aleksey; Madhwal, Yash; Yanovich, Yury
The music industry faces numerous challenges regarding tracking and distributing royalties. This article explores how a decentralized and transparent system for managing music royalties can be created with a smart contract implementation on the Polygon network, utilizing Chainlink oracles. By enhancing the efficiency and transparency of royalty distribution, this groundbreaking smart contract has the potential to revolutionize the music industry and stand as a blueprint for diverse sectors grappling with analogous challenges in intellectual property rights and revenue allocation. At its core, this smart contract, functioning on the bedrock of blockchain, ensures meticulous oversight of financial transactions between consumers and content creators. It streamlines the process and facilitates the collection of user view statistics crucial for strategic fund allocation. With the potential impact of transaction fees on the user experience of blockchain payments, a new solution has been developed that includes a payment channel that reserves significant amounts of funds and enables off-chain payments through specialized messages. By combining these technological advancements, we are taking a transformative step towards a more seamless and fair future in the complex world of royalty management. The application is structured into conventional backend and frontend segments, with the code available on Github.
ICBTA 2023, December 15–17, 2023, Xi’an, China
</description>
<pubDate>Fri, 15 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159405</guid>
<dc:date>2023-12-15T00:00:00Z</dc:date>
</item>
<item>
<title>Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration</title>
<link>https://hdl.handle.net/1721.1/159404</link>
<description>Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration
Qian, Crystal; Wexler, James
Although recent developments in generative AI have greatly enhanced the capabilities of conversational agents such as Google’s Bard or OpenAI’s ChatGPT, it’s unclear whether the usage of these agents aids users across various contexts. To better understand how access to conversational AI affects productivity and trust, we conducted a mixed-methods, task-based user study, observing 76 software engineers (N=76) as they completed a programming exam with and without access to Bard. Effects on performance, efficiency, satisfaction, and trust vary depending on user expertise, question type (open-ended "solve" questions vs. definitive "search" questions), and measurement type (demonstrated vs. self-reported). Our findings include evidence of automation complacency, increased reliance on the AI over the course of the task, and increased performance for novices on “solve”-type questions when using the AI. We discuss common behaviors, design recommendations, and impact considerations to improve collaborations with conversational AI.
IUI ’24, March 18–21, 2024, Greenville, SC, USA
</description>
<pubDate>Mon, 18 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159404</guid>
<dc:date>2024-03-18T00:00:00Z</dc:date>
</item>
<item>
<title>A systems approach to evaluating the air quality co-benefits of US carbon policies</title>
<link>https://hdl.handle.net/1721.1/159403</link>
<description>A systems approach to evaluating the air quality co-benefits of US carbon policies
Thompson, Tammy M; Rausch, Sebastian; Saari, Rebecca K; Selin, Noelle E
Because human activities emit greenhouse gases (GHGs) and conventional air pollutants from common sources, policy designed to reduce GHGs can have co-benefits for air quality that may offset some or all of the near-term costs of GHG mitigation. We present a systems approach to quantify air quality co-benefits of US policies to reduce GHG (carbon) emissions. We assess health-related benefits from reduced ozone and particulate matter (PM2.5) by linking three advanced models, representing the full pathway from policy to pollutant damages. We also examine the sensitivity of co-benefits to key policy-relevant sources of uncertainty and variability. We find that monetized human health benefits associated with air quality improvements can offset 26-1,050% of the cost of US carbon policies. More flexible policies that minimize costs, such as cap-and-trade standards, have larger net co-benefits than policies that target specific sectors (electricity and transportation). Although air quality co-benefits can be comparable to policy costs for present-day air quality and near-term US carbon policies, potential co-benefits rapidly diminish as carbon policies become more stringent.
</description>
<pubDate>Sun, 24 Aug 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159403</guid>
<dc:date>2014-08-24T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-Domain Latent Factors Sharing via Implicit Matrix Factorization</title>
<link>https://hdl.handle.net/1721.1/159402</link>
<description>Cross-Domain Latent Factors Sharing via Implicit Matrix Factorization
Samra, Abdulaziz; Frolov, Evgeny; Vasilev, Alexey; Grigorevskiy, Alexander; Vakhrushev, Anton
Data sparsity has been one of the long-standing problems for recommender systems. One of the solutions to mitigate this issue is to exploit knowledge available in other source domains. However, many cross-domain recommender systems introduce a complex architecture that makes them less scalable in practice. On the other hand, matrix factorization methods are still considered to be strong baselines for single-domain recommendations. In this paper, we introduce CDIMF, a model that extends standard implicit matrix factorization with ALS to cross-domain scenarios. We apply the Alternating Direction Method of Multipliers to learn shared latent factors for overlapped users while factorizing the interaction matrix. In a dual-domain setting, experiments on industrial datasets demonstrate the competitive performance of CDIMF in both cold-start and warm-start scenarios. The proposed model can outperform most other recent cross-domain and single-domain models. We also provide the code to reproduce experiments on GitHub.
RecSys ’24, October 14–18, 2024, Bari, Italy
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159402</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Scalable Cross-Entropy Loss for Sequential Recommendations with Large Item Catalogs</title>
<link>https://hdl.handle.net/1721.1/159401</link>
<description>Scalable Cross-Entropy Loss for Sequential Recommendations with Large Item Catalogs
Mezentsev, Gleb; Gusak, Danil; Oseledets, Ivan; Frolov, Evgeny
Scalability plays a crucial role in productionizing modern recommender systems. Even lightweight architectures may suffer from high computational overload due to intermediate calculations, limiting their practicality in real-world applications. Specifically, applying full Cross-Entropy (CE) loss often yields state-of-the-art performance in terms of recommendation quality. Still, it suffers from excessive GPU memory utilization when dealing with large item catalogs. This paper introduces a novel Scalable Cross-Entropy (SCE) loss function in the sequential learning setup. It approximates the CE loss for datasets with large-size catalogs, enhancing both time efficiency and memory usage without compromising recommendation quality. Unlike traditional negative sampling methods, our approach utilizes a selective GPU-efficient computation strategy, focusing on the most informative elements of the catalog, particularly those most likely to be false positives. This is achieved by approximating the softmax distribution over a subset of the model outputs through the maximum inner product search. Experimental results on multiple datasets demonstrate the effectiveness of SCE in reducing peak memory usage by a factor of up to 100 compared to the alternatives, retaining or even exceeding their metrics values. The proposed approach also opens new perspectives for large-scale developments in different domains, such as large language models.
RecSys ’24, October 14–18, 2024, Bari, Italy
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159401</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>From Variability to Stability: Advancing RecSys Benchmarking Practices</title>
<link>https://hdl.handle.net/1721.1/159400</link>
<description>From Variability to Stability: Advancing RecSys Benchmarking Practices
Shevchenko, Valeriy; Belousov, Nikita; Vasilev, Alexey; Zholobov, Vladimir; Sosedka, Artyom; Semenova, Natalia; Volodkevich, Anna; Savchenko, Andrey; Zaytsev, Alexey
In the rapidly evolving domain of Recommender Systems (RecSys), new algorithms frequently claim state-of-the-art performance based on evaluations over a limited set of arbitrarily selected datasets. However, this approach may fail to holistically reflect their effectiveness due to the significant impact of dataset characteristics on algorithm performance. Addressing this deficiency, this paper introduces a novel benchmarking methodology to facilitate a fair and robust comparison of RecSys algorithms, thereby advancing evaluation practices. By utilizing a diverse set of 30 open datasets, including two introduced in this work, and evaluating 11 collaborative filtering algorithms across 9 metrics, we critically examine the influence of dataset characteristics on algorithm performance. We further investigate the feasibility of aggregating outcomes from multiple datasets into a unified ranking. Through rigorous experimental analysis, we validate the reliability of our methodology under the variability of datasets, offering a benchmarking strategy that balances quality and computational demands. This methodology enables a fair yet effective means of evaluating RecSys algorithms, providing valuable guidance for future research endeavors.
KDD ’24, August 25–29, 2024, Barcelona, Spain
</description>
<pubDate>Sun, 25 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159400</guid>
<dc:date>2024-08-25T00:00:00Z</dc:date>
</item>
<item>
<title>RECE: Reduced Cross-Entropy Loss for Large-Catalogue Sequential Recommenders</title>
<link>https://hdl.handle.net/1721.1/159399</link>
<description>RECE: Reduced Cross-Entropy Loss for Large-Catalogue Sequential Recommenders
Gusak, Danil; Mezentsev, Gleb; Oseledets, Ivan; Frolov, Evgeny
Scalability is a major challenge in modern recommender systems. In sequential recommendations, full Cross-Entropy (CE) loss achieves state-of-the-art recommendation quality but consumes excessive GPU memory with large item catalogs, limiting its practicality. Using a GPU-efficient locality-sensitive hashing-like algorithm for approximating the large tensor of logits, this paper introduces a novel RECE (REduced Cross-Entropy) loss. RECE significantly reduces memory consumption while allowing one to enjoy the state-of-the-art performance of full CE loss. Experimental results on various datasets show that RECE cuts training peak memory usage by up to 12 times compared to existing methods while retaining or exceeding performance metrics of CE loss. The approach also opens up new possibilities for large-scale applications in other domains.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159399</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Product Mixing in Compact Lie Groups</title>
<link>https://hdl.handle.net/1721.1/159398</link>
<description>Product Mixing in Compact Lie Groups
Ellis, David; Kindler, Guy; Lifshitz, Noam; Minzer, Dor
If G is a group, we say a subset S of G is product-free if the equation xy=z has no solutions with x, y, z ∈ S. In 1985, Babai and Sós asked, for a finite group G, how large a subset S ⊆ G can be if it is product-free. The main tool (hitherto) for studying this problem has been the notion of a quasirandom group. For D ∈ ℕ, a group G is said to be D-quasirandom if the minimal dimension of a nontrivial complex irreducible representation of G is at least D. Gowers showed that in a D-quasirandom finite group G, the maximal size of a product-free set is at most |G|/D^{1/3}. This disproved a longstanding conjecture of Babai and Sós from 1985. For the special unitary group, G = SU(n), Gowers observed that his argument yields an upper bound of n^{−1/3} on the measure of a measurable product-free subset. In this paper, we improve Gowers’ upper bound to exp(−cn^{1/3}), where c&gt;0 is an absolute constant. In fact, we establish something stronger, namely, product-mixing for measurable subsets of SU(n) with measure at least exp(−cn^{1/3}); for this product-mixing result, the n^{1/3} in the exponent is sharp. Our approach involves introducing novel hypercontractive inequalities, which imply that the non-Abelian Fourier spectrum of the indicator function of a small set concentrates on high-dimensional irreducible representations. Our hypercontractive inequalities are obtained via methods from representation theory, harmonic analysis, random matrix theory and differential geometry. We generalize our hypercontractive inequalities from SU(n) to an arbitrary D-quasirandom compact connected Lie group for D at least an absolute constant, thereby extending our results on product-free sets to such groups. We also demonstrate various other applications of our inequalities to geometry (viz., non-Abelian Brunn-Minkowski type inequalities), mixing times, and the theory of growth in compact Lie groups.
A subsequent work due to Arunachalam, Girish and Lifshitz uses our inequalities to establish new separation results between classical and quantum communication complexity.
</description>
<pubDate>Mon, 10 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159398</guid>
<dc:date>2024-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>Graphene-driven growth of large-area ultrathin Mo2C</title>
<link>https://hdl.handle.net/1721.1/159397</link>
<description>Graphene-driven growth of large-area ultrathin Mo2C
Okay, Elif; Caylan, Omer; Atli, Eren; Adabasi, Gokay; Baykara, Mehmet Z.; Gogotsi, Yury; Cambaz Buke, Goknur
Two-dimensional transition metal carbides, particularly chemical vapor deposition (CVD)-grown molybdenum carbide (Mo2C), are promising for next-generation electronic applications. However, achieving large-area, high-quality single crystals with controlled thickness remains challenging due to the non-self-limiting nature of conventional CVD. Moreover, Mo2C synthesis is often accompanied by undesired graphene coverage, necessitating additional processing steps that can degrade its electronic properties. Here, we present a graphene-driven approach that enables the direct synthesis of ultrathin Mo2C on copper without an external carbon source. Through systematic comparative experiments, we elucidate the role of graphene in Mo2C synthesis via CVD and develop a novel method marked as Process Route 3, where graphene serves as the sole carbon source, eliminating the need for CH4. We demonstrate that annealing a layered Mo/Cu/graphene film at 1100 °C enables the complete transformation of graphene into Mo2C. At this temperature, graphene tearing exposes a fresh liquid Cu surface. Mo atoms diffuse from the underlying Mo foil through molten Cu and react with carbon coming from the graphene layer via surface diffusion. This process enables preferential lateral growth, allowing Mo2C crystals to expand with minimal impingement, resulting in thin (~ 10 nm), well-faceted Mo2C domains with lateral sizes reaching up to 60 µm. X-ray diffraction and transmission electron microscopy confirm the high-quality orthorhombic structure of the synthesized Mo2C, while Raman spectroscopy verifies the complete conversion of graphene, yielding graphene-free Mo2C. This study provides a deeper understanding of metal carbide formation via CVD, overcomes key limitations of conventional approaches, and offers a viable route toward the scalable fabrication of large-area Mo2C with potential applications in high-performance electronics.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159397</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Criticality and Magnetic Phases of Ising Shastry–Sutherland Candidate Holmium Tetraboride</title>
<link>https://hdl.handle.net/1721.1/159396</link>
<description>Criticality and Magnetic Phases of Ising Shastry–Sutherland Candidate Holmium Tetraboride
Khundzakishvili, Guga; Belbase, Bishnu Prasad; Mahendran, Pravin; Zhang, Kevin; Xu, Hanjing; Stoyanoff, Eliana; Checkelsky, Joseph George; Liu, Yaohua; Ye, Linda; Banerjee, Arnab
Frustrated magnetic systems arising in geometrically constrained lattices represent rich platforms for exploring unconventional phases of matter, including fractional magnetization plateaus, incommensurate orders and complex domain dynamics. However, determining the microscopic spin configurations that stabilize such phases is a key challenge, especially when in-plane and out-of-plane spin components coexist and compete. Here, we combine neutron scattering and magnetic susceptibility experiments with simulations to investigate the emergence of field-induced fractional plateaus and the related criticality in the frustrated magnet holmium tetraboride (HoB&lt;sub&gt;4&lt;/sub&gt;), a member of the family of rare earth tetraborides that crystallize in a Shastry–Sutherland lattice in the ab plane. We focus on the interplay between classical and quantum criticality near phase boundaries, as well as the role of material defects in the stabilization of the ordered phases. We find that simulations using classical annealing can explain certain observed features in the experimental Laue diffraction and the origin of multiple magnetization plateaus. Our results show that defects and out-of-plane interactions play an important role and can guide the route towards resolving microscopic spin textures in highly frustrated magnets.
</description>
<pubDate>Mon, 26 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159396</guid>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</item>
<item>
<title>Load asymptotics and dynamic speed optimization for the greenest path problem: a comprehensive analysis</title>
<link>https://hdl.handle.net/1721.1/159395</link>
<description>Load asymptotics and dynamic speed optimization for the greenest path problem: a comprehensive analysis
Moradi, Poulad; Arts, Joachim; Velázquez-Martínez, Josué C.
We study the effect of using high-resolution elevation data on the selection of the most fuel-efficient (greenest) path for different trucks in various urban environments. We adapt a variant of the Comprehensive Modal Emission Model (CMEM) to show that the optimal speed and the greenest path are slope dependent (dynamic). When there are no elevation changes in a road network, the most fuel-efficient path is the shortest path with a constant (static) optimal speed throughout. However, if the network is not flat, then the shortest path is not necessarily the greenest path, and the optimal driving speed is dynamic. We prove that the greenest path converges to an asymptotic greenest path as the payload approaches infinity and that this limiting path is attained for a finite load. In a set of extensive numerical experiments, we benchmark the CO2 emissions reduction of our dynamic speed and the greenest path policies against policies that ignore elevation data. We use the geo-spatial data of 25 major cities across 6 continents. We observe numerically that the greenest path quickly diverges from the shortest path and attains the asymptotic greenest path even for moderate payloads. Based on an analysis of variance, the main determinants of the CO2 emissions reduction potential are the variation of the road gradients along the shortest path as well as the relative elevation of the source from the target. Using speed data estimates for rush hour in New York City, we test CO2 emissions reduction by comparing the greenest paths with optimized speeds against the fastest paths with traffic speed. We observe that selecting the greenest paths instead of the fastest paths can significantly reduce CO2 emissions. Additionally, our results show that while speed optimization on uphill arcs can significantly help CO2 reduction, the potential to leverage gravity for acceleration on downhill arcs is limited due to traffic congestion.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159395</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Science and the city: critical reflections on connecting the two</title>
<link>https://hdl.handle.net/1721.1/159394</link>
<description>Science and the city: critical reflections on connecting the two
Paul, Abhijit; Kotsopoulos, Sotirios D.; Aber, Jasmin; Rajah, Ratnam
Many aspects of city planning and design connected to science deserve justification. We are interested in learning what key performance metrics are used in developing a model of the built environment. What is the basis for selecting or deciding on an effective model? Are we able to test the resilience of the model? How sensitive are these models to design features from outline planning to detailed planning applications? And so on. Building on the critical insights from Michael Batty’s recent dialogue with the authors, the paper looks into these questions through the lenses that city science is seldom seen to deal with, such as the circular economy, social justice and equity. The conclusions suggest that the failure to apply knowledge synchronously and negligence to promote cross-disciplinary collaboration hamper progress and innovation. Successful integration of urban initiatives and a correct understanding of city science must require a comprehensive and balanced approach, engaging in enhanced communication, participatory planning, education and outreach, interdisciplinary teams, data integration with technology, regular reviews and adjustments, and conducive policy frameworks. Engaging in these integrations signifies aligning city science approaches to derive development solutions with various urban initiatives, ensuring they are relevant, inclusive, and adaptable to changing urban dynamics.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159394</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Homology in combinatorial refraction billiards</title>
<link>https://hdl.handle.net/1721.1/159393</link>
<description>Homology in combinatorial refraction billiards
Defant, Colin; Liu, Derek
Given a graph G with vertex set {1, …, n}, we can project the graphical arrangement of G to an (n−1)-dimensional torus to obtain a toric hyperplane arrangement. Adams, Defant, and Striker constructed a toric combinatorial refraction billiard system in which beams of light travel in the torus, refracting (with refraction coefficient −1) whenever they hit one of the toric hyperplanes in this toric arrangement. Each billiard trajectory in this system is periodic. We adopt a topological perspective and view the billiard trajectories as closed loops in the torus. We say G is ensnaring if all of the billiard trajectories are contractible, and we say G is expelling if none of the billiard trajectories is contractible. Our first main result states that a graph is expelling if and only if it is bipartite. We then provide several necessary conditions and several sufficient conditions for a graph to be ensnaring. For example, we show that the complement of an ensnaring graph cannot have a clique as a connected component. We also discuss ways to construct ensnaring graphs from other ensnaring graphs. For example, gluing two ensnaring graphs at a single vertex always yields another ensnaring graph.
</description>
<pubDate>Wed, 14 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159393</guid>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>On Approximability of Satisfiable k-CSPs: IV</title>
<link>https://hdl.handle.net/1721.1/159392</link>
<description>On Approximability of Satisfiable k-CSPs: IV
Bhangale, Amey; Khot, Subhash; Minzer, Dor
We prove a stability result for general 3-wise correlations over distributions satisfying mild connectivity properties. More concretely, we show that if Σ, Γ and Φ are alphabets of constant size, and µ is a distribution over Σ×Γ×Φ satisfying: (1) the probability of each atom is at least Ω(1), (2) µ is pairwise connected, and (3) µ has no Abelian embeddings into (ℤ,+), then the following holds. Any triplet of 1-bounded functions f: Σ^n → ℂ, g: Γ^n → ℂ, h: Φ^n → ℂ satisfying
|𝔼_{(x,y,z)∼µ^{⊗n}}[f(x)g(y)h(z)]| ≥ ε
must arise from an Abelian group associated with the distribution µ. More specifically, we show that there is an Abelian group (H,+) of constant size such that for any such f, g and h, the function f (and similarly g and h) is correlated with a function of the form f(x) = χ(σ(x_1),…,σ(x_n))·L(x), where σ: Σ → H is some map, χ ∈ Ĥ^{⊗n} is a character, and L: Σ^n → ℂ is a low-degree function with bounded 2-norm.
En route we prove a few additional results that may be of independent interest, such as an improved direct product theorem, as well as a result we refer to as a “restriction inverse theorem” about the structure of functions that, under random restrictions, with noticeable probability have significant correlation with a product function.
In companion papers, we show applications of our results to the fields of Probabilistically Checkable Proofs, as well as various areas in discrete mathematics such as extremal combinatorics and additive combinatorics.
STOC ’24, June 24–28, 2024, Vancouver, BC, Canada
</description>
<pubDate>Mon, 10 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159392</guid>
<dc:date>2024-06-10T00:00:00Z</dc:date>
</item>
<item>
<title>DogSurf: Quadruped Robot Capable of GRU-based Surface Recognition for Blind Person Navigation</title>
<link>https://hdl.handle.net/1721.1/159391</link>
<description>DogSurf: Quadruped Robot Capable of GRU-based Surface Recognition for Blind Person Navigation
Bazhenov, Artem; Berman, Vladimir; Satsevich, Sergei; Shalopanova, Olga; Cabrera, Miguel; Lykov, Artem; Tsetserukou, Dzmitry
This paper introduces DogSurf, a new approach of using quadruped robots to help visually impaired people navigate in the real world. The presented method allows the quadruped robot to detect slippery surfaces, and to use audio and haptic feedback to inform the user when to stop. A state-of-the-art GRU-based neural network architecture with a mean accuracy of 99.925% was proposed for the task of multiclass surface classification for quadruped robots. A dataset was collected on a Unitree Go1 Edu robot. The dataset and code have been posted to the public domain.
HRI 2024, March 11–14, 2024, Boulder, Colorado, USA
</description>
<pubDate>Mon, 11 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159391</guid>
<dc:date>2024-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>End-to-End Graph-Sequential Representation Learning for Accurate Recommendations</title>
<link>https://hdl.handle.net/1721.1/159390</link>
<description>End-to-End Graph-Sequential Representation Learning for Accurate Recommendations
Baikalov, Vladimir; Frolov, Evgeny
Recent recommender system advancements have focused on developing sequence-based and graph-based approaches. Both approaches proved useful in modeling intricate relationships within behavioral data, leading to promising outcomes in personalized ranking and next-item recommendation tasks while maintaining good scalability. However, they capture very different signals from data. While the former approach represents users directly through ordered interactions with recent items, the latter aims to capture indirect dependencies across the interactions graph. This paper presents a novel multi-representational learning framework exploiting these two paradigms’ synergies. Our empirical evaluation on several datasets demonstrates that mutual training of sequential and graph components with the proposed framework significantly improves recommendation performance.
WWW ’24 Companion, May 13–17, 2024, Singapore, Singapore
</description>
<pubDate>Mon, 13 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159390</guid>
<dc:date>2024-05-13T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Learning for Solving and Estimating Dynamic Macro-finance Models</title>
<link>https://hdl.handle.net/1721.1/159389</link>
<description>Deep Learning for Solving and Estimating Dynamic Macro-finance Models
Fan, Benjamin; Qiao, Edward; Jiao, Anran; Gu, Zhouzhou; Li, Wenhao; Lu, Lu
We develop a methodology that utilizes deep learning to simultaneously solve and estimate canonical continuous-time general equilibrium models in financial economics. We illustrate our method in two examples: (1) industrial dynamics of firms and (2) macroeconomic models with financial frictions. Through these applications, we illustrate the advantages of our method: generality, simultaneous solution and estimation, leveraging state-of-the-art machine-learning techniques, and handling large state spaces. The method is versatile and can be applied to a wide variety of problems.
</description>
<pubDate>Fri, 09 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159389</guid>
<dc:date>2024-08-09T00:00:00Z</dc:date>
</item>
<item>
<title>MvfR Shapes Pseudomonas aeruginosa Interactions in Polymicrobial Contexts: Implications for Targeted Quorum-Sensing Inhibition</title>
<link>https://hdl.handle.net/1721.1/159388</link>
<description>MvfR Shapes Pseudomonas aeruginosa Interactions in Polymicrobial Contexts: Implications for Targeted Quorum-Sensing Inhibition
Wheeler, Kelsey M.; Oh, Myung Whan; Fusco, Julianna; Mershon, Aishlinn; Kim, Erin; De Oliveira, Antonia; Rahme, Laurence G.
Infections often occur in complex niches consisting of multiple bacteria. Despite the increasing awareness, there is a fundamental gap in understanding which interactions govern microbial community composition. Pseudomonas aeruginosa is frequently isolated from monomicrobial and polymicrobial human infections. This pathogen forms polymicrobial infections with other ESKAPEE pathogens and defies eradication by conventional therapies. By analyzing the competition within co-cultures of P. aeruginosa and representative secondary pathogens that commonly co-infect patients, we demonstrate the antagonism of P. aeruginosa against other ESKAPEE pathogens and the contribution of this pathogen’s multiple quorum-sensing (QS) systems in these interactions. QS is a highly conserved bacterial cell-to-cell communication mechanism that coordinates collective gene expressions at the population level, and it is also involved in P. aeruginosa virulence. Using a collection of P. aeruginosa QS mutants of the three major systems, LasR/LasI, MvfR/PqsABCDE, and RhlR/RhlI, and mutants of several QS-regulated functions, we reveal that MvfR and, to a lesser extent, LasR and RhlR, control competition between P. aeruginosa and other microbes, possibly through their positive impact on pyoverdine, pyochelin, and phenazine genes. We show that MvfR inhibition alters competitive interspecies interactions and preserves the coexistence of P. aeruginosa with the ESKAPEE pathogens tested while disarming the pathogens’ ability to form biofilm and adhere to lung epithelial cells. Our results highlight the role of MvfR inhibition in modulating microbial competitive interactions across multiple species, while simultaneously attenuating virulence traits. These findings reveal the complexity and importance of QS in interspecies interactions and underscore the impact of the anti-virulence approach in microbial ecology and its importance for treating polymicrobial infections.
</description>
<pubDate>Tue, 20 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159388</guid>
<dc:date>2025-05-20T00:00:00Z</dc:date>
</item>
<item>
<title>Clonable key fobs: Analyzing and breaking RKE protocols</title>
<link>https://hdl.handle.net/1721.1/159387</link>
<description>Clonable key fobs: Analyzing and breaking RKE protocols
Gesteira-Miñarro, Roberto; López, Gregorio; Palacios, Rafael
The automotive industry has been a target for cyber criminals for decades. New regulations have come into force in the automotive industry, and manufacturers must now take cybersecurity into account. One of the most interesting vehicle systems is the Remote Keyless Entry (RKE) system, which allows users to lock and unlock their cars, among other actions, with a remote control integrated in the car key. If this system is compromised, a malicious user could gain access to a vehicle while remaining unnoticed. This paper presents the identification and analysis of a vulnerability in an RKE protocol that can be exploited to gain access to the car at any time, thus cloning the key fob. The reverse-engineering methodology used to uncover the vulnerability is outlined, along with other tested vehicles to show its applicability. A relevant aspect of the research is that only open-source tools and commercially available hardware are needed to perform the analysis. This black-box approach is equally valid for learning RKE protocol features without the need to extract and analyze ECU firmware, which is considerably more expensive. As a result, a detailed analysis of eight protocols from different manufacturers is presented and compared from a cybersecurity point of view, with one of them being totally broken.
</description>
<pubDate>Sat, 31 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159387</guid>
<dc:date>2025-05-31T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamical Characteristics of Isolated Donors, Acceptors, and Complex Defect Centers in Novel ZnO</title>
<link>https://hdl.handle.net/1721.1/159386</link>
<description>Dynamical Characteristics of Isolated Donors, Acceptors, and Complex Defect Centers in Novel ZnO
Talwar, Devki N.; Becla, Piotr
Novel wide-bandgap ZnO, BeO, and ZnBeO materials have recently gained considerable interest due to their stellar optoelectronic properties. These semiconductors are being used in developing high-resolution, flexible, transparent nanoelectronics/photonics and achieving high-power radio frequency modules for sensors/biosensors, photodetectors/solar cells, and resistive random-access memory applications. Despite earlier evidence of attaining p-type wz ZnO with N doping, the problem of achieving reproducible p-type conductivity persists. This issue is linked to charge compensation by intrinsic donors and/or background impurities. In ZnO:Al (Li), the vibrational features observed by infrared and Raman spectroscopy have been ascribed to the presence of isolated AlZn (LiZn) defects, nearest-neighbor (NN) [AlZn−NO] pairs, and second-NN [AlZn−O−LiZn; VZn−O−LiZn] complexes. However, no firm identification has been established. By integrating accurate perturbation models into a realistic Green’s function method, we have meticulously simulated the impurity vibrational modes of AlZn (LiZn) and their bonding to form complexes with dopants as well as intrinsic defects. We strongly feel that these phonon features in doped ZnO will encourage spectroscopists to perform similar measurements to check our theoretical conjectures.
</description>
<pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159386</guid>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Eco-Friendly Synthesis of ZnO Nanoparticles from Natural Agave, Chiku, and Soursop Extracts: A Sustainable Approach to Antibacterial Applications</title>
<link>https://hdl.handle.net/1721.1/159385</link>
<description>Eco-Friendly Synthesis of ZnO Nanoparticles from Natural Agave, Chiku, and Soursop Extracts: A Sustainable Approach to Antibacterial Applications
Channa, G. Mustafa; Iturbe-Ek, Jackeline; Sustaita, Alan O.; Melo-Maximo, Dulce V.; Bhatti, Atiya; Esparza-Sanchez, Juan; Navarro-Lopez, Diego E.; Lopez-Mena, Edgar R.; Sanchez-Lopez, Angelica Lizeth; Lozano, Luis Marcelo
Traditional methods of synthesizing nanoparticles often rely on physical and chemical processes using synthetic hazardous chemicals. In contrast, the rise in green chemistry emphasizes using bioactive compounds from plants for the eco-friendly synthesis of nanostructures. These green synthesis techniques are increasingly recognized for their simplicity, cost-effectiveness, and ability to yield non-toxic by-products, an approach that aligns with sustainable practices. In this research, a straightforward, cheap, environmentally friendly, and sustainable procedure was developed to fabricate Zinc oxide nanoparticles (ZnO-NPs) employing three different pulp extracts: Agave (Agave americana), Chiku (Manilkara zapota), and Soursop (Annona muricata) to serve in the synthesis as capping, reduction, or stabilization agent. Analytical characterization techniques confirmed the successful phytosynthesis of ZnO-NPs, evidenced by significant absorbance peaks of UV-Vis spectra at 362 nm, and the chemical composition of ZnO without noticeable traces of phytochemical residues by carrying out ATR-FTIR analysis. SEM, STEM microscopies, and XRD analysis verified that the ZnO nanoparticles possess spherical geometries and hexagonal crystal structures. The average size of these nanoparticles was around 15.94, 18.08, and 23.32 nm for Agave, Chiku, and Soursop extract-based synthesis, respectively. Additionally, the in vitro antibacterial activity of phytosynthetized ZnO-NPs was evaluated against E. coli and S. aureus, confirming effective bacterial growth inhibition and demonstrating their significant antimicrobial potential.
</description>
<pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159385</guid>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>Refining Zero-Shot Text-to-SQL Benchmarks via Prompt Strategies with Large Language Models</title>
<link>https://hdl.handle.net/1721.1/159384</link>
<description>Refining Zero-Shot Text-to-SQL Benchmarks via Prompt Strategies with Large Language Models
Zhou, Ruikang; Zhang, Fan
Text-to-SQL leverages large language models (LLMs) for natural language database queries, yet existing benchmarks like BIRD (12,751 question–SQL pairs, 95 databases) suffer from inconsistencies (e.g., 30% of queries misalign with SQL outputs) and ambiguities that impair LLM evaluation. This study refines such datasets by distilling logically sound question–SQL pairs and enhancing table schemas, yielding a benchmark of 146 high-complexity tasks across 11 domains. We assess GPT-4o, GPT-4o-Mini, Qwen-2.5-Instruct, Llama 3 70B, DPSK-v3, and O1-Preview in zero-shot scenarios, achieving average accuracies of 51.23%, 41.65%, 44.25%, 47.80%, and 49.10%, respectively, and a peak of 78.08% (O1-Preview). Prompt-based strategies improve performance by up to 4.78%, addressing issues like poor domain adaptability and inconsistent training data interpretation. Error-annotated datasets further reveal LLM limitations. This refined benchmark ensures robust evaluation of logical reasoning, supporting reliable NLP-driven database systems.
</description>
<pubDate>Fri, 09 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159384</guid>
<dc:date>2025-05-09T00:00:00Z</dc:date>
</item>
<item>
<title>Perspectives on monetary policy and the framework review</title>
<link>https://hdl.handle.net/1721.1/159383</link>
<description>Perspectives on monetary policy and the framework review
Evans, Charles; Sack, Brian; Forbes, Kristin
A panel discussed potential desirable changes in the Federal Reserve’s policy framework in light of recent experience. Issues raised included communication about when the effective lower bound is reached, the relative merits of flexible average inflation targeting and flexible inflation targeting, and the use of QE and QT policy. Emphasis was placed on the need to carefully consider the weights placed on the inflation and employment mandates, especially in light of supply shocks, the need to carefully distinguish between balance sheet actions designed to stabilize markets and those used as stimulus in an effective lower bound environment, and the need to recognize the price level impact of inflation running higher than target.
</description>
<pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159383</guid>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Antrodia cinnamomea Residual Biomass-Based Hydrogel as a Novel UV-Protective and Antimicrobial Wound-Healing Dressing for Biomedical Use</title>
<link>https://hdl.handle.net/1721.1/159382</link>
<description>Antrodia cinnamomea Residual Biomass-Based Hydrogel as a Novel UV-Protective and Antimicrobial Wound-Healing Dressing for Biomedical Use
Xu, Chunyuhang; Chen, Siyu; Liu, Tiange; Zhu, Haowen; Kuo, Chien-Liang; Zhou, Zhuoyu; Chen, Guo; Chin, Fion Wei Lin; Yang, Xin; Huang, Dejian
Antrodia cinnamomea is widely known for its bioactive properties, particularly in anti-cancer, anti-inflammatory, and antibacterial areas. Despite the full use of the bioactive compounds from its fruiting body, high-value residues remain largely underexploited. This study presents a novel one-pot gel formation method, utilizing A. cinnamomea cellulose-rich residues to create hydrogels as an effective wound-healing dressing. The hydrogels derived from these residues show desirable properties, including non-drying characteristics, antibacterial activity against Staphylococcus aureus ATCC 1768, and cytocompatibility. Residual bioactive compounds, such as Antcin-K, Dehydroeburicoic acid, and (25S,R)-Antcin H, were identified in the residues, adding to the hydrogel’s efficacy. A UVB irradiation model was employed to evaluate the protective effects of the residues on UVB-damaged HaCaT skin cell lines, with an IC50 of 0.045 mg/mL. The results indicated that A. cinnamomea residue extracts reduced the upregulation of MMP-1, MMP-2, MMP-3, MMP-7, and MMP-9 proteins caused by UVB exposure, suggesting high UV-protective activity. Additionally, antibacterial tests on Staphylococcus aureus strains, including S. aureus ATCC 1768, showed promising results, with inhibition zones ranging from 10.64 to 12.11 mm. In summary, Antrodia cinnamomea residue hydrogels combine UV protection with antimicrobial activity, making them a promising candidate for medical applications, particularly as a wound-healing dressing.
</description>
<pubDate>Thu, 08 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159382</guid>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>A molecular systems architecture of neuromuscular junction in amyotrophic lateral sclerosis</title>
<link>https://hdl.handle.net/1721.1/159381</link>
<description>A molecular systems architecture of neuromuscular junction in amyotrophic lateral sclerosis
Ayyadurai, V. A. Shiva; Deonikar, Prabhakar; Kamm, Roger D.
A molecular systems architecture is presented for the neuromuscular junction (NMJ) in order to provide a framework for organizing the complexity of biomolecular interactions in amyotrophic lateral sclerosis (ALS), using a systematic literature review process. ALS is a fatal motor neuron disease characterized by progressive degeneration of the upper and lower motor neurons that supply voluntary muscles. The neuromuscular junction contains cells such as upper and lower motor neurons, skeletal muscle cells, astrocytes, microglia, Schwann cells, and endothelial cells, which are implicated in the pathogenesis of ALS. This molecular systems architecture provides a multi-layered understanding of the intra- and inter-cellular interactions in the ALS neuromuscular junction microenvironment, and may be utilized for target identification, discovery of single and combination therapeutics, and clinical strategies to treat ALS.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159381</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>Urban sensing using existing fiber-optic networks</title>
<link>https://hdl.handle.net/1721.1/159380</link>
<description>Urban sensing using existing fiber-optic networks
Liu, Jingxiao; Li, Haipeng; Noh, Hae Young; Santi, Paolo; Biondi, Biondo; Ratti, Carlo
The analysis of urban seismic signals offers valuable insights into urban environments and society. Yet accurate detection and localization of seismic sources on a city-wide scale with conventional seismographic networks remains out of reach, owing to the prohibitive costs of the ultra-dense seismic arrays required for imaging high-frequency anthropogenic sources. Here, we leverage existing fiber-optic networks as a distributed acoustic sensing system to accurately locate urban seismic sources and estimate how their intensity varies over time. By repurposing a 50-kilometer telecommunication fiber into an ultra-dense seismic array, we generate spatiotemporal maps of seismic source power (SSP) across San Jose, California. Our approach overcomes the proximity limitations of urban seismic sensing, enabling accurate localization of remote seismic sources generated by urban activities, such as traffic, construction, and school operations. We also show strong correlations between SSP values and environmental noise levels, as well as various persistent urban features, including land use patterns and demographics.
</description>
<pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159380</guid>
<dc:date>2025-03-31T00:00:00Z</dc:date>
</item>
<item>
<title>Linguistic capacity was present in the Homo sapiens population 135 thousand years ago</title>
<link>https://hdl.handle.net/1721.1/159379</link>
<description>Linguistic capacity was present in the Homo sapiens population 135 thousand years ago
Miyagawa, Shigeru; DeSalle, Rob; Nóbrega, Vitor Augusto; Nitschke, Remo; Okumura, Mercedes; Tattersall, Ian
Recent genome-level studies on the divergence of early Homo sapiens, based on single nucleotide polymorphisms, suggest that the initial population division within H. sapiens from the original stem occurred approximately 135 thousand years ago. Given that this and all subsequent divisions led to populations with full linguistic capacity, it is reasonable to assume that the potential for language must have been present at the latest by around 135 thousand years ago, before the first division occurred. Had linguistic capacity developed later, we would expect to find some modern human populations without language, or with some fundamentally different mode of communication. Neither is the case. While current evidence does not tell us exactly when language itself appeared, the genomic studies do allow a fairly accurate estimate of the time by which linguistic capacity must have been present in the modern human lineage. Based on the lower boundary of 135 thousand years ago for language, we propose that language may have triggered the widespread appearance of modern human behavior approximately 100 thousand years ago.
</description>
<pubDate>Mon, 10 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159379</guid>
<dc:date>2025-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>Mapping facade materials utilizing zero-shot segmentation for applications in urban microclimate research</title>
<link>https://hdl.handle.net/1721.1/159378</link>
<description>Mapping facade materials utilizing zero-shot segmentation for applications in urban microclimate research
Tarkhan, Nada; Klimenka, Mikita; Fang, Kelly; Duarte, Fabio; Ratti, Carlo; Reinhart, Christoph
To address the Urban Heat Island (UHI) effect, a significant urban climate challenge, detailed urban microclimate modeling is essential. Such modeling typically requires data on urban surface properties and morphologies from street canyons and buildings. Most urban surveying efforts have focused on morphological attributes such as sky view factor, vegetation, or building surface ratio, while the mass collection of facade materials has been hindered by the complexity of the segmentation task and the need for large and diverse labeled datasets. Recognizing the importance of mapping facade materials for urban thermal comfort, envelope heat emissions, and building energy studies, we employ state-of-the-art computer vision zero-shot learning paradigms for high-fidelity facade material extraction. Our approach circumvents the traditional need for extensive labeled training data, allowing for adaptation to a variety of urban contexts and material types. Tested in Dubai, Amsterdam, and Boston (three architecturally diverse cities), our algorithm successfully detects the predominant facade material in 68% of cases and identifies the top three present material classes in 85% of cases. Additionally, we show how material coverage identification is crucial for assessing outdoor thermal comfort, as evident in shifts in annual cold and heat stress hours across the climates of the three cities in a sample urban canyon.
</description>
<pubDate>Fri, 14 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159378</guid>
<dc:date>2025-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Mobility risk: using ambulance operations data to analyze the spatial and social dimensions of health disadvantage</title>
<link>https://hdl.handle.net/1721.1/159376</link>
<description>Mobility risk: using ambulance operations data to analyze the spatial and social dimensions of health disadvantage
Brennan, Mark; Dyer, Sophia; Freemark, Yonah; Salvia, James; Segal, Laura; Serino, Erin; Steil, Justin
The risk of death and disability for people struck by a car is unevenly distributed geographically and socially. This paper uses Emergency Medical Services records in Boston, Massachusetts, to analyze the characteristics of the locations where vehicles struck pedestrians and cyclists, and the characteristics of the neighborhoods where those individuals live. An individual’s likelihood of encountering this sort of traffic hazard, which we term mobility risk, is disproportionately higher among residents of neighborhoods with large shares of Black and Latino residents, because of their disproportionate exposure to crashes both within and outside of their home neighborhoods. Overall, residents of predominantly Black and Latino neighborhoods are about four times more likely than residents of predominantly white neighborhoods to be struck as a pedestrian; this disparity in crash exposure is almost 1.5 times larger than one would expect based on crash location alone. For residents of largely Black and Latino neighborhoods, being struck by a car is a common and consequential phenomenon that reproduces health inequity.
</description>
<pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159376</guid>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Entropy Thermalization</title>
<link>https://hdl.handle.net/1721.1/159357</link>
<description>Quantum Entropy Thermalization
Huang, Yichen; Harrow, Aram W.
In an isolated quantum many-body system undergoing unitary evolution, the entropy of a subsystem (smaller than half the system size) thermalizes if at long times, it is to leading order equal to the thermodynamic entropy of the subsystem at the same energy. In this paper, we prove entropy thermalization for a nearly integrable Sachdev–Ye–Kitaev model initialized in a pure product state. The model is obtained by adding random all-to-all 4-body interactions as a perturbation to a random free-fermion model.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159357</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Continuous Maximum Coverage Location Problem with Arbitrary Shape of Service Areas and Regional Demand</title>
<link>https://hdl.handle.net/1721.1/159355</link>
<description>Continuous Maximum Coverage Location Problem with Arbitrary Shape of Service Areas and Regional Demand
Yakovlev, Sergiy; Shekhovtsov, Sergiy; Kirichenko, Lyudmyla; Matsyi, Olha; Podzeha, Dmytro; Chumachenko, Dmytro
This paper addresses the maximum coverage location problem in a generalized setting, where both facilities (service areas) and regional demand are modeled as continuous entities. Unlike traditional formulations, our approach allows for arbitrary shapes for both service areas and demand regions, with additional constraints on facility placement. The key novelty of this work is its ability to handle complex, irregularly shaped service areas, including approximating them as unions of centrally symmetric shapes. This enables the use of an analytical approach based on spatial symmetry, which allows for efficient estimation of the covered area. The problem is formulated as a nonlinear optimization task. We analyze the properties of the objective function and leverage the Shapely library in Python 3.13.3 for efficient geometric computations. To improve computational efficiency, we develop an extended elastic model that significantly reduces processing time. This model generalizes the well-known quasi-physical, quasi-human algorithm for circle packing, extending its applicability to more complex spatial configurations. The effectiveness of the proposed approach is validated through test cases in which service areas take the form of circles, ellipses, and irregular polygons. Our method provides a robust and adaptable solution for a wide range of practically relevant continuous maximum coverage location problems involving irregular regional demand and service areas.
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159355</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Deep-time history of primate behavior and ecology as revealed by ancestral state reconstructions</title>
<link>https://hdl.handle.net/1721.1/159352</link>
<description>Deep-time history of primate behavior and ecology as revealed by ancestral state reconstructions
Spear, Jeffrey K.; Hoffman, Eva A.; Miyamae, Juri A.; Whalen, Christopher D.; Arre, Alyssa M.; Chen-Kraus, Chloe; Corley, Margaret K.; Fabbri, Matteo; Gauthier, Jacques A.; Hanson, Michael; Leiss, Amanda; Koch, Nicolás M.
Primates exhibit many key behavioral traits that differentiate them from their mammalian relatives. The origin and evolution of these traits is a major focus of primate paleontology. Here, we perform a formal ancestral state reconstruction of extant primate species to generate hypotheses about when these behaviors originated in primate evolutionary history. We compiled a large dataset of primary source data on substrate use, activity pattern, group size and structure, mating system, natal dispersal, litter size, and diet for 196 extant species. We also include data on body size, sexual dimorphism, and encephalization quotient. We performed ancestral character estimation of continuous characters using a Bayesian model and discrete or binned characters using stochastic character mapping of a k-state Markov model (Mk model). We reconstruct the ancestral crown primates as highly arboreal, nocturnal, small bodied, small brained, and eating a diet predominantly of fruit and invertebrates with the possible addition of other plant foods such as leaves and flowers. Social systems are poorly estimated at deep nodes in the tree, but the best supported states involve small to medium-sized groups. Larger, more complex social groups evolve later and emerge alongside diurnality and larger body size. In general, reconstructions at key nodes are consistent with known fossils. Exceptions include ancestral strepsirrhines, which are not reconstructed as being similar to well-known adapoids, and the brain size of ancestral anthropoids, which is reconstructed as larger than most early anthropoid taxa.
</description>
<pubDate>Sat, 24 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159352</guid>
<dc:date>2025-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>CognitiveDog: Large Multimodal Model Based System to Translate Vision and Language into Action of Quadruped Robot</title>
<link>https://hdl.handle.net/1721.1/159351</link>
<description>CognitiveDog: Large Multimodal Model Based System to Translate Vision and Language into Action of Quadruped Robot
Lykov, Artem; Litvinov, Mikhail; Konenkov, Mikhail; Prochii, Rinat; Burtsev, Nikita; Abdulkarim, Ali Alridha; Bazhenov, Artem; Berman, Vladimir; Tsetserukou, Dzmitry
This paper introduces CognitiveDog, a pioneering quadruped robot built around a Large Multimodal Model (LMM) that is capable not only of communicating with humans verbally but also of physically interacting with the environment through object manipulation. The system was realized on a Unitree Go1 robot-dog equipped with a custom gripper and demonstrated autonomous decision-making capabilities, independently determining the most appropriate actions and interactions with various objects to fulfill user-defined tasks. These tasks do not necessarily include direct instructions, challenging the robot to comprehend and execute them based on natural language input and environmental cues. The paper delves into the intricacies of this system, the dataset characteristics, and the software architecture. Key to this development is the robot's proficiency in navigating space using Visual-SLAM, effectively manipulating and transporting objects, and providing insightful natural language commentary during task execution. Experimental results highlight the robot's advanced task comprehension and adaptability, underscoring its potential in real-world applications. The dataset used to fine-tune the robot-dog behavior generation model is provided at the following link: huggingface.co/datasets/ArtemLykov/CognitiveDog_dataset
</description>
<pubDate>Mon, 11 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159351</guid>
<dc:date>2024-03-11T00:00:00Z</dc:date>
</item>
<item>
<title>The Bochner–Riesz Problem: An Old Approach Revisited</title>
<link>https://hdl.handle.net/1721.1/159350</link>
<description>The Bochner–Riesz Problem: An Old Approach Revisited
Guo, Shaoming; Oh, Changkeun; Wang, Hong; Wu, Shukun; Zhang, Ruixiang
We show that the recent techniques developed to study the Fourier restriction problem apply equally well to the Bochner–Riesz problem. This is achieved via applying a pseudo-conformal transformation and a two-parameter induction-on-scales argument. As a consequence, we improve the Bochner–Riesz problem to the best known range of the Fourier restriction problem in all high dimensions.
</description>
<pubDate>Tue, 20 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159350</guid>
<dc:date>2024-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Finding the Fuse: Prospects for the Detection and Characterization of Hydrogen-rich Core-collapse Supernova Precursor Emission with the LSST</title>
<link>https://hdl.handle.net/1721.1/159349</link>
<description>Finding the Fuse: Prospects for the Detection and Characterization of Hydrogen-rich Core-collapse Supernova Precursor Emission with the LSST
Gagliano, A.; Berger, E.; Villar, V. A.; Hiramatsu, D.; Kessler, R.; Matsumoto, T.; Gilkis, A.
Enhanced emission in the months to years preceding explosion has been detected for several core-collapse supernovae (SNe). Though the physical mechanisms driving the emission remain hotly debated, the light curves of detected events show long-lived (≥50 days), plateau-like behavior, suggesting hydrogen recombination may significantly contribute to the total energy budget. The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will provide a decade-long photometric baseline to search for this emission, both in binned pre-explosion observations after an SN is detected and in single-visit observations prior to the SN explosion. In anticipation of these searches, we simulate a range of eruptive precursor models to core-collapse SNe and forecast the discovery rates of these phenomena in LSST data. We find a detection rate of ∼40–130 yr−1 for SN IIP/IIL precursors and ∼110 yr−1 for SN IIn precursors in single-epoch photometry. Considering the first three years of observations with the effects of rolling and observing triplets included, this number grows to a total of 150–400 in binned photometry, with the highest number recovered when binning in 100 day bins for 2020tlf-like precursors and in 20 day bins for other recombination-driven models from the literature. We quantify the impact of using templates contaminated by residual light (from either long-lived or separate precursor emission) on these detection rates, and explore strategies for estimating baseline flux to mitigate these issues. Spectroscopic follow-up of the eruptions preceding core-collapse SNe and detected with LSST will offer important clues to the underlying drivers of terminal-stage mass loss in massive stars.
</description>
<pubDate>Mon, 30 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159349</guid>
<dc:date>2024-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity in the medical research ecosystem: a descriptive scientometric analysis of over 49 000 studies and 150 000 authors published in high-impact medical journals between 2007 and 2022</title>
<link>https://hdl.handle.net/1721.1/159348</link>
<description>Diversity in the medical research ecosystem: a descriptive scientometric analysis of over 49 000 studies and 150 000 authors published in high-impact medical journals between 2007 and 2022
Charpignon, Marie-Laure; Matos, Joao; Nakayama, Luis Filipe; Gallifant, Jack; Alfonso, Pia Gabrielle I; Cobanaj, Marisa; Fiske, Amelia Morel; Gates, Alexander J; Ho, Frances Dominique V; Jain, Urvish; Kashkooli, Mohammad; Link, Naira; McCoy, Liam G; Shaffer, Jonathan; Celi, Leo Anthony
Objectives: Health research that significantly impacts global clinical practice and policy is often published in high-impact factor (IF) medical journals. These outlets play a pivotal role in the worldwide dissemination of novel medical knowledge. However, researchers identifying as women and those affiliated with institutions in low- and middle-income countries (LMICs) have been largely under-represented in high-IF journals across multiple fields of medicine. To evaluate disparities in gender and geographical representation among authors who have published in any of five top general medical journals, we conducted scientometric analyses using a large-scale dataset extracted from the New England Journal of Medicine, Journal of the American Medical Association, The BMJ, The Lancet and Nature Medicine.&#13;
Methods: Author metadata from all articles published in the selected journals between 2007 and 2022 were collected using the DimensionsAI platform. The Genderize.io Application Programming Interface was then used to infer each author’s likely gender based on their extracted first name. The World Bank country classification was used to map countries associated with researcher affiliations to the LMIC or the high-income country (HIC) category. We characterised the overall gender and country income category representation across the five medical journals. In addition, we computed article-level diversity metrics and contrasted their distributions across the journals.&#13;
Results: We studied 151 536 authors across 49 764 articles published in five top medical journals, over a period spanning 15 years. On average, approximately one-third (33.1%) of the authors of a given paper were inferred to be women; this result was consistent across the journals we studied. Further, 86.6% of the teams were exclusively composed of HIC authors; in contrast, only 3.9% were exclusively composed of LMIC authors. The probability of serving as the first or last author was significantly higher if the author was inferred to be a man (18.1% vs 16.8%, p&lt;0.01) or was affiliated with an institution in a HIC (16.9% vs 15.5%, p&lt;0.01). Our primary finding reveals that having a diverse team promotes further diversity, within the same dimension (ie, gender or geography) and across dimensions. Notably, papers with at least one woman among the authors were more likely to also involve at least two LMIC authors (11.7% vs 10.4% in baseline, p&lt;0.001; based on inferred gender); conversely, papers with at least one LMIC author were more likely to also involve at least two women (49.4% vs 37.6%, p&lt;0.001; based on inferred gender).&#13;
Conclusion: We provide a scientometric framework to assess authorship diversity. Our research suggests that the inclusiveness of high-impact medical journals is limited in terms of both gender and geography. We advocate for medical journals to adopt policies and practices that promote greater diversity and collaborative research. In addition, our findings offer a first step towards understanding the composition of teams conducting medical research globally and an opportunity for individual authors to reflect on their own collaborative research practices and possibilities to cultivate more diverse partnerships in their work.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159348</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probabilistic genotype-phenotype maps reveal mutational robustness of RNA folding, spin glasses, and quantum circuits</title>
<link>https://hdl.handle.net/1721.1/159347</link>
<description>Probabilistic genotype-phenotype maps reveal mutational robustness of RNA folding, spin glasses, and quantum circuits
Sappington, Anna; Mohanty, Vaibhav
Recent studies of genotype-phenotype maps have reported universally enhanced phenotypic robustness to genotype mutations, a feature essential to evolution. Virtually all of these studies make a simplifying assumption that each genotype—represented as a sequence—maps deterministically to a single phenotype, such as a discrete structure. Here we introduce probabilistic genotype-phenotype (PrGP) maps, where each genotype maps to a vector of phenotype probabilities, as a more realistic and universal language for investigating robustness in a variety of physical, biological, and computational systems. We study three model systems to show that PrGP maps offer a generalized framework which can handle uncertainty emerging from various physical sources: (1) thermal fluctuation in RNA folding, (2) external field disorder in the spin-glass ground state search problem, and (3) superposition and entanglement in quantum circuits, which are realized experimentally on IBM quantum computers. In all three cases, we observe a biphasic robustness scaling which is enhanced relative to random expectation for more frequent phenotypes and approaches random expectation for less frequent phenotypes. We derive an analytical theory for the behavior of PrGP robustness, and we demonstrate that the theory is highly predictive of empirical robustness.
</description>
<pubDate>Thu, 30 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159347</guid>
<dc:date>2025-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Probing quantum geometry through optical conductivity and magnetic circular dichroism</title>
<link>https://hdl.handle.net/1721.1/159344</link>
<description>Probing quantum geometry through optical conductivity and magnetic circular dichroism
Ghosh, Barun; Onishi, Yugo; Xu, Su-Yang; Lin, Hsin; Fu, Liang; Bansil, Arun
Probing ground-state quantum geometry and topology through optical responses is not only of fundamental interest, but it can also offer several practical advantages. Here, using first-principles calculations on thin films of the antiferromagnetic topological insulator MnBi2Te4, we demonstrate how the generalized optical weight arising from the absorptive part of the optical conductivity can be used to probe the ground-state quantum geometry and topology. We show that the three-septuple-layer MnBi2Te4 film exhibits an enhanced, almost-perfect magnetic circular dichroism for a narrow photon energy window in the infrared region. We calculate the quantum weight in this MnBi2Te4 film and show that it far exceeds the lower bound provided by the Chern number. Our results suggest that well-known optical methods are powerful tools for probing the ground-state quantum geometry and topology.
</description>
<pubDate>Fri, 20 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159344</guid>
<dc:date>2024-12-20T00:00:00Z</dc:date>
</item>
<item>
<title>Large exchange bias enhancement and control of ferromagnetic energy landscape by solid-state hydrogen gating</title>
<link>https://hdl.handle.net/1721.1/159343</link>
<description>Large exchange bias enhancement and control of ferromagnetic energy landscape by solid-state hydrogen gating
Hasan, M Usama; Kossak, Alexander E; Beach, Geoffrey SD
Voltage control of exchange bias is desirable for spintronic device applications; however, dynamic modulation of the unidirectional coupling energy in ferromagnet/antiferromagnet bilayers has not yet been achieved. Here we show that, by solid-state hydrogen gating, perpendicular exchange bias can be enhanced by &gt;100% in a reversible and analog manner in a simple Co/Co0.8Ni0.2O heterostructure at room temperature. We show that this phenomenon is an isothermal analog to conventional field-cooling and that sizable changes in average coupling energy can result from small changes in AFM grain rotatability. Using this method, we show that a bi-directionally stable ferromagnet can be made unidirectionally stable with gate voltage alone. This work provides a means to dynamically reprogram exchange bias, with broad applicability in spintronics and neuromorphic computing, while simultaneously illuminating fundamental aspects of exchange bias in polycrystalline films.
</description>
<pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159343</guid>
<dc:date>2023-12-21T00:00:00Z</dc:date>
</item>
<item>
<title>Magnonic superconductivity</title>
<link>https://hdl.handle.net/1721.1/159342</link>
<description>Magnonic superconductivity
Nazaryan, Khachatur G; Fu, Liang
We uncover a superconducting state with partial spin polarization induced by a magnetic field. This state, which we call “magnonic superconductor,” lacks a conventional pairing order parameter but is characterized instead by a composite order parameter that represents the binding of electron pairs and magnons. We rigorously demonstrate the existence of magnonic superconductivity with high transition temperature in one-dimensional and two-dimensional Hubbard models with repulsive interaction. We further show that magnonic Cooper pairs can attract to form higher-charge bound states, which can give rise to charge-6 superconductivity.
</description>
<pubDate>Fri, 29 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159342</guid>
<dc:date>2024-11-29T00:00:00Z</dc:date>
</item>
<item>
<title>Generation of representative meteorological years through anomaly-based detection of extreme events</title>
<link>https://hdl.handle.net/1721.1/159341</link>
<description>Generation of representative meteorological years through anomaly-based detection of extreme events
Tarkhan, Nada; Crawley, Drury B; Lawrie, Linda K; Reinhart, Christoph
Typical Meteorological Years (TMYs) have long supported the building sector by integrating local climate into building design for energy, thermal comfort, and peak load assessments. As climates shift, past heat waves and cold spells signal future conditions requiring greater adaptability. This study proposes a new file generation method that preserves TMY properties while embedding extreme events. We combine three anomaly-detection methods—temperature thresholds, Graph Neural Networks (GNNs), and Extreme Value Theory (EVT)—to capture climatic deviations, detect anomalies, and model statistical extremes. An integrated hierarchical method forms the new Representative Meteorological Year (RMY) file. RMY files for six ASHRAE climate zones consistently capture past extremes, producing worst-case scenarios for key metrics, including peak loads, indoor thermal stress, natural ventilation and outdoor comfort. The largest deviation between the TMY and RMY was a doubling of indoor thermal stress hours across all climates, while average energy use remained aligned, with a deviation of 6%.
</description>
<pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159341</guid>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>Computational Framework for Estimating Cost of Equipment Disposal for Advanced Reactors</title>
<link>https://hdl.handle.net/1721.1/159340</link>
<description>Computational Framework for Estimating Cost of Equipment Disposal for Advanced Reactors
Mokoena, Chumani; Shirvan, Koroush
The decommissioning costs of a nuclear power plant amount to several hundred million dollars, with waste disposal alone predicted by the U.S. Nuclear Regulatory Commission to cost over $100 million. While investment in advanced nuclear deployment continues to grow, there has yet to be a comprehensive study on the decommissioning costs of advanced reactors.&#13;
&#13;
This study creates a generic computational framework to estimate the disposal costs of major equipment for advanced reactors. The framework is compatible with both CINDER90 and ORIGEN, where reaction rates are calculated from MCNPX, SCALE, or other neutron transport packages. The framework is benchmarked against the disposal costs for a pressurized water reactor’s components (core shroud, barrel, and reactor pressure vessel), resulting in a disposal cost of ~$0.3/MWh.&#13;
&#13;
The same methodology is then applied toward estimating disposal costs for a molten salt reactor (MSR). The MSR analysis focuses on the activity and disposal costs of the graphite reflectors, core can/shroud, and reactor vessel. The metal components are modeled as either SS316 or Hastelloy N with an operating period of 5 to 10 years. The core can is greater than Class C waste, while the vessel is Class C waste for SS316. For Hastelloy, the waste classification is dependent on the operating lifetime. A 10-year safe storage is assumed for the MSR to reduce its disposal costs.&#13;
&#13;
It was found that the disposal cost of graphite reflectors alone would reach $1/MWh. Overall, the MSR nuclear equipment cost could be significantly higher (~10×) than that of large water-cooled reactors. The difference is driven by the material selection, lack of economy of scale, and shorter lifetimes.
</description>
<pubDate>Wed, 30 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159340</guid>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Gender inventorship equity in patent prosecution</title>
<link>https://hdl.handle.net/1721.1/159339</link>
<description>Gender inventorship equity in patent prosecution
Schuster, W. Michael; Goodman, Jordana
There are pervasive gender gaps throughout the patent process. Here, we add to the literature by providing an in-depth analysis of gendered outcomes across each stage of patent prosecution. We show that female inventors are more likely to face rejection, experience unsuccessful appeals, and exhibit lower responsiveness to rejections than male counterparts. Not only are women less likely to patent their invention, but each stage of examination individually contributes to a lower aggregate grant rate for female inventors. Our research finds that, unlike small and large entity industry equivalents, university-filed patent applications demonstrate increased gender parity in allowance rates and continued prosecution after rejection. Moreover, small entities—patent applicants with typically smaller budgets—are at least as likely as larger firms to exhibit gender parity. We anticipate this study to be a starting point for a more sophisticated discussion around closing gender gaps in patenting and STEM.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159339</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>Venus cloud catcher as a proof of concept aerosol collection instrument</title>
<link>https://hdl.handle.net/1721.1/159338</link>
<description>Venus cloud catcher as a proof of concept aerosol collection instrument
Iakubivskyi, Iaroslav; Seager, Sara; Carr, Christopher E; Petkowski, Janusz J; Agrawal, Rachana; Moreno, M Regina A; Nellutla, Snigdha
We report on the proof-of-concept of a low-mass, low-power method for collecting micron-sized sulfuric acid aerosols in bulk from the atmosphere of Venus. The collection method uses four wired meshes in a sandwich structure with a deposition area of 225 cm2. It operates in two modes: passive and electrostatic. During passive operation, aerosols are gathered on the deposition surface by aerodynamic force. During electrostatic operation, a tungsten needle discharges a high voltage of -10 kV at the front of the grounded mesh structure. The discharge ionizes aerosols and attracts them to the mesh by Coulomb forces, resulting in improved efficiency and tentative attraction of submicron aerosols. We describe the instrument construction and testing in the laboratory under controlled conditions with aerosols composed of 25%, 50%, 70%, 80%, 90% and 98% concentration by volume of sulfuric acid, the rest water. We demonstrated the following: (i) both modes of operation can collect the entire range of sulfuric acid solutions; (ii) the collection efficiency increases steadily (from a few percent for water to over 40% for concentrated sulfuric acid) with the increased concentration of sulfuric acid solution in water in both modes; (iii) the relative improvement in the collection of the electrostatic mode decreases as the sulfuric acid concentration increases. We also demonstrated the operation of the instrument in the field, cloud particle collection on Mt. Washington, NH, and crater-rim fumaroles' particle collection on Kīlauea volcano, HI. The collection rate in the field is wind-speed dependent, and we observed collection rates around 0.1 ml in low-wind environments (1-2 m/s) and around 1 ml in stronger wind (7-9 m/s).
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159338</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Inter-city firm connections and the scaling of urban economic indicators</title>
<link>https://hdl.handle.net/1721.1/159337</link>
<description>Inter-city firm connections and the scaling of urban economic indicators
Yang, Vicky Chuqiao; Jackson, Jacob J; Kempes, Christopher P
Cities exhibit consistent returns to scale in economic outputs, and urban scaling analysis is widely adopted to uncover common mechanisms in cities’ socioeconomic productivity. Leading theories view cities as closed systems, with returns to scale arising from intra-city social interactions. Here, we argue that the interactions between cities, particularly via shared organizations such as firms, significantly influence a city’s economic output. By examining global data on city connectivity through multinational firms alongside urban scaling Gross Domestic Product (GDP) statistics from the United States, EU, and China, we establish that global connectivity notably enhances GDP, while controlling for population. After accounting for global connectivity, the effect of population on GDP is no longer distinguishable from linear. To differentiate between local and global mechanisms, we analyzed homicide case data, anticipating dominant local effects. As expected, inter-city connectivity showed no significant impact. Our research highlights that inter-city effects affect some urban outputs more than others. This empirical analysis lays the groundwork for incorporating inter-city organizational connections into urban scaling theories and could inform future model development.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159337</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Nonreciprocal superconductivity</title>
<link>https://hdl.handle.net/1721.1/159336</link>
<description>Nonreciprocal superconductivity
Davydova, Margarita; Geier, Max; Fu, Liang
We introduce the notion of nonreciprocal superconductors where inversion and time-reversal symmetries are broken, giving rise to an asymmetric energy dispersion. We demonstrate that nonreciprocal superconductivity can be detected by Andreev reflection. In particular, a transparent junction between a normal metal and a nonreciprocal superconductor generally exhibits an asymmetric current-voltage characteristic, which serves as a defining feature of nonreciprocal superconductivity. Unlike the superconducting diode effects, our detection scheme has the advantage of avoiding large critical currents that turn the superconducting state to normal. Last, we discuss candidates for nonreciprocal superconductivity, including graphene, UTe2, as well as engineered platforms.
</description>
<pubDate>Fri, 29 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159336</guid>
<dc:date>2024-11-29T00:00:00Z</dc:date>
</item>
<item>
<title>Toward cultural interpretability: A linguistic anthropological framework for describing and evaluating large language models</title>
<link>https://hdl.handle.net/1721.1/159335</link>
<description>Toward cultural interpretability: A linguistic anthropological framework for describing and evaluating large language models
Jones, Graham M; Satran, Shai; Satyanarayan, Arvind
This article proposes a new integration of linguistic anthropology and machine learning (ML) around convergent interests in both the underpinnings of language and making language technologies more socially responsible. While linguistic anthropology focuses on interpreting the cultural basis for human language use, the ML field of interpretability is concerned with uncovering the patterns that Large Language Models (LLMs) learn from human verbal behavior. Through the analysis of a conversation between a human user and an LLM-powered chatbot, we demonstrate the theoretical feasibility of a new, conjoint field of inquiry, cultural interpretability (CI). By focusing attention on the communicative competence involved in the way human users and AI chatbots coproduce meaning in the articulatory interface of human-computer interaction, CI emphasizes how the dynamic relationship between language and culture makes contextually sensitive, open-ended conversation possible. We suggest that, by examining how LLMs internally “represent” relationships between language and culture, CI can: (1) provide insight into long-standing linguistic anthropological questions about the patterning of those relationships; and (2) aid model developers and interface designers in improving value alignment between language models and stylistically diverse speakers and culturally diverse speech communities. Our discussion proposes three critical research axes: relativity, variation, and indexicality.
</description>
<pubDate>Sat, 01 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159335</guid>
<dc:date>2025-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structural bioinformatic study of six human olfactory receptors and their AlphaFold3 predicted water-soluble QTY variants and OR1A2 with an odorant octanoate and TAAR9 with spermidine</title>
<link>https://hdl.handle.net/1721.1/159334</link>
<description>Structural bioinformatic study of six human olfactory receptors and their AlphaFold3 predicted water-soluble QTY variants and OR1A2 with an odorant octanoate and TAAR9 with spermidine
Johnsson, Finn; Karagöl, Taner; Karagöl, Alper; Zhang, Shuguang
The molecular mechanism of olfaction, namely, how we smell with limited olfactory receptors to recognize exceedingly diverse and large numbers of scents, remains unknown despite the recent advances in chemistry, chemical, structural, and molecular biology. Olfactory receptors are notoriously difficult to study because they are fully embedded in the cell membrane. After decades of efforts and significant funding, there are only three olfactory receptor structures known. To understand olfaction, we carried out a structural bioinformatic study of six human olfactory receptors including OR51E1, OR51E2, OR52cs, OR1A1, OR1A2, TAAR9, and their AlphaFold3 predicted water-soluble QTY variants with odorants. We applied the QTY code to replace leucine (L) with glutamine (Q), isoleucine (I) and valine (V) with threonine (T), and phenylalanine (F) with tyrosine (Y) only in the transmembrane helices. Therefore, these QTY variants become water-soluble. We also present the superimposed structures of native olfactory receptors and their water-soluble QTY variants. The superimposed structures show remarkable similarity with RMSDs between 0.441 and 1.275 Å despite significant changes to the protein sequence of the transmembrane domains (43.03%–50.31%). We also show the differences in hydrophobicity surfaces between the native olfactory receptors and their QTY variants. Furthermore, we also used AlphaFold3 and molecular dynamics to study the odorant octanoate with OR1A2 and spermidine with TAAR9. Our bioinformatics studies provide insight into the differences between the hydrophobic helices and hydrophilic helices, and will likely further stimulate designs of water-soluble integral transmembrane proteins and other aggregated proteins.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159334</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Exposing disparities in flood adaptation for equitable future interventions in the USA</title>
<link>https://hdl.handle.net/1721.1/159333</link>
<description>Exposing disparities in flood adaptation for equitable future interventions in the USA
Pecharroman, Lidia Cano; Hahn, ChangHoon
As governments race to implement new climate adaptation solutions that prepare for more frequent flooding, they must seek policies that are effective for all communities and uphold climate justice. This requires evaluating policies not only on their overall effectiveness but also on whether they benefit all communities. Using the USA as an example, we illustrate the importance of considering such disparities for flood adaptation through a FEMA dataset of ~2.5 million flood insurance claims. We use CausalFlow, a causal inference method based on deep generative models, to estimate the treatment effect of flood adaptation interventions based on a community’s income, racial demographics, population, flood risk, educational attainment, and precipitation. We find that the program saves communities $5,000–15,000 per household. However, these savings are not evenly spread across communities. For example, for low-income communities, savings sharply decline as flood risk increases, in contrast to their high-income counterparts. Even among low-income communities, savings are &gt;$6,000 per household higher in predominantly white communities. Future flood adaptation efforts should go beyond reducing losses overall and aim to equitably support communities in the race for climate adaptation.
</description>
<pubDate>Fri, 27 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159333</guid>
<dc:date>2024-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>Air quality co-benefits of carbon pricing in China</title>
<link>https://hdl.handle.net/1721.1/159331</link>
<description>Air quality co-benefits of carbon pricing in China
Li, Mingwei; Zhang, Da; Li, Chiao-Ting; Mulvaney, Kathleen M; Selin, Noelle E; Karplus, Valerie J
Climate policies targeting energy-related CO2 emissions, which act on a global scale over long time horizons, can result in localized, near-term reductions in both air pollution and adverse human health impacts. Focusing on China, the largest energy-using and CO2-emitting nation, we develop a cross-scale modelling approach to quantify these air quality co-benefits, and compare them to the economic costs of climate policy. We simulate the effects of an illustrative climate policy, a price on CO2 emissions. In a policy scenario consistent with China's recent pledge to reach a peak in CO2 emissions by 2030, we project that national health co-benefits from improved air quality would partially or fully offset policy costs depending on chosen health valuation. Net health co-benefits are found to rise with increasing policy stringency.
</description>
<pubDate>Tue, 01 May 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159331</guid>
<dc:date>2018-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of ocean acidification on the structure of future phytoplankton communities</title>
<link>https://hdl.handle.net/1721.1/159330</link>
<description>Impact of ocean acidification on the structure of future phytoplankton communities
Dutkiewicz, Stephanie; Morris, J Jeffrey; Follows, Michael J; Scott, Jeffery; Levitan, Orly; Dyhrman, Sonya T; Berman-Frank, Ilana
Phytoplankton form the foundation of the marine food web and regulate key biogeochemical processes. These organisms face multiple environmental changes, including the decline in ocean pH (ocean acidification) caused by rising atmospheric pCO2 (ref.). A meta-analysis of published experimental data assessing growth rates of different phytoplankton taxa under both ambient and elevated pCO2 conditions revealed a significant range of responses. This effect of ocean acidification was incorporated into a global marine ecosystem model to explore how marine phytoplankton communities might be impacted over the course of a hypothetical twenty-first century. Results emphasized that the differing responses to elevated pCO2 caused sufficient changes in competitive fitness between phytoplankton types to significantly alter community structure. At the level of ecological function of the phytoplankton community, acidification had a greater impact than warming or reduced nutrient supply. The model suggested that longer timescales of competition- and transport-mediated adjustments are essential for predicting changes to phytoplankton community structure.
</description>
<pubDate>Sun, 01 Nov 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159330</guid>
<dc:date>2015-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Responsible or reckless? A critical review of the environmental and climate assessments of mineral supply chains</title>
<link>https://hdl.handle.net/1721.1/159290</link>
<description>Responsible or reckless? A critical review of the environmental and climate assessments of mineral supply chains
Calderon, Jordan Lee; Bazilian, Morgan; Sovacool, B; Greene, Suzanne
This paper critically reviews and identifies gaps in the methodologies used to analyze the environmental impacts of mineral and metal global supply chains. Of specific focus are assessments of the extraction and production of minerals and metals needed for a low-carbon energy future. Current trends and projections suggest that the future low-carbon energy system will have greater material needs than the current one. Thus, it is important to better understand the full impacts of increased resource extraction to help ensure a sustainable and just transition. This review reveals that existing methodologies are currently insufficient in capturing the full suite of environmental, social, and governance concerns. The copper supply chain is used as a case study to highlight areas that require refined or augmented methodologies, with an in-depth examination of the corporate practices of Freeport-McMoRan, Vale, and BHP. Together, this review of existing methodologies and examples from the copper supply chain highlight the incomplete and variable nature of environmental and climate reporting within the mining industry. Areas for future work are defined with the goal of advancing accounting frameworks for the mining industry and the associated supply chain.
</description>
<pubDate>Tue, 13 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159290</guid>
<dc:date>2020-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Future phytoplankton diversity in a changing climate</title>
<link>https://hdl.handle.net/1721.1/159289</link>
<description>Future phytoplankton diversity in a changing climate
Henson, Stephanie A; Cael, BB; Allen, Stephanie R; Dutkiewicz, Stephanie
The future response of marine ecosystem diversity to continued anthropogenic forcing is poorly constrained. Phytoplankton are a diverse set of organisms that form the base of the marine ecosystem. Currently, ocean biogeochemistry and ecosystem models used for climate change projections typically include only 2−3 phytoplankton types and are, therefore, too simple to adequately assess the potential for changes in plankton community structure. Here, we analyse a complex ecosystem model with 35 phytoplankton types to evaluate the changes in phytoplankton community composition, turnover and size structure over the 21st century. We find that the rate of turnover in the phytoplankton community becomes faster during this century, that is, the community structure becomes increasingly unstable in response to climate change. Combined with alterations to phytoplankton diversity, our results imply a loss of ecological resilience with likely knock-on effects on the productivity and functioning of the marine environment.
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159289</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impacts of climate change policies worldwide on the Russian economy</title>
<link>https://hdl.handle.net/1721.1/159288</link>
<description>Impacts of climate change policies worldwide on the Russian economy
Makarov, Igor; Chen, Henry; Paltsev, Sergey
Because the Russian economy relies heavily on exports of fossil fuels, the primary source of human-induced greenhouse gas (GHG) emissions, it may be adversely impacted by Paris Agreement-based climate policies that target reductions in GHG emissions. Applying the MIT Economic Projection and Policy Analysis (EPPA) model to assess the impacts on the Russian economy of the efforts of the main importers of Russian fossil fuels to follow the global goals of the Paris Agreement, we project that climate-related actions outside of Russia will lower the country's GDP growth rate by about one-half of a percentage point. The Paris Agreement is also expected to raise Russia's risks of facing market barriers for its exports of energy-intensive goods, and of falling behind in the development of low-carbon energy technologies that most of the world is increasingly adopting.&#13;
Key policy insights: Regardless of its domestic emissions reduction efforts, Russia will not be able to sustain its current trajectory of fossil fuel export-based development due to climate policies worldwide. To address the challenge of climate-related energy transition, Russia needs a new comprehensive development strategy that accounts for the post-Paris Agreement global energy landscape. The key elements of such a strategy include diversification of the economy, moving to low-carbon energy sources, and investing in human capital development. Our diversification scenarios show that redistribution of income from the energy sector to the development of human capital would benefit the economy. The largest impact of investment re-orientation from the fossil fuel sector would be on manufacturing, services, agriculture and food production.
</description>
<pubDate>Wed, 25 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159288</guid>
<dc:date>2020-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Virtual Reality in Acute and Chronic Pain Medicine: An Updated Review</title>
<link>https://hdl.handle.net/1721.1/159287</link>
<description>Virtual Reality in Acute and Chronic Pain Medicine: An Updated Review
Moreau, Sacha; Thérond, Alexandra; Cerda, Ivo H.; Studer, Kachina; Pan, Alicia; Tharpe, Jacob; Crowther, Jason E.; Abd-Elsayed, Alaa; Gilligan, Chris; Tolba, Reda; Ashina, Sait; Schatman, Michael E.; Kaye, Alan D.; Yong, R. J.
Purpose of Review This review critically analyzes the recent literature on virtual reality’s (VR) use in acute and chronic pain management, offering insights into its efficacy, applications, and limitations. Recent Findings Recent studies, including meta-analyses and randomized controlled trials, have demonstrated VR’s effectiveness in reducing pain intensity in various acute pain scenarios, such as procedural pain, as well as in chronic pain conditions. The role of factors such as immersion and presence in enhancing VR’s efficacy has been emphasized. Further benefits have been identified in the use of VR for assessment as well as symptom gathering through conversational avatars. However, studies are limited, and strong conclusions will require further investigation. Summary VR is emerging as a promising non-pharmacological intervention in pain management for acute and chronic pain. However, its long-term efficacy, particularly in chronic pain management, remains an area requiring further research. Key findings highlight that VR programs vary in efficacy depending on the specificity of the origin of pain.
</description>
<pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159287</guid>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Telehealth and Virtual Reality Technologies in Chronic Pain Management: A Narrative Review</title>
<link>https://hdl.handle.net/1721.1/159286</link>
<description>Telehealth and Virtual Reality Technologies in Chronic Pain Management: A Narrative Review
Cerda, Ivo H.; Therond, Alexandra; Moreau, Sacha; Studer, Kachina; Donjow, Aleksy R.; Crowther, Jason E.; Mazzolenis, Maria E.; Lang, Min; Tolba, Reda; Gilligan, Christopher; Ashina, Sait; Kaye, Alan D.; Yong, R. J.
Purpose of Review This review provides medical practitioners with an overview of the present and emergent roles of telehealth and associated virtual reality (VR) applications in chronic pain (CP) management, particularly in the post-COVID-19 healthcare landscape. Recent Findings Accumulated evidence points to the efficacy of now well-established telehealth modalities, such as videoconferencing, short messaging service (SMS), and mobile health (mHealth) applications in complementing remote CP care. More recently, and although still in early phases of clinical implementation, a wide range of VR-based interventions have demonstrated potential for improving the asynchronous remote management of CP. Additionally, VR-associated technologies at the leading edge of science and engineering, such as VR-assisted biofeedback, haptic technology, high-definition three-dimensional (HD3D) conferencing, VR-enabled interactions in a Metaverse, and the use of wearable monitoring devices, herald a new era for remote, synchronous patient-physician interactions. These advancements hold the potential to facilitate remote physical examinations, personalized remote care, and innovative interventions such as ultra-realistic biofeedback. Despite the promise of VR-associated technologies, several limitations remain, including the paucity of robust long-term effectiveness data, heterogeneity of reported pain-related outcomes, challenges with scalability and insurance coverage, and demographic-specific barriers to patient acceptability. Future research efforts should be directed toward mitigating these limitations to facilitate the integration of telehealth-associated VR into the conventional management of CP. Summary Despite ongoing barriers to widespread adoption, recent evidence suggests that VR-based interventions hold an increasing potential to complement and enhance the remote delivery of CP care.
</description>
<pubDate>Thu, 04 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159286</guid>
<dc:date>2024-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>Data-Driven Insights into the Structural Essence of Plasticity in High-Entropy Alloys</title>
<link>https://hdl.handle.net/1721.1/159285</link>
<description>Data-Driven Insights into the Structural Essence of Plasticity in High-Entropy Alloys
Tung, Chi-Huan; Chang, Shou-Yi; Bai, Zhitong; Fan, Yue; Yip, Sidney; Do, Changwoo; Chen, Wei-Ren
The heterogeneous mechanical response of a crystalline alloy with multiple principal elements was investigated using molecular dynamics simulations. The local configuration of the alloy in its quiescent state was characterized by the variables derived from the gyration tensor and the atomic electronegativity. A multivariate analysis identified the geometric and chemical factors that influenced the atomic packing variations. Upon straining, the non-affine displacement exhibited spatial heterogeneity. A statistical correlation was established between the local yield events and the specific features of the local configuration. Our findings, validated by the performance metrics analysis, provided a structural criterion for the instability mechanisms in high-entropy alloys (HEAs) and enhanced the understanding of their plasticity.
</description>
<pubDate>Thu, 11 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159285</guid>
<dc:date>2024-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence Algorithms in Cardiovascular Medicine: An Attainable Promise to Improve Patient Outcomes or an Inaccessible Investment?</title>
<link>https://hdl.handle.net/1721.1/159284</link>
<description>Artificial Intelligence Algorithms in Cardiovascular Medicine: An Attainable Promise to Improve Patient Outcomes or an Inaccessible Investment?
Bota, Patrícia; Thambiraj, Geerthy; Bollepalli, Sandeep C.; Armoundas, Antonis A.
Purpose of Review This opinion paper highlights the advancements in artificial intelligence (AI) technology for cardiovascular disease (CVD), presents best practices and transformative impacts, and addresses current concerns that must be resolved for broader adoption. Recent Findings With the evolution of digitization in data collection, large amounts of data have become available, surpassing the human capacity for processing and analysis, thus enabling the application of AI. These models can learn complex spatial and temporal patterns from large amounts of data, providing patient-specific outputs. These advantages have resulted, to date, in more than 900 AI-based devices being approved by regulatory entities for clinical use, with similar or improved performance and efficiency compared to traditional technologies. However, issues such as model generalization, bias, transparency, interpretability, accountability, and data privacy remain significant barriers to the broad adoption of these technologies. Summary AI shows great promise in enhancing CVD care through more accurate and efficient approaches. Yet, widespread adoption is hindered by unresolved concerns of interested stakeholders. Addressing these challenges is crucial for fully integrating AI into clinical practice and shaping the future of CVD prevention, diagnosis and treatment.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159284</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Stress-mediated Activation of Ferroptosis, Pyroptosis, and Apoptosis Following Mild Traumatic Brain Injury Exacerbates Neurological Dysfunctions</title>
<link>https://hdl.handle.net/1721.1/159283</link>
<description>Stress-mediated Activation of Ferroptosis, Pyroptosis, and Apoptosis Following Mild Traumatic Brain Injury Exacerbates Neurological Dysfunctions
Zheng, Lexin; Pang, Qiuyu; Huang, Ruoyu; Xu, Heng; Guo, Hanmu; Gao, Cheng; Chen, Xueshi; Wang, Ying; Cao, Qun; Gao, Yuan; Gu, Zhiya; Wang, Zufeng; Luo, Chengliang; Tao, Luyang; Wang, Tao
Nearly half of mild traumatic brain injury (mTBI) patients continue to experience residual neurological dysfunction, which may be attributed to exposure to stress. Ferroptosis, a newly discovered form of cell death, is increasingly recognized for its involvement in the pathophysiology of TBI. Understanding the mechanisms by which stress influences mTBI, particularly through ferroptosis, is crucial for effective treatment and prevention in mTBI patients who are sensitive to stressful events. In our study, a mouse mTBI model was established. An acute restraint stress (RS) model and a chronic unpredictable mild stress (CUMS) model were then applied to induce acute and chronic stress, respectively. We found acute RS significantly delayed the recovery of reduced body weight and short-term motor dysfunctions and exacerbated cell insults and blood–brain barrier leakage caused by mTBI. Further studies revealed that acute RS exacerbates neuronal ferroptosis, pyroptosis, and apoptosis by promoting iron overloading in the neocortex following mTBI. Interestingly, the inhibition of ferroptosis with iron chelators, including deferoxamine and ciclopirox, reversed pyroptosis and apoptosis. Moreover, CUMS aggravated neurological dysfunctions (motor function, cognitive function, and anxiety-like behavior) and exacerbated brain lesion volume. CUMS also exacerbated ferroptosis, pyroptosis, and apoptosis by intensifying iron deposition, along with decreasing the expression of neuronal brain-derived neurotrophic factor and glucocorticoid receptor in the neocortex post mTBI. These effects were also mitigated by iron chelators. Our findings suggest that alleviating ferroptosis induced by iron deposition may represent a promising therapeutic approach for mTBI patients who have experienced stressful events.
</description>
<pubDate>Thu, 10 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159283</guid>
<dc:date>2024-10-10T00:00:00Z</dc:date>
</item>
<item>
<title>Early Burst Suppression Similarity Association with Structural Brain Injury Severity on MRI After Cardiac Arrest</title>
<link>https://hdl.handle.net/1721.1/159282</link>
<description>Early Burst Suppression Similarity Association with Structural Brain Injury Severity on MRI After Cardiac Arrest
Shivdat, Shawn; Zhan, Tiange; De Palma, Alessandro; Zheng, Wei-Long; Krishnamurthy, Parimala; Paneerselvam, Ezhil; Snider, Samuel; Bevers, Matthew; O’Reilly, Una-May; Lee, Jong W.; Westover, M. B.; Amorim, Edilberto
Background Identical bursts on electroencephalography (EEG) are considered a specific predictor of poor outcomes in cardiac arrest, but their relationship with structural brain injury severity on magnetic resonance imaging (MRI) is not known. Methods This was a retrospective analysis of clinical, EEG, and MRI data from adult comatose patients after cardiac arrest. Burst similarity in the first 72 h from the time of return of spontaneous circulation was calculated using dynamic time-warping (DTW) for bursts of equal (i.e., 500 ms) and varying (i.e., 100–500 ms) lengths and cross-correlation for bursts of equal lengths. Structural brain injury severity was measured using whole brain mean apparent diffusion coefficient (ADC) on MRI. Pearson’s correlation coefficients were calculated between mean burst similarity across consecutive 12–24-h time blocks and mean whole brain ADC values. Good outcome was defined as Cerebral Performance Category of 1–2 (i.e., independence for activities of daily living) at the time of hospital discharge. Results Of 113 patients with cardiac arrest, 45 patients had burst suppression (mean cardiac arrest to MRI time 4.3 days). Three study participants with burst suppression had a good outcome. Burst similarity calculated using DTW with bursts of varying lengths was correlated with mean ADC value in the first 36 h after cardiac arrest: Pearson’s r: 0–12 h: − 0.69 (p = 0.039), 12–24 h: − 0.54 (p = 0.002), 24–36 h: − 0.41 (p = 0.049). Burst similarity measured with bursts of equal lengths was not associated with mean ADC value with cross-correlation or DTW, except for DTW at 60–72 h (− 0.96, p = 0.04). Conclusions Burst similarity on EEG after cardiac arrest may be associated with acute brain injury severity on MRI. This association was time dependent when measured using DTW.
</description>
<pubDate>Wed, 24 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159282</guid>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Notes as Narratives: Implications for Large Language Models in Healthcare</title>
<link>https://hdl.handle.net/1721.1/159281</link>
<description>Clinical Notes as Narratives: Implications for Large Language Models in Healthcare
Brender, Teva D.; Celi, Leo A.; Cobert, Julien M.
OpenAI’s ChatGPT sparked tremendous excitement regarding potential healthcare applications of large language models (LLM). LLMs trained on electronic health record (EHR) notes could enrich the feature space for many tasks including risk prediction, data classification (e.g., identifying protected health information), augmented documentation, and patient communication. Crucially, LLMs will learn not only from objective clinical data, but also from patient narratives—subjective texts authored by human clinicians, who may be sources of bias. In recognizing clinical notes as clinical narratives, and clinicians as narrators, we gain important insights into potential downstream implications of training LLMs on EHRs. Here we argue that a richer understanding of notes’ narrative elements, informed by principles from the field of narratology, could facilitate the development of LLMs that are more conscious of bias and enable the delivery of high-quality, human-centered care.
</description>
<pubDate>Fri, 04 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159281</guid>
<dc:date>2024-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>Strengthened MIP formulations for the liver region redesign models of Akshat et al.</title>
<link>https://hdl.handle.net/1721.1/159280</link>
<description>Strengthened MIP formulations for the liver region redesign models of Akshat et al.
Karagoz, Aysenur; Liu, Ruofeng; Validi, Hamidreza
Liver transplantation has been a critical issue in the U.S. healthcare system for decades, and the region redesign aims to ameliorate this issue. This paper revisits two mixed integer programming (MIP) formulations of the liver region redesign problem proposed by Akshat et al. [1]. We study their first formulation considering two different modeling approaches: one compact formulation and one with exponentially many constraints. We also propose a set of variable fixing procedures and conduct a polyhedral study on their second formulation. Our computational results show that multiple unsolved instances are solved to optimality.
</description>
<pubDate>Sat, 28 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159280</guid>
<dc:date>2024-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>Mesoscale simulation of the compression and small-strain elastic shear behavior of illite nanoparticle assemblies</title>
<link>https://hdl.handle.net/1721.1/159279</link>
<description>Mesoscale simulation of the compression and small-strain elastic shear behavior of illite nanoparticle assemblies
Zhu, Hejian; Whittle, Andrew J.; Pellenq, Roland J.
The mechanical properties of clay minerals are largely dependent upon the chemical compositions and the mesoscale fabrics of the constituent particles. This paper describes results of a series of mesoscale molecular dynamics simulations of the hydrostatic compression and shear strain behavior for initially randomly oriented assemblies of 10³ illite primary particles. The particles are simulated as rigid-body ellipsoids that interact through the single-site, Gay–Berne potential function. This corresponds to a coarse-grained model based on prior atomistic scale computation of the potential of mean force for water-mediated interactions between pairs of particles through the free energy perturbation method. We investigate the mesoscale fabrics of the NPT-equilibrated assemblies for confining pressures ranging from 1.0 to 125 atm, including path dependence associated with unloading and reloading. We analyze and quantify the geometric arrangement including particle orientation, specific surface area, properties of particle stacks/aggregates, and interstack pair correlation functions. The compression of each particle assembly is associated with large irrecoverable changes in void ratio, while unloading and reloading involves much smaller, largely recoverable volumetric strains. The results are qualitatively similar to macroscopic compression behavior reported in laboratory tests. We simulate the uniaxial and shear behavior at each of the equilibrated pressure states through a series of strain-controlled steps, allowing full relaxation of the virial stresses computed at each step. The simulations investigate directional and path dependence of the shear behavior for strain deviations up to 0.2%. The results show the onset of nonlinear stiffness properties at strain levels ∼ 0.01% and hysteretic behavior upon unloading and reloading. 
Small-strain stiffness properties of the particle assemblies are qualitatively in good agreement with quasi-static, elastic stiffness properties reported for illitic clays.
</description>
<pubDate>Fri, 17 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159279</guid>
<dc:date>2025-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>VESCL: an open source 2D vessel contouring library</title>
<link>https://hdl.handle.net/1721.1/159278</link>
<description>VESCL: an open source 2D vessel contouring library
Frisken, S. F.; Haouchine, N.; Chlorogiannis, D. D.; Gopalakrishnan, V.; Cafaro, A.; Wells, W. T.; Golby, A. J.; Du, R.
Purpose VESCL (pronounced ‘vessel’) is a novel vessel contouring library for computer-assisted 2D vessel contouring and segmentation. VESCL facilitates manual vessel segmentation in 2D medical images to generate gold-standard datasets for training, testing, and validating automatic vessel segmentation. Methods VESCL is an open-source C++ library designed for easy integration into medical image processing systems. VESCL provides an intuitive interface for drawing variable-width parametric curves along vessels in 2D images. It includes highly optimized localized filtering to automatically fit drawn curves to the nearest vessel centerline and automatically determine the varying vessel width along each curve. To support a variety of segmentation paradigms, VESCL can export multiple segmentation representations including binary segmentations, occupancy maps, and distance fields. Results VESCL provides sub-pixel resolution for vessel centerlines and vessel widths. It is optimized to segment small vessels with single- or sub-pixel widths that are visible to the human eye but hard to segment automatically via conventional filters. When tested on neurovascular digital subtraction angiography (DSA), VESCL’s intuitive hand-drawn input with automatic curve fitting increased the speed of fully manual segmentation by 22× over conventional methods and by 3× over the best publicly available computer-assisted manual segmentation method. Accuracy was shown to be within the range of inter-operator variability of gold standard manually segmented data from a publicly available dataset of neurovascular DSA images as measured using Dice scores. Preliminary tests showed similar improvements for segmenting DSA of coronary arteries and RGB images of retinal arteries. 
Conclusion VESCL is an open-source C++ library for contouring vessels in 2D images which can be used to reduce the tedious, labor-intensive process of manually generating gold-standard segmentations for training, testing, and comparing automatic segmentation methods.
</description>
<pubDate>Sun, 16 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159278</guid>
<dc:date>2024-06-16T00:00:00Z</dc:date>
</item>
<item>
<title>Remanufacturing and Energy Savings</title>
<link>https://hdl.handle.net/1721.1/159277</link>
<description>Remanufacturing and Energy Savings
Gutowski, Timothy G; Sahni, Sahil; Boustani, Avid; Graves, Stephen C
Remanufactured products that can substitute for new products are generally claimed to save energy. These claims are made from studies that look mainly at the differences in materials production and manufacturing. However, when the use phase is included, the situation can change radically. In this Article, 25 case studies for eight different product categories were studied, including: (1) furniture, (2) clothing, (3) computers, (4) electric motors, (5) tires, (6) appliances, (7) engines, and (8) toner cartridges. For most of these products, the use phase energy dominates that for materials production and manufacturing combined. As a result, small changes in use phase efficiency can overwhelm the claimed savings from materials production and manufacturing. These use phase energy changes are primarily due to efficiency improvements in new products, and efficiency degradation in remanufactured products. For those products with no, or an unchanging, use phase energy requirement, remanufacturing can save energy. For the 25 cases, we found that 8 cases clearly saved energy, 6 did not, and 11 were too close to call. In some cases, we could examine how the energy savings potential of remanufacturing has changed over time. Specifically, during times of significant improvements in energy efficiency, remanufacturing would often not save energy. A general design trend seems to be to add power to a previously unpowered product, and then to improve on the energy efficiency of the product over time. These trends tend to undermine the energy savings potential of remanufacturing.
</description>
<pubDate>Sun, 15 May 2011 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159277</guid>
<dc:date>2011-05-15T00:00:00Z</dc:date>
</item>
<item>
<title>An updated version of the global interior ocean biogeochemical data product, GLODAPv2.2020</title>
<link>https://hdl.handle.net/1721.1/159276</link>
<description>An updated version of the global interior ocean biogeochemical data product, GLODAPv2.2020
Olsen, Are; Lange, Nico; Key, Robert M.; Tanhua, Toste; Bittig, Henry C.; Kozyr, Alex; Álvarez, Marta; Azetsu-Scott, Kumiko; Becker, Susan; Brown, Peter J.; Carter, Brendan R.; Cotrim da Cunha, Leticia; Feely, Richard A.; van Heuven, Steven; Hoppema, Mario; Ishii, Masao; Jeansson, Emil; Jutterström, Sara; Landa, Camilla S.; Lauvset, Siv K.; Michaelis, Patrick; Murata, Akihiko; Pérez, Fiz F.; Pfeil, Benjamin; Schirnick, Carsten; Steinfeldt, Reiner; Suzuki, Toru; Tilbrook, Bronte; Velo, Anton; Wanninkhof, Rik; Woosley, Ryan J.
The Global Ocean Data Analysis Project (GLODAP) is a synthesis effort providing regular compilations of surface-to-bottom ocean biogeochemical data, with an emphasis on seawater inorganic carbon chemistry and related variables determined through chemical analysis of seawater samples. GLODAPv2.2020 is an update of the previous version, GLODAPv2.2019. The major changes are the addition of data from 106 new cruises, the extension of time coverage to 2019, and the inclusion of available discrete fugacity of CO2 (fCO2) values (also for historical cruises) in the merged product files. GLODAPv2.2020 now includes measurements from more than 1.2 million water samples from the global oceans collected on 946 cruises. The data for the 12 GLODAP core variables (salinity, oxygen, nitrate, silicate, phosphate, dissolved inorganic carbon, total alkalinity, pH, CFC-11, CFC-12, CFC-113, and CCl4) have undergone extensive quality control with a focus on systematic evaluation of bias. The data are available in two formats: (i) as submitted by the data originator but updated to WOCE exchange format and (ii) as a merged data product with adjustments applied to minimize bias. These adjustments were derived by comparing the data from the 106 new cruises with the data from the 840 quality-controlled cruises of the GLODAPv2.2019 data product using crossover analysis. Comparisons to empirical algorithm estimates provided additional context for adjustment decisions; this is new to this version. The adjustments are intended to remove potential biases from errors related to measurement, calibration, and data-handling practices without removing known or likely time trends or variations in the variables evaluated. 
The compiled and adjusted data product is believed to be consistent to better than 0.005 in salinity, 1 % in oxygen, 2 % in nitrate, 2 % in silicate, 2 % in phosphate, 4 µmol kg−1 in dissolved inorganic carbon, 4 µmol kg−1 in total alkalinity, 0.01–0.02 in pH (depending on region), and 5 % in the halogenated transient tracers. The other variables included in the compilation, such as isotopic tracers and discrete fCO2, were not subjected to bias comparison or adjustments.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159276</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Don’t quit the long game</title>
<link>https://hdl.handle.net/1721.1/159274</link>
<description>Don’t quit the long game
Raff-Heinen, Stefan; Murray, Fiona E.
Living cells that produce biofuel; robots that assist factory workers; intelligent machines that guide drug discovery—these technologies are “deep” in that they achieve something extraordinary—often thought impossible—and push society forward. Indeed, so-called “deep tech” powers the future of medical breakthroughs, resilient energy grids, and clean industrial processes, among other frontiers. But deep tech requires more of everything to become a reality—research and development, specialized talent, time, risk-taking, and funding. The US government has been the world’s largest investor in this enterprise. Yet cuts to federal support for deep tech threaten this entrepreneurial engine at its source—university labs. Without sustained federal support, the country risks losing its technological edge, threatening economic competitiveness and national security.
</description>
<pubDate>Fri, 04 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159274</guid>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>History of chemically and radiatively important atmospheric gases from the Advanced Global Atmospheric Gases Experiment (AGAGE)</title>
<link>https://hdl.handle.net/1721.1/159272</link>
<description>History of chemically and radiatively important atmospheric gases from the Advanced Global Atmospheric Gases Experiment (AGAGE)
Prinn, Ronald G.; Weiss, Ray F.; Arduini, Jgor; Arnold, Tim; DeWitt, H. Langley; Fraser, Paul J.; Ganesan, Anita L.; Gasore, Jimmy; Harth, Christina M.; Hermansen, Ove; Kim, Jooil; Krummel, Paul B.; Li, Shanlan; Loh, Zoë M.; Lunder, Chris R.; Maione, Michela; Manning, Alistair J.; Miller, Ben R.; Mitrevski, Blagoj; Mühle, Jens; O’Doherty, Simon; Park, Sunyoung; Reimann, Stefan; Rigby, Matt; Saito, Takuya; Salameh, Peter K.; Schmidt, Roland; Simmonds, Peter G.; Steele, L. Paul; Vollmer, Martin K.; Wang, Ray H.; Yao, Bo; Yokouchi, Yoko; Young, Dickon; Zhou, Lingxi
We present the organization, instrumentation, datasets, data interpretation, modeling, and accomplishments of the multinational global atmospheric measurement program AGAGE (Advanced Global Atmospheric Gases Experiment). AGAGE is distinguished by its capability to measure globally, at high frequency, and at multiple sites all the important species in the Montreal Protocol and all the important non-carbon-dioxide (non-CO2) gases assessed by the Intergovernmental Panel on Climate Change (CO2 is also measured at several sites). The scientific objectives of AGAGE are important in furthering our understanding of global chemical and climatic phenomena. They are the following: (1) to accurately measure the temporal and spatial distributions of anthropogenic gases that contribute the majority of reactive halogen to the stratosphere and/or are strong infrared absorbers (chlorocarbons, chlorofluorocarbons – CFCs, bromocarbons, hydrochlorofluorocarbons – HCFCs, hydrofluorocarbons – HFCs and polyfluorinated compounds (perfluorocarbons – PFCs), nitrogen trifluoride – NF3, sulfuryl fluoride – SO2F2, and sulfur hexafluoride – SF6) and use these measurements to determine the global rates of their emission and/or destruction (i.e., lifetimes); (2) to accurately measure the global distributions and temporal behaviors and determine the sources and sinks of non-CO2 biogenic–anthropogenic gases important to climate change and/or ozone depletion (methane – CH4, nitrous oxide – N2O, carbon monoxide – CO, molecular hydrogen – H2, methyl chloride – CH3Cl, and methyl bromide – CH3Br); (3) to identify new long-lived greenhouse and ozone-depleting gases (e.g., SO2F2, NF3, heavy PFCs (C4F10, C5F12, C6F14, C7F16, and C8F18) and hydrofluoroolefins (HFOs; e.g., CH2 = CFCF3) have been identified in AGAGE), initiate the real-time monitoring of these new gases, and reconstruct their past histories from AGAGE, air archive, and firn air measurements; (4) to determine the average concentrations and 
trends of tropospheric hydroxyl radicals (OH) from the rates of destruction of atmospheric trichloroethane (CH3CCl3), HFCs, and HCFCs and estimates of their emissions; (5) to determine from atmospheric observations and estimates of their destruction rates the magnitudes and distributions by region of surface sources and sinks of all measured gases; (6) to provide accurate data on the global accumulation of many of these trace gases that are used to test the synoptic-, regional-, and global-scale circulations predicted by three-dimensional models; and (7) to provide global and regional measurements of methane, carbon monoxide, and molecular hydrogen and estimates of hydroxyl levels to test primary atmospheric oxidation pathways at midlatitudes and the tropics.
</description>
<pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159272</guid>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Does the Earth Have an Adaptive Infrared Iris?</title>
<link>https://hdl.handle.net/1721.1/159271</link>
<description>Does the Earth Have an Adaptive Infrared Iris?
Lindzen, Richard S; Chou, Ming-Dah; Hou, Arthur Y
Observations and analyses of water vapor and clouds in the Tropics over the past decade show that the boundary between regions of high and low free-tropospheric relative humidity is sharp, and that upper-level cirrus and high free-tropospheric relative humidity tend to coincide. Most current studies of atmospheric climate feedbacks have focused on such quantities as clear sky humidity, average humidity, or differences between regions of high and low humidity, but the data suggest that another possible feedback might consist of changes in the relative areas of high and low humidity and cloudiness. Motivated by the observed relation between cloudiness (above the trade wind boundary layer) and high humidity, cloud data for the eastern part of the western Pacific from the Japanese Geostationary Meteorological Satellite-5 (which provides high spatial and temporal resolution) have been analyzed, and it has been found that the area of cirrus cloud coverage normalized by a measure of the area of cumulus coverage decreases about 22% per degree Celsius increase in the surface temperature of the cloudy region. A number of possible interpretations of this result are examined and a plausible one is found to be that cirrus detrainment from cumulus convection diminishes with increasing temperature. The implications of such an effect for climate are examined using a simple two-dimensional radiative–convective model. The calculations show that such a change in the Tropics could lead to a negative feedback in the global climate, with a feedback factor of about −1.1, which, if correct, would more than cancel all the positive feedbacks in the more sensitive current climate models. Even if regions of high humidity were not coupled to cloudiness, the feedback factor due to the clouds alone would still amount to about −0.45, which would cancel model water vapor feedback in almost all models. 
This new mechanism would, in effect, constitute an adaptive infrared iris that opens and closes in order to control the Outgoing Longwave Radiation in response to changes in surface temperature in a manner similar to the way in which an eye's iris opens and closes in response to changing light levels. Not surprisingly, for upper-level clouds, their infrared effect dominates their shortwave effect. Preliminary attempts to replicate observations with GCMs suggest that models lack such a negative cloud/moist areal feedback.
</description>
<pubDate>Thu, 01 Mar 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159271</guid>
<dc:date>2001-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid real options valuation of risky product development projects</title>
<link>https://hdl.handle.net/1721.1/159270</link>
<description>Hybrid real options valuation of risky product development projects
Neely, James E, III; de Neufville, Richard
Managers and designers of technological systems face a common difficulty; new projects or products are inherently risky, both technologically and financially, especially given the rate of change in the high technology, deregulated economy. Consequently, they need solid methods for valuing prospective investments, so that they can justify their development strategies. Their fundamental problem is compounded by two methodological difficulties: (a) traditional net present value (discounted cash flow) evaluations are inadequate for many risky projects, and (b) the available methods for valuing these projects are limited and often impractical. This paper identifies practical solutions to this problem. Conceptually, it is crucial to focus on dynamic strategies of development, rather than on specific projects or products. Planners need to understand that they are consciously managing risk, and will do so most effectively by developing options they can exploit or abandon depending on future events. Methodologically, it is useful to combine the best of the alternative approaches to valuing risky projects, to achieve a practical and effective means of valuation. Hybrid real options valuation combines the best features of decision and options analysis. The paper describes this new approach and illustrates it with an application to a portfolio of technological developments of a major automobile company. The example demonstrates the effectiveness of the new method. Real options valuation has the further advantage that it rightfully increases the assessed value of risky projects, once we see them as options that can be abandoned in the context of a long-term development strategy. This increase is greatest for projects that are particularly risky or expensive to implement over time.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159270</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physical restoration of a painting with a digitally-constructed mask</title>
<link>https://hdl.handle.net/1721.1/159269</link>
<description>Physical restoration of a painting with a digitally-constructed mask
Kachkine, Alex
Conservation of damaged oil paintings requires manual inpainting of losses, leading to months-long treatments of considerable expense: 70% of paintings in institutional collections are locked away from public view in part due to treatment cost. Recent advancements in digital image reconstruction have helped envision treatment results, though without any direct means of achieving them. This study demonstrates the first physically-applied digital restoration of a painting, a highly-damaged oil-on-panel attributed to the Master of the Prado Adoration (MPA) from the late 15th century. In parallel, 5,612 losses spanning 66,205 mm2 and 57,314 colors are infilled with a reversible laminate mask comprising a color-accurate bilayer of printed pigments on polymeric films. To ensure restoration effectiveness, ethical principles in paintings conservation are implemented quantitatively for digital mask construction, a critically-important foundation lacking in current digital restoration literature. The infill process takes 3.5 hours, an estimated 66 times faster than conventional inpainting, with the result closely matching simulation. This approach grants unprecedented foresight and flexibility to conservators, enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.
</description>
<pubDate>Wed, 11 Jun 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159269</guid>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</item>
<item>
<title>Multiphysics discovery with moving boundaries using Ensemble SINDy and peridynamic differential operator</title>
<link>https://hdl.handle.net/1721.1/159268</link>
<description>Multiphysics discovery with moving boundaries using Ensemble SINDy and peridynamic differential operator
Bekar, Ali C.; Haghighat, Ehsan; Madenci, Erdogan
This study proposes a novel framework for learning the underlying physics of phenomena with moving boundaries. The proposed approach combines Ensemble SINDy and the Peridynamic Differential Operator (PDDO) and imposes an inductive bias assuming the moving boundary physics evolves in its own corotational coordinate system. The robustness of the approach is demonstrated by considering various levels of noise in the measured data using the 2D Fisher–Stefan model. The confidence intervals of recovered coefficients are listed, and the uncertainties of the moving boundary positions are depicted by obtaining the solutions with the recovered coefficients. Although the main focus of this study is the Fisher–Stefan model, the proposed approach is applicable to any type of moving boundary problem with a smooth moving boundary front without an intermediate zone of two states. The code and data for this framework are available at: https://github.com/alicanbekar/MB_PDDO-SINDy.
</description>
<pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159268</guid>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Adoption of IPSAS 17 From the Perspective of the Accounting Bodies of the Brazilian Federal Executive Branch</title>
<link>https://hdl.handle.net/1721.1/159267</link>
<description>Adoption of IPSAS 17 From the Perspective of the Accounting Bodies of the Brazilian Federal Executive Branch
Barbosa, Valdenês P.; Macagnan, Clea B.; Silveira Thys Mutti, Cláudia
This study analyzed the perspective of accounting professionals from agencies linked to the Brazilian Federal Executive Branch on the informal institutionalization of the International Public Sector Accounting Standards 17 (IPSAS 17). We assumed that the formal institutionalization of IPSAS 17 was not enough for its implementation. Practices such as informal institutionalization would help implement IPSAS. With the formulation of five hypotheses, we applied a questionnaire to accounting professionals from the Brazilian Federal Executive Branch, which was answered by 72.56% of them. We confirmed the hypothesis that the respondents consider the practices of IPSAS 17 advantageous, with benefits outweighing costs.
</description>
<pubDate>Sat, 28 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159267</guid>
<dc:date>2024-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>Transcriptomic insights into methanol utilization in Pichia pastoris lacking AOX genes under co-feeding conditions</title>
<link>https://hdl.handle.net/1721.1/159262</link>
<description>Transcriptomic insights into methanol utilization in Pichia pastoris lacking AOX genes under co-feeding conditions
Zheng, Xueyun; Ye, Zhifang; Gao, Jiao; Hao, Yuechuo; Li, Cheng; Xie, Hongsen; Lin, Ying; Liang, Shuli
The methylotrophic yeast Pichia pastoris (P. pastoris) exhibits remarkable capability for methanol-driven protein biosynthesis, positioning it as an attractive platform for carbon-neutral biomanufacturing utilizing methanol as a renewable feedstock. However, challenges arising from methanol metabolism, particularly the accumulation of toxic formaldehyde intermediates, significantly hinder efficient methanol biotransformation. To address this limitation, we implemented a metabolic engineering strategy involving dual knockout of alcohol oxidase genes (aox1 and aox2) combined with glycerol co-substrate supplementation. Using enhanced green fluorescent protein (EGFP) as a model heterologous product, we demonstrated that the ΔAOX1/2 strain achieved superior protein productivity in glycerol-methanol co-feeding cultures. Under optimized conditions (0.5% methanol + 0.4% glycerol), the engineered strain attained a biomass density of 38.5 (OD600) and EGFP fluorescence intensity of 494,723 units, representing improvements of 32.8% and 53.6%, respectively, compared to the wild-type (WT) strain cultivated with 1% methanol alone. Transcriptome profiling revealed that the observed enhancement in protein synthesis originated from optimized methanol utilization through coordinated upregulation of both assimilatory and dissimilatory metabolic modules. This study demonstrates that alcohol oxidase suppression coupled with glycerol co-metabolism constitutes an effective strategy to alleviate methanol-derived metabolic stress while enhancing heterologous protein yields in P. pastoris.
</description>
<pubDate>Fri, 09 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159262</guid>
<dc:date>2025-05-09T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the formation mechanisms of double neutron star systems: an analytical perspective</title>
<link>https://hdl.handle.net/1721.1/159261</link>
<description>Exploring the formation mechanisms of double neutron star systems: an analytical perspective
Taani, Ali; Abu-Saleem, Mohammed; Mardini, Mohammad; Aljboor, Hussam; Tayem, Mohammad
Double Neutron Stars (DNSs) are unique probes to study various aspects of modern astrophysics. Recent discoveries have confirmed direct connections between DNSs and supernova explosions. This provides valuable information about the evolutionary history of these systems, especially regarding whether the second-born Neutron Star (NS) originated from either a Core-Collapse (CC) or Electron-Capture Supernova (ECSNe) event. The provided scale diagram illustrates the distribution of different types of DNSs on the basis of their orbital parameters and other factors, including mass loss. As a result, the physical processes in DNSs vary depending on the formation mechanisms of the second-born NS and characteristics of the systems. ECSNe processes are typically associated with merging systems (e × Porb &lt; 0.05), while CC processes are more commonly linked to non-merging systems (e × Porb &gt; 0.05). Our results suggest a critical mass threshold of 1.30 ± 0.22 M⊙ for the ECSNe process to form an NS, while CC processes might occur at higher masses. Examining the orbital parameters of DNSs in a known gravitational potential can enhance our understanding of the theoretical predictions for DNS progenitor characteristics. It turns out that the ECSNe process predominantly produces DNS systems with short orbital periods (Porb ≤ 0.25 d) and nearly circular orbits (e ≃ 0.2), accompanied by minimal kick velocities imparted on the proto-NS and significant mass loss. In contrast, their orbital dynamics in a known gravitational potential plays a crucial role in enhancing our understanding of the SNe geometry and the formation and evolution processes among different NS samples.
</description>
<pubDate>Thu, 08 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159261</guid>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</item>
<item>
<title>Research on the Influence of the Stroke of the Pressing-Up Cylinder of Rolling Mills on Vibration Characteristics</title>
<link>https://hdl.handle.net/1721.1/159260</link>
<description>Research on the Influence of the Stroke of the Pressing-Up Cylinder of Rolling Mills on Vibration Characteristics
Weiquan, Sun; Xiaoqiang, Yan; Shen, Wang; Chenggang, Wang; Xingdou, Jia; Yujie, Liu
Purpose: In order to analyze the on-site phenomenon that the 20-high rolling mill is prone to vibration when the diameter of the roll system is small, the influence of the elongation of the piston rod and the height of the hydraulic oil column on the vibration characteristics of the 20-high rolling mill was analyzed. Methods: Two simulation limit states are designed: the first uses the maximum diameter of the working roll, the first intermediate roll, and the second intermediate roll, and the second uses the smallest diameter of those same rolls. The dynamic vibration characteristics of the whole machine were analyzed under both conditions, considering the elongation of the piston rod of the pressing-up cylinder alone, and then the combination of that elongation with the height of the hydraulic oil column. Results: When the elongation of the piston rod of the cylinder is considered alone, the vibration frequency content of the rolling mill is rich, but the vibration amplitude is low. When the elongation of the piston rod of the pressing-up cylinder is combined with the height of the hydraulic oil column, experimental observations indicate that the rolling mill exhibits a reduced vibration amplitude under the first limit state compared to the second. However, analyses further reveal that both limit states demonstrate a lower frequency concentration and diminished high-frequency vibration amplitudes. Conclusions: The research results provide a theoretical reference for further analysis of the influence of hydraulic oil column flow characteristics on rolling mill vibration.
</description>
<pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159260</guid>
<dc:date>2025-05-05T00:00:00Z</dc:date>
</item>
<item>
<title>A Proposed Algorithm for Moisture Fluxes from Atmospheric Rivers</title>
<link>https://hdl.handle.net/1721.1/159259</link>
<description>A Proposed Algorithm for Moisture Fluxes from Atmospheric Rivers
Zhu, Yong; Newell, Reginald E.
A new algorithm is applied to study water vapor fluxes in the troposphere using wind and moisture data from the European Centre for Medium-Range Weather Forecasts. The fluxes are divided into filamentary structures known as tropospheric rivers and what are termed here broad fields. The results show that the tropospheric rivers may carry essentially the total meridional transport observed in the extratropical atmosphere but may occupy only about 10% of the total longitudinal length at a given latitude. The transient fluxes in traditional studies do not catch the filamentary structures completely and may therefore underestimate the fraction of transport assigned to moving systems, as well as omitting the geographical concentration. The mean flow and eddy fluxes evaluated by the new algorithm are considered to be more physically realistic.
</description>
<pubDate>Sun, 01 Mar 1998 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159259</guid>
<dc:date>1998-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>History versus Expectations</title>
<link>https://hdl.handle.net/1721.1/159258</link>
<description>History versus Expectations
Krugman, Paul
In models with external economies, there are often two or more long-run equilibria. Which equilibrium is chosen? Much of the literature presumes that “history” sets initial conditions that determine the outcome, but an alternative view stresses the role of “expectations,” i.e., of self-fulfilling prophecy. This paper uses a simple trade model with both external economies and adjustment costs to show how the parameters of the economy determine the relative importance of history and expectations in determining equilibrium.
</description>
<pubDate>Wed, 01 May 1991 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159258</guid>
<dc:date>1991-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precise Delivery of Physiological Doses of Melatonin in Planta to Control Postharvest Physiology and Extend Shelf Life Outside the Cold Chain</title>
<link>https://hdl.handle.net/1721.1/159257</link>
<description>Precise Delivery of Physiological Doses of Melatonin in Planta to Control Postharvest Physiology and Extend Shelf Life Outside the Cold Chain
Han, Yangyang; Jangir, Monika; Ngoh, Amanda Si Yi; Li, Chunhong; Sarangapani, Sreelatha; Cao, Yunteng; Zhang, Yilin; Cheerlavancha, Raju; Sarojam, Rajani; Marelli, Benedetto
Postharvest management of leafy vegetables requires refrigeration to control their rapid deterioration and loss, which accounts for ∼30% of total food waste. Here, silk microneedles with a length of 700 μm were used to deliver physiological doses of melatonin (approximately 22 μg) in the leafy vegetable Pak choy (Brassica rapa subsp. chinensis) and successfully extended the shelf life of the harvested crop by 4 and 10 days at room temperature (25 °C) and under refrigeration (4 °C), respectively, compared to nontreated control plants. The exogenous dose of melatonin did not alter the natural concentration of the hormone in the harvested plants. Transcriptome analysis showed that melatonin regulates senescence by modulating auxin synthesis, the antioxidant system, and chlorophyll degradation. Overall, silk microneedles can be used to precisely deliver, into harvested products, compounds that extend their shelf life outside the cold chain.
</description>
<pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159257</guid>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</item>
<item>
<title>Elderly Bodily Assistance Robot (E-BAR): A Robot System for Body-Weight Support, Ambulation Assistance, and Fall Catching, Without the Use of a Harness</title>
<link>https://hdl.handle.net/1721.1/159256</link>
<description>Elderly Bodily Assistance Robot (E-BAR): A Robot System for Body-Weight Support, Ambulation Assistance, and Fall Catching, Without the Use of a Harness
Bolli, Roberto; Asada, Harry
As over 11,000 people turn 65 each day in the U.S., our country, like many others, is facing growing challenges in caring for elderly persons, further exacerbated by a major shortfall of care workers. To address this, we introduce an eldercare robot (E-BAR) capable of lifting a human body, assisting with postural changes/ambulation, and catching a user during a fall, all without the use of any wearable device or harness. Our robot is the first to integrate these 3 tasks, and is capable of lifting the full weight of a human outside of the robot’s base of support (across gaps and obstacles). In developing E-BAR, we interviewed nurses and care professionals and conducted user-experience tests with elderly persons. Based on their functional requirements, the design parameters were optimized using a computational model and trade-off analysis. We developed a novel 18-bar linkage to lift a person from a floor to a standing position along a natural trajectory, while providing maximal mechanical advantage at key points. An omnidirectional, nonholonomic drive base, in which the wheels could be oriented to passively maximize floor grip, enabled the robot to resist lateral forces without active compensation. With a minimum width of 38 cm, the robot’s small footprint allowed it to navigate the typical home environment. Four airbags were used to catch and stabilize a user during a fall in ≤ 250 ms. We demonstrate E-BAR’s utility in multiple typical home scenarios, including getting into/out of a bathtub, bending to reach for objects, sit-to-stand transitions, and ambulation.
2025 IEEE International Conference on Robotics &amp; Automation, 19–23 May, Atlanta, USA
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159256</guid>
</item>
<item>
<title>A Sublinear Algorithm for Approximate Shortest Paths in Large Networks</title>
<link>https://hdl.handle.net/1721.1/159255</link>
<description>A Sublinear Algorithm for Approximate Shortest Paths in Large Networks
Basu, Sabyasachi; Eden, Talya; Ben-Eliezer, Omri; Seshadhri, C.; Koshima, Nadia
Computing distances and finding shortest paths in massive real-world networks is a fundamental algorithmic task in network analysis. There are two main approaches to solving this task. On one end are traversal-based algorithms like bidirectional breadth-first search (BiBFS), which have no preprocessing step but are slow on individual distance inquiries. On the other end are indexing-based approaches, which create and maintain a large index. This allows for answering individual inquiries very fast; however, index creation is prohibitively expensive. We seek to bridge these two extremes: quickly answer distance inquiries without the need for costly preprocessing.&#13;
We propose a new algorithm and data structure, WormHole, for approximate shortest path computations. WormHole leverages structural properties of social networks to build a sublinearly sized index, drawing upon the core-periphery decomposition of Ben-Eliezer et al. [10]. Empirically, WormHole's preprocessing time improves upon index-based solutions by orders of magnitude: indexing billion-edge graphs takes only a few minutes. Query-time performance is consistently much faster than with BiBFS. The acceleration comes at the cost of a minor accuracy trade-off. We complement these empirical results with provable theoretical guarantees.
WSDM ’25, March 10–14, 2025, Hannover, Germany
</description>
<pubDate>Mon, 10 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159255</guid>
<dc:date>2025-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>Be the Beat: AI-Powered Boombox for Music Suggestion from Freestyle Dance</title>
<link>https://hdl.handle.net/1721.1/159254</link>
<description>Be the Beat: AI-Powered Boombox for Music Suggestion from Freestyle Dance
Chang, Ethan; Chen, Zhixing; Labrune, Jb; Coelho, Marcelo
Dance has traditionally been guided by music throughout history and across cultures, yet the concept of dancing to create music is rarely explored. In this paper, we introduce Be the Beat, an AI-powered boombox designed to suggest music from a dancer’s movement. Be the Beat uses PoseNet to describe movements for a large language model, enabling it to analyze dance style and query APIs to find music with similar style, energy, and tempo. In our pilot trials, the boombox successfully matched music to the tempo of the dancer’s movements and even distinguished the intricacies between house and Hip-Hop moves. Dancers interacting with the boombox reported having more control over artistic expression and described the boombox as a novel approach to discovering dance genres and choreographing creatively. Be the Beat creates a space for human-AI collaboration on freestyle dance, empowering dancers to rethink the traditional dynamic between dance and music.
TEI ’25, March 04–07, 2025, Bordeaux / Talence, France
</description>
<pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159254</guid>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Colorimia: Color Picking and Image Generation with a Physical AI</title>
<link>https://hdl.handle.net/1721.1/159253</link>
<description>Colorimia: Color Picking and Image Generation with a Physical AI
Yan, Ziwen; Liu, Xuanxuan; Labrune, Jb; Coelho, Marcelo
In an era where digital interaction often overshadows our engagement with the physical world, Colorimia is a physical AI tool for color picking and image generation. By combining a color sensor and a text-to-image diffusion model, users can capture colors from the environment, which are used to generate unique images based on a chosen theme. In this paper, we detail Colorimia’s design process, its functionality, and an exploratory user study involving individuals with a variety of experiences with artificial intelligent systems. Preliminary findings suggest that creating with colors from their environment encouraged users to engage more actively with their surroundings, and inspired creative exploration with everyday encounters. In future research, we will explore the device’s potential across broader application and user testing contexts to better understand and enhance its impact on user creativity and practical use.
TEI ’25, March 04–07, 2025, Bordeaux / Talence, France
</description>
<pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159253</guid>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs</title>
<link>https://hdl.handle.net/1721.1/159252</link>
<description>DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs
Yao, Xiaozhe; Hu, Qinghao; Klimovic, Ana
Fine-tuning large language models (LLMs) greatly improves model quality for downstream tasks. However, serving many fine-tuned LLMs concurrently is challenging due to the sporadic, bursty, and varying request patterns of different LLMs. To bridge this gap, we present DeltaZip, an LLM serving system that efficiently serves multiple full-parameter fine-tuned models concurrently by aggressively compressing model deltas by up to 10× while maintaining high model quality. The key insight behind this design is that fine-tuning results in small-magnitude changes to the pre-trained model. By co-designing the serving system with the compression algorithm, DeltaZip achieves 2× to 12× improvement in throughput compared to the state-of-the-art systems.
EuroSys ’25, March 30–April 3, 2025, Rotterdam, Netherlands
</description>
<pubDate>Sun, 30 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159252</guid>
<dc:date>2025-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Open Dance Lab: Digital Platform for Examining, Experimenting, and Evolving Intangible Cultural Heritage</title>
<link>https://hdl.handle.net/1721.1/159251</link>
<description>Open Dance Lab: Digital Platform for Examining, Experimenting, and Evolving Intangible Cultural Heritage
Archiwaranguprok, Chayapatr; Pataranutaporn, Pat; Mano, Phoomparin; Bhongse-Tong, Piyaporn; Maes, Pattie; Klunchun, Pichet
This paper proposes a digital library approach to preserve traditional dance as a form of living cultural heritage. It explores using technology to capture knowledge and principles, enabling future generations to dynamically engage with and evolve these traditions. By democratizing access to cultural knowledge, digital technology challenges conservative ideologies that centralize cultural evolution. Using the Thai traditional dance principle Mae Bot Yai as a case study, we present Open Dance Lab, a web-based platform designed to preserve and innovate Thai traditional dance. The platform features a digital archive of 59 Mae Bot Yai poses as interactive 3D models with expert annotations, incorporates Pichet Klunchun's deconstruction of Thai dance into six core elements, and includes an AI-powered system for generating new dance sequences based on traditional principles. This research demonstrates how digital technologies can safeguard and transmit intangible cultural heritage while facilitating its evolution in the digital age.
JCDL ’24, December, 2024, Hong Kong, China
</description>
<pubDate>Mon, 16 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159251</guid>
<dc:date>2024-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>DarwinGame: Playing Tournaments for Tuning Applications in Noisy Cloud Environments</title>
<link>https://hdl.handle.net/1721.1/159250</link>
<description>DarwinGame: Playing Tournaments for Tuning Applications in Noisy Cloud Environments
Basu Roy, Rohan; Gadepally, Vijay; Tiwari, Devesh
This work introduces a new subarea of performance tuning -- performance tuning in a shared interference-prone computing environment. We demonstrate that existing tuners are significantly suboptimal by design because of their inability to account for interference during tuning. Our solution, DarwinGame, employs a tournament-based design to systematically compare application executions with different tunable parameter configurations, enabling it to identify the relative performance of different tunable parameter configurations in a noisy environment. Compared to existing solutions, DarwinGame achieves more than 27% reduction in execution time, with less than 0.5% performance variability. DarwinGame is the first performance tuner that will help developers tune their applications in shared, interference-prone, cloud environments.
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159250</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>RTL Verification for Secure Speculation Using Contract Shadow Logic</title>
<link>https://hdl.handle.net/1721.1/159249</link>
<description>RTL Verification for Secure Speculation Using Contract Shadow Logic
Tan, Qinhan; Yang, Yuheng; Bourgeat, Thomas; Malik, Sharad; Yan, Mengjia
Modern out-of-order processors face speculative execution attacks. Despite various proposed software and hardware mitigations to prevent such attacks, new attacks keep arising from unknown vulnerabilities. Thus, a formal and rigorous evaluation of the ability of hardware designs to deal with speculative execution attacks is urgently desired.&#13;
This paper proposes a formal verification technique called Contract Shadow Logic that can considerably improve RTL verification scalability with little manual effort while being applicable to different defense mechanisms. In this technique, we leverage computer architecture design insights to improve verification performance for checking security properties formulated as software-hardware contracts for secure speculation. Our verification scheme is accessible to computer architects and requires minimal formal-method expertise.&#13;
We evaluate our technique on multiple RTL designs, including three out-of-order processors. The experimental results demonstrate that our technique exhibits a significant advantage in finding attacks on insecure designs and deriving complete proofs on secure designs, when compared to the baseline and two state-of-the-art verification schemes, LEAVE and UPEC.
ASPLOS ’25, March 30–April 3, 2025, Rotterdam, Netherlands
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159249</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism</title>
<link>https://hdl.handle.net/1721.1/159248</link>
<description>GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Jeon, Byungsoo; Wu, Mengdi; Cao, Shiyi; Kim, Sunghyun; Park, Sunghyun; Aggarwal, Neeraj; Unger, Colin; Arfeen, Daiyaan; Liao, Peiyuan; Miao, Xupeng; Alizadeh, Mohammad; Ganger, Gregory; Chen, Tianqi; Jia, Zhihao
Deep neural networks (DNNs) continue to grow rapidly in size, making them infeasible to train on a single device (e.g. GPU). Pipeline parallelism is commonly used in existing DNN systems to support large-scale DNN training by partitioning a DNN into multiple stages, which concurrently perform DNN computation for different micro-batches of training samples in a pipeline fashion. However, existing pipeline-parallel approaches only consider sequential pipeline stages and thus ignore the topology of a DNN, resulting in missed model-parallel opportunities.&#13;
This paper presents graph pipeline parallelism (GPP), a new pipeline-parallel scheme that partitions a DNN into pipeline stages whose dependencies are identified by a directed acyclic graph. GPP generalizes existing sequential pipeline parallelism and preserves the inherent topology of a DNN to enable concurrent execution of computationally-independent operators, resulting in reduced memory requirement and improved GPU performance. In addition, we develop GraphPipe, a distributed system that exploits GPP strategies to enable performant and scalable DNN training. GraphPipe partitions a DNN into a graph of stages, optimizes micro-batch schedules for these stages, and parallelizes DNN training using the discovered GPP strategies. Evaluation on a variety of DNNs shows that GraphPipe outperforms existing pipeline-parallel systems such as PipeDream and Piper by up to 1.6×. GraphPipe also reduces the search time by 9-21× compared to PipeDream and Piper.
ASPLOS ’25, March 30–April 3, 2025, Rotterdam, Netherlands
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159248</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation</title>
<link>https://hdl.handle.net/1721.1/159247</link>
<description>Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Chan, Samantha; Loftus, Elizabeth; Maes, Pattie
This study examines the potential for malicious generative chatbots to induce false memories by injecting subtle misinformation during user interactions. An experiment involving 180 participants explored five intervention conditions following the presentation of an article: (1) no intervention, (2) reading an honest or (3) misleading article summary, (4) discussing the article with an honest or (5) misleading chatbot. Results revealed that while the misleading summary condition increased false memory occurrence, misleading chatbot interactions led to significantly higher rates of false recollection. These findings highlight the emerging risks associated with conversational AI as it becomes more prevalent. The paper concludes by discussing implications and proposing future research directions to address this concerning phenomenon.
IUI ’25, Cagliari, Italy
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159247</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Exo 2: Growing a Scheduling Language</title>
<link>https://hdl.handle.net/1721.1/159246</link>
<description>Exo 2: Growing a Scheduling Language
Ikarashi, Yuka; Qian, Kevin; Droubi, Samir; Reinking, Alex; Bernstein, Gilbert; Ragan-Kelley, Jonathan
User-schedulable languages (USLs) help programmers productively optimize programs by providing safe means of transforming them. Current USLs are designed to give programmers exactly the control they want, while automating all other concerns. However, there is no universal answer for what performance-conscious programmers want to control, how they want to control it, and what they want to automate, even in relatively narrow domains. We claim that USLs should, instead, be designed to grow. We present Exo 2, a scheduling language that enables users to define new scheduling operations externally to the compiler. By composing a set of trusted, fine-grained primitives, users can safely write their own scheduling library to build up desired automation. We identify actions (ways of modifying code), inspection (ways of interrogating code), and references (ways of pointing to code) as essential for any user-extensible USL. We fuse these ideas into a new mechanism called Cursors that enables the creation of scheduling libraries in user code. We demonstrate libraries that amortize scheduling effort across more than 80 high-performance kernels, reducing total scheduling code by an order of magnitude and delivering performance competitive with state-of-the-art implementations on three different platforms.
ASPLOS ’25, March 30–April 3, 2025, Rotterdam, Netherlands
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159246</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting Wireless Cyberattacks on Vehicles</title>
<link>https://hdl.handle.net/1721.1/159245</link>
<description>Revisiting Wireless Cyberattacks on Vehicles
Gesteira-Miñarro, Roberto; López, Gregorio; Palacios, Rafael
The automotive industry has been a prime target for cybercriminals for decades, with attacks becoming more sophisticated as vehicles integrate advanced digital technologies. In response, new standards and regulations have been introduced, requiring manufacturers to implement robust cybersecurity measures to obtain necessary certifications. Modern vehicles have an extensive attack surface due to the increasing number of interconnected electronic components and wireless communication features. While new technologies improve connectivity, automation, and comfort, they also introduce new vulnerabilities that can be exploited by attackers. This paper presents a comprehensive analysis of the attack surface of modern vehicles, focusing on the security risks associated with wireless communication technologies. Each technology is examined in detail, highlighting existing research, known vulnerabilities, and potential countermeasures. Furthermore, this study identifies key research gaps in the field, providing insights into critical areas that require further investigation. This work aims to guide future research efforts in order to enhance vehicle cybersecurity in the evolving landscape of smart, autonomous, and connected vehicles.
</description>
<pubDate>Sun, 20 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159245</guid>
<dc:date>2025-04-20T00:00:00Z</dc:date>
</item>
<item>
<title>Challenges and Opportunities for Post-COVID Pulmonary Disease: A Focused Review of Immunomodulation</title>
<link>https://hdl.handle.net/1721.1/159244</link>
<description>Challenges and Opportunities for Post-COVID Pulmonary Disease: A Focused Review of Immunomodulation
Verbeeck Mendez, Steffi; Do Orozco, Isabella L.; Gavilanez-Chavez, Guadalupe E.; Nava-Zavala, Arnulfo Hernán; Zavala-Cerna, Maria G.
The resolution of the recent COVID-19 pandemic still requires attention, since the consequences of having suffered the infection, even in mild cases, are associated with several acute and chronic pathological conditions referred to as post-COVID syndrome (PCS). PCS often manifests with pulmonary disease and, in up to 9% of cases, a more serious complication known as post-COVID-19 pulmonary fibrosis (PC19-PF), which has a clinical course similar to that of idiopathic pulmonary fibrosis (IPF). Generating robust evidence about the clinical benefits of different therapeutic strategies to treat the pulmonary effects of PCS can expand the therapeutic options for these patients. We present evidence found after a scoping review, following extended PRISMA guidelines, for the use of immunomodulators in pulmonary PCS. We start with a brief description of the immunomodulatory properties of the relevant drugs, their clinically proven efficacy for viral infections and chronic inflammatory conditions, and their use during the COVID-19 pandemic. We emphasize the need for well-designed clinical trials to improve our understanding of the physiopathology of pulmonary PCS and PC19-PF and also to determine the efficacy and safety of candidate treatments.
</description>
<pubDate>Fri, 18 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159244</guid>
<dc:date>2025-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning in Mode Choice Prediction as Part of MPOs&amp;rsquo; Regional Travel Demand Models: Is It Time for Change?</title>
<link>https://hdl.handle.net/1721.1/159243</link>
<description>Machine Learning in Mode Choice Prediction as Part of MPOs&amp;rsquo; Regional Travel Demand Models: Is It Time for Change?
Kalantari, Hannaneh Abdollahzadeh; Sabouri, Sadegh; Brewer, Simon; Ewing, Reid; Tian, Guang
This study aims to improve the predictive accuracy of metropolitan planning organizations&amp;rsquo; (MPOs&amp;rsquo;) travel demand models (TDM) by unraveling the factors influencing transportation mode choices. By exploring the interplay between trip characteristics, socioeconomics, built environment features, and regional conditions, we aim to address existing gaps in MPOs&amp;rsquo; TDMs which revolve around the need to also integrate non-motorized modes and a more comprehensive array of features. Additionally, our objective is to develop a more robust predictive model compared to the current nested logit (NL) and multinomial logit (MNL) models commonly employed by MPOs. We apply a one-vs-rest random forest (RF) model to predict mode choices (Home-based-Work, Home-Based-Other, and non-home-based) for over 800,000 trips by 80,000 households across 29 US regions. Validation results demonstrate the RF model&amp;rsquo;s superior performance compared to conventional NL/MNL models. Key findings highlight that increased travel time and distance are associated with more auto trips, while household vehicle ownership significantly affects car and transit choices. Built environment features, such as activity density, transit density, and intersection density, also play crucial roles in mode preferences. This study offers a more robust predictive framework that can be directly applied in MPO TDMs, contributing to more accurate and inclusive transportation planning.
</description>
<pubDate>Wed, 16 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159243</guid>
<dc:date>2025-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Bloom: Invasive Seaweed Sargassum spp. as a Catalyst for Sustainable Agriculture and Blue Economy—A Multifaceted Approach to Biodegradable Films, Biostimulants, and Carbon Mitigation</title>
<link>https://hdl.handle.net/1721.1/159242</link>
<description>Beyond the Bloom: Invasive Seaweed Sargassum spp. as a Catalyst for Sustainable Agriculture and Blue Economy—A Multifaceted Approach to Biodegradable Films, Biostimulants, and Carbon Mitigation
Martínez-Martínez, Elena; Slocum, Alexander H.; Ceballos, María Laura; Aponte, Paula; Bisonó-León, Andrés Guillermo
The Anthropocene has ushered in unprecedented environmental challenges, with invasive seaweed blooms emerging as a critical yet understudied facet of climate change. These blooms, driven by nutrient runoff and oceanic alterations, disrupt ecosystems, threaten biodiversity, and impose economic and public health burdens on coastal communities. However, invasive seaweeds also present an opportunity as a sustainable resource. This study explores the valorization of Sargassum spp. for agricultural applications, focusing on the development of biodegradable bioplastics and biostimulants. Field trials demonstrated the effectiveness of Marine Symbiotic® Sargassum-derived biostimulant in distinct agricultural contexts. In the Dominican Republic, trials on pepper crops showed significant improvements, including a 33.26% increase in fruit weight, a 21.94% rise in fruit set percentage, a 45% higher yield under high-stress conditions, and a 48.42% reduction in fruit rejection compared to control. In Colombia, trials across four leafy green varieties revealed biomass increases of up to 360%, a 50% reduction in synthetic input dependency, and enhanced crop coloration, improving marketability. Additionally, Sargassum-based biofilms exhibited favorable mechanical properties and biodegradability, offering a sustainable alternative to conventional agricultural plastics. Carbon credit quantification revealed that valorizing Sargassum could prevent up to 89,670 tons of CO2-equivalent emissions annually using just one Littoral Collection Module® harvesting system, while biostimulant application enhanced carbon sequestration in crops. These findings underscore the potential of invasive seaweed valorization to address multiple climate challenges, from reducing plastic pollution and GHG emissions to enhancing agricultural resilience, thereby contributing to a sustainable Blue Economy and aligning with global sustainability goals.
</description>
<pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159242</guid>
<dc:date>2025-04-14T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Solar PV Deployment in Manufacturing: A Morphological Matrix and Fuzzy TOPSIS Approach</title>
<link>https://hdl.handle.net/1721.1/159241</link>
<description>Optimizing Solar PV Deployment in Manufacturing: A Morphological Matrix and Fuzzy TOPSIS Approach
Briceño, Citlaly Pérez; Ponce, Pedro; Fayek, Aminah Robinson; Anthony, Brian; Bradley, Russel; Peffer, Therese; Meier, Alan; Mei, Qipei
The growing energy demand of the industrial sector and the need for sustainable solutions highlight the importance of efficient decision making in solar photovoltaic (PV) implementation. Selecting optimal PV configuration is complex due to the interdependent technical, economic, environmental, and social factors involved. This study introduces an integrated decision-making method combining a morphological matrix and fuzzy TOPSIS to systematically select and rank optimal PV system configurations for manufacturing firms. While the morphological matrix exhaustively examines possible design solutions based on sensing, smart, sustainable, and social (S4) attributes, the fuzzy TOPSIS method ranks the alternatives by handling uncertainty in decision making. A case study conducted in a Mexican manufacturing company validates the methodology&amp;rsquo;s effectiveness. The optimal PV configuration identified comprehensively addresses operational and sustainability criteria, covering all lifecycle stages. This approach demonstrates quantitative superiority and greater robustness compared to existing fuzzy TOPSIS-based methods for solar PV applications. The findings highlight the practical value of data-driven, multi-criteria decision making for industrial solar energy adoption, enhancing project feasibility, cost efficiency, and environmental compliance. Future research will incorporate discrete event simulation (DES) to further refine energy consumption strategies in manufacturing.
</description>
<pubDate>Tue, 08 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159241</guid>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Pathomechanics of Early-Stage Lumbar Intervertebral Disc Degradation Leading to Discogenic Pain&amp;mdash;A Narrative Review</title>
<link>https://hdl.handle.net/1721.1/159240</link>
<description>Pathomechanics of Early-Stage Lumbar Intervertebral Disc Degradation Leading to Discogenic Pain&amp;mdash;A Narrative Review
Hedman, Thomas; Rogers, Adam
Although the highly prevalent pain, disability, and lost work time associated with discogenic low back pain are well known, the culpability of universally present disc degradation and mechanical insufficiency in the first three decades of life is often overlooked. There is a corresponding &amp;ldquo;treatment gap&amp;rdquo;: no current intervention has demonstrated the capability to address the pain and resist the usual progression of increasing structural failure of spinal tissues with increasing levels of pain and disability. This narrative review summarizes more than forty years of the literature describing the pathomechanics of progressive degradation of lumbar discs, with a focus on studies that implicate an increasing mechanical insufficiency in the etiology of early-stage chronic and recurrent discogenic low back pain. Topics highlighted in this review include the deleterious biological changes that begin soon after birth, stress intensification due to the loss of fluid phase load support, fatigue weakening and damage accumulation in non-regenerative tissue, disc tears, segmental instability, and the timeline for first incidence of chronic low back pain. The review concludes with preferred treatment characteristics and a brief summary of emerging treatment approaches.
</description>
<pubDate>Sat, 05 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159240</guid>
<dc:date>2025-04-05T00:00:00Z</dc:date>
</item>
<item>
<title>The Geometry of Concepts: Sparse Autoencoder Feature Structure</title>
<link>https://hdl.handle.net/1721.1/159239</link>
<description>The Geometry of Concepts: Sparse Autoencoder Feature Structure
Li, Yuxiao; Michaud, Eric J.; Baek, David D.; Engels, Joshua; Sun, Xiaoqing; Tegmark, Max
Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: (1) The “atomic” small-scale structure contains “crystals” whose faces are parallelograms or trapezoids, generalizing well-known examples such as (man:woman::king:queen). We find that the quality of such parallelograms and associated function vectors improves greatly when projecting out global distractor directions such as word length, which is efficiently performed with linear discriminant analysis. (2) The “brain” intermediate-scale structure has significant spatial modularity; for example, math and code features form a “lobe” akin to functional lobes seen in neural fMRI images. We quantify the spatial locality of these lobes with multiple metrics and find that clusters of co-occurring features, at coarse enough scale, also cluster together spatially far more than one would expect if feature geometry were random. (3) The “galaxy”-scale large-scale structure of the feature point cloud is not isotropic, but instead has a power law of eigenvalues with steepest slope in middle layers. We also quantify how the clustering entropy depends on the layer.
</description>
<pubDate>Thu, 27 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159239</guid>
<dc:date>2025-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Using Interpretable Artificial Intelligence Algorithms in the Management of Blunt Splenic Trauma: Applications of Optimal Policy Trees as a Treatment Prescription Aid to Improve Patient Mortality</title>
<link>https://hdl.handle.net/1721.1/159238</link>
<description>Using Interpretable Artificial Intelligence Algorithms in the Management of Blunt Splenic Trauma: Applications of Optimal Policy Trees as a Treatment Prescription Aid to Improve Patient Mortality
Panossian, Vahe S.; Ma, Yu; Song, Bolin; Proaño-Zamudio, Jefferson A.; van Zon, Veerle P. C.; Nzenwa, Ikemsinachi C.; Tabari, Azadeh; Velmahos, George C.; Kaafarani, Haytham M. A.; Bertsimas, Dimitris; Daye, Dania
Background: The identification of the optimal management for blunt splenic trauma—angioembolization (AE), splenectomy, or observation—remains a challenge. This study applies Optimal Policy Trees (OPT), an artificial intelligence (AI) model, to prescribe appropriate management and improve in-hospital mortality. Methods: OPTs were trained on patients with blunt splenic injuries in the ACS-TQIP 2013–2019 to prescribe one of the three interventions: splenectomy, angioembolization (AE), or observation. Prescriptive trees were derived in two separate patient cohorts: those who presented with a systolic blood pressure (SBP) &lt; 70 mmHg and those with an SBP ≥ 70 mmHg. Splenic injury severity was graded using the American Association of Surgical Trauma (AAST) grading scale. Counterfactual estimation was used to predict the effects of interventions on overall in-hospital mortality. Results: Among 54,345 patients, 3.1% underwent splenic AE, 13.1% splenectomy, and 83.8% were managed with observation. In patients with SBP &lt; 70 mmHg, AE was recommended for shock index (SI) &lt; 1.5 or without transfusion, while splenectomy was indicated for SI ≥ 1.5 with transfusion. For patients with SBP ≥ 70 mmHg, AE was recommended for AAST grades 4–5, or grades 1–3 with SI ≥ 1.2; observation was recommended for grades 1–3 with SI &lt; 1.2. Predicted mortality using OPT-prescribed treatments was 18.4% for SBP &lt; 70 mmHg and 4.97% for SBP ≥ 70 mmHg, compared to observed rates of 36.46% and 7.60%, respectively. Conclusions: Interpretable AI models may serve as a decision aid to improve mortality in patients presenting with a blunt splenic injury. Our data-driven prescriptive OPT models may aid in prescribing the appropriate management in this patient cohort based on their characteristics.
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159238</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>On Dynamical Measures of Quantum Information</title>
<link>https://hdl.handle.net/1721.1/159237</link>
<description>On Dynamical Measures of Quantum Information
Fullwood, James; Parzygnat, Arthur J.
In this work, we use the theory of quantum states over time to define joint entropy for timelike-separated quantum systems. For timelike-separated systems that admit a dual description as being spacelike-separated, our notion of entropy recovers the usual von Neumann entropy for bipartite quantum states and thus may be viewed as a spacetime generalization of von Neumann entropy. Such an entropy is then used to define dynamical extensions of quantum joint entropy, quantum conditional entropy, and quantum mutual information for systems separated by the action of a quantum channel. We provide an in-depth mathematical analysis of such information measures and the properties they satisfy. We also use such a dynamical formulation of entropy to quantify the information loss/gain associated with the dynamical evolution of quantum systems, which enables us to formulate a precise notion of information conservation for quantum processes. Finally, we show how our dynamical entropy admits an operational interpretation in terms of quantifying the amount of state disturbance associated with a positive operator-valued measurement.
</description>
<pubDate>Fri, 21 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159237</guid>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</item>
<item>
<title>Operationalizing Artificial Intelligence: Lessons Learned at Dana-Farber Cancer Institute</title>
<link>https://hdl.handle.net/1721.1/159236</link>
<description>Operationalizing Artificial Intelligence: Lessons Learned at Dana-Farber Cancer Institute
Methot, John; Antell, Gregory; Umeton, Renato
Few AI applications in oncology have progressed to production or clinical use. This translational hurdle has two main components: static or limited training data; and the absence of a production environment into which models may be deployed. Dana-Farber's Platform for Operationalized Data Science aims to remove these impediments to enable the development and deployment of AI in a healthcare setting at scale.
Proceedings of AMIA 2020, American Medical Informatics Association Annual Symposium, USA, November 14-18, 2020
</description>
<pubDate>Sat, 14 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159236</guid>
<dc:date>2020-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Spectrum: High-Bandwidth Anonymous Broadcast with Malicious Security</title>
<link>https://hdl.handle.net/1721.1/159235</link>
<description>Spectrum: High-Bandwidth Anonymous Broadcast with Malicious Security
Newman, Zachary; Servan-Schreiber, Sacha; Devadas, Srinivas
Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation. April 4–6, 2022 Renton, WA, USA
</description>
<pubDate>Mon, 04 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159235</guid>
<dc:date>2022-04-04T00:00:00Z</dc:date>
</item>
<item>
<title>Evolving Properties of Biological Materials Captured via Needle-Based Cavity Expansion Method</title>
<link>https://hdl.handle.net/1721.1/159222</link>
<description>Evolving Properties of Biological Materials Captured via Needle-Based Cavity Expansion Method
Varner, H. M.; Naghibzadeh, S. K.; Spaeth, K. C.; Klein, A.; Cohen, T.
Background: The mechanical properties of biological tissues change over time and with disease progression. Quantifying these mechanical properties can thus be instrumental for medical diagnosis and for evaluation of tissue viability for transplant. However, soft and biological materials are exceptionally challenging to mechanically characterize using conventional testing methods, which are hindered by limitations of sample size, fixturing capabilities, and sample preparation. Objective: We hypothesize that Volume Controlled Cavity Expansion (VCCE) is well-suited to capture subtle mechanical differences in biological tissue. The objective of this work is therefore twofold: first, we seek to quantify how the stiffness of liver and gelatin evolves with age. In achieving this understanding, we aim to demonstrate the precision of VCCE in measuring subtle changes in the mechanical properties of biological tissues. Methods: Performing VCCE tests over 15 days in samples of gelatin and liver (porcine and bovine), we track the evolving pressure-volume response and deformation limits of the materials. Results: In both materials, we observed time-dependent variation of the stiffness and fracture thresholds. In gelatin, VCCE repeatably captured stiffening over time, which was correlated with a higher fracture stress. This was in contrast to observations in bovine liver, where stiffening corresponded to a lower fracture stress. Porcine liver initially stiffened, then reversed this trend and relaxed. Conclusion: Through this work we show that liver and gelatin stiffen with age, and that this trend is measurable via VCCE. These results highlight the utility of VCCE and call attention to the need for a new class of mechanism-based constitutive models capable of capturing variations in material properties over time with a minimal number of parameters.
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159222</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>SleepBoost: a multi-level tree-based ensemble model for automatic sleep stage classification</title>
<link>https://hdl.handle.net/1721.1/159221</link>
<description>SleepBoost: a multi-level tree-based ensemble model for automatic sleep stage classification
Zaman, Akib; Kumar, Shiu; Shatabda, Swakkhar; Dehzangi, Iman; Sharma, Alok
Neurodegenerative diseases often exhibit a strong link with sleep disruption, highlighting the importance of effective sleep stage monitoring. In this light, automatic sleep stage classification (ASSC) plays a pivotal role, now more streamlined than ever due to the advancements in deep learning (DL). However, the opaque nature of DL models can be a barrier in their clinical adoption, due to trust concerns among medical practitioners. To bridge this gap, we introduce SleepBoost, a transparent multi-level tree-based ensemble model specifically designed for ASSC. Our approach includes a crafted feature engineering block (FEB) that extracts 41 time and frequency domain features, out of which 23 are selected based on their high mutual information score (&gt; 0.23). Uniquely, SleepBoost integrates three fundamental linear models into a cohesive multi-level tree structure, further enhanced by a novel reward-based adaptive weight allocation mechanism. Tested on the Sleep-EDF-20 dataset, SleepBoost demonstrates superior performance with an accuracy of 86.3%, F1-score of 80.9%, and Cohen kappa score of 0.807, outperforming leading DL models in ASSC. An ablation study underscores the critical role of our selective feature extraction in enhancing model accuracy and interpretability, crucial for clinical settings. This innovative approach not only offers a more transparent alternative to traditional DL models but also extends potential implications for monitoring and understanding sleep patterns in the context of neurodegenerative disorders. The open-source availability of SleepBoost’s implementation at https://github.com/akibzaman/SleepBoost can further facilitate its accessibility and potential for widespread clinical adoption.
</description>
<pubDate>Fri, 03 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159221</guid>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating Combinations of Biological and Clinicopathologic Factors Linked to Poor Outcomes in Resected Colorectal Liver Metastasis: An External Validation Study</title>
<link>https://hdl.handle.net/1721.1/159220</link>
<description>Evaluating Combinations of Biological and Clinicopathologic Factors Linked to Poor Outcomes in Resected Colorectal Liver Metastasis: An External Validation Study
Sasaki, Kazunari; Wang, Jane; Kamphues, Carsten; Buettner, Stefan; Gagniere, Johan; Ardilles, Victoria; Imai, Katsunori; Wagner, Doris; Pozios, Ioannis; Papakonstantinou, Dimitris; Pikoulis, Emmanouil; Antoniou, Efstathios; Morioka, Daisuke; Løes, Inger M.; Lønning, Per E.; Kornprat, Peter
Background: Recent studies have suggested that certain combinations of KRAS or BRAF biomarkers with clinical factors are associated with poor outcomes and may indicate that surgery could be “biologically” futile in otherwise technically resectable colorectal liver metastasis (CRLM). However, these combinations have yet to be validated through external studies. Patients and Methods: We conducted a systematic search to identify these studies. The overall survival (OS) of patients with these combinations was evaluated in a cohort of patients treated at 11 tertiary centers. Additionally, the study investigated whether using high-risk KRAS point mutations in these combinations could be associated with particularly poor outcomes. Results: The recommendations of four studies were validated in 1661 patients. The first three studies utilized KRAS, and their validation showed the following median and 5-year OS: (1) 30 months and 16.9%, (2) 24.3 months and 21.6%, and (3) 46.8 months and 44.4%, respectively. When analyzing only patients with high-risk KRAS mutations, median and 5-year OS decreased to: (1) 26.2 months and 0%, (2) 22.3 months and 15.1%, and (3) not reached and 44.9%, respectively. The fourth study utilized BRAF, and its validation showed a median OS of 10.4 months, with no survivors beyond 21 months. Conclusion: The combinations of biomarkers and clinical factors proposed to render surgery for CRLM futile, as presented in studies 1 (KRAS high-risk mutations) and 4, appear justified. In these studies, there were no long-term survivors, and survival was similar to that of historic cohorts with similar mutational profiles that received systemic therapies alone for unresectable disease.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159220</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying the progress of artificial intelligence subdomains using the patent citation network</title>
<link>https://hdl.handle.net/1721.1/159219</link>
<description>Quantifying the progress of artificial intelligence subdomains using the patent citation network
Rezazadegan, Reza; Sharifzadeh, Mahdi; Magee, Christopher L.
Even though Artificial Intelligence (AI) has had a transformative effect on human life, there is currently no precise quantitative method for measuring and comparing the performance of different AI methods. Technology Improvement Rate (TIR) is a measure that describes a technology’s rate of performance improvement, and is represented in a generalization of Moore’s Law. Estimating TIR is important for R&amp;D purposes to forecast which competing technologies have a higher chance of success in the future. The present contribution estimates the TIR for different subdomains of applied and industrial AI by quantifying each subdomain’s centrality in the global flow of technology, as modeled by the Patent Citation Network and shown in previous work. The estimated TIR enables us to quantify and compare the performance improvement of different AI methods. We also discuss the influencing factors behind slower or faster improvement rates. Our results highlight the importance of Rule-based Machine Learning (not to be confused with Rule-based Systems), Multi-task Learning, Meta-Learning, and Knowledge Representation in the future advancement of AI and particularly in Deep Learning.
</description>
<pubDate>Wed, 17 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159219</guid>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</item>
<item>
<title>Intra-individual consistency of vestibular perceptual thresholds</title>
<link>https://hdl.handle.net/1721.1/159218</link>
<description>Intra-individual consistency of vestibular perceptual thresholds
Clark, Torin K.; Galvan-Garza, Raquel C.; Merfeld, Daniel M.
Vestibular perceptual thresholds quantify sensory noise associated with reliable perception of small self-motions. Previous studies have identified substantial variation between even healthy individuals’ thresholds. However, it remains unclear if or how an individual’s vestibular threshold varies over repeated measures across various time scales (repeated measurements on the same day, across days, weeks, or months). Here, we assessed yaw rotation and roll tilt thresholds in four individuals and compared this intra-individual variability to the inter-individual variability of thresholds measured across a large age-matched cohort in which each individual was measured only once. For analysis, we performed simulations of threshold measurements where there was no underlying variability (or it was manipulated) to compare to that observed empirically. We found remarkable consistency in vestibular thresholds within individuals, for both yaw rotation and roll tilt; this contrasts with substantial inter-individual differences. Thus, we conclude that vestibular perceptual thresholds are an innate characteristic, which validates pooling measures across sessions and potentially serves as a stable clinical diagnostic and/or biomarker.
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159218</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>The algorithmic phase transition of random graph alignment problem</title>
<link>https://hdl.handle.net/1721.1/159217</link>
<description>The algorithmic phase transition of random graph alignment problem
Du, Hang; Gong, Shuyang; Huang, Rundong
We study the graph alignment problem over two independent Erdős–Rényi random graphs on n vertices, with edge density p falling into two regimes separated by the critical window around p_c := log n / n. Our result reveals an algorithmic phase transition for this random optimization problem: polynomial-time approximation schemes exist in the sparse regime, while a statistical-computational gap emerges in the dense regime. Additionally, we establish a sharp transition in the performance of online algorithms for this problem when p is in the dense regime, resulting in an 8/9 multiplicative constant factor gap between achievable solutions and optimal solutions.
</description>
<pubDate>Wed, 26 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159217</guid>
<dc:date>2025-03-26T00:00:00Z</dc:date>
</item>
<item>
<title>DBOS: three years later</title>
<link>https://hdl.handle.net/1721.1/159216</link>
<description>DBOS: three years later
Li, Qian; Kraft, Peter; Kozyrakis, Christos; Zaharia, Matei A; Stonebraker, Michael
In our VLDB 2022 publication (Skiadopoulos, 2022), we presented the rationale for a DBMS-oriented operating system and reported a series of experiments supporting this approach. This paper provides a comprehensive update on the project, which evolved as a research initiative for two additional years before transitioning into a venture-capital-backed startup over the past 18 months. During this period, we developed a provenance system and a programming environment integrating Python and TypeScript, accompanied by detailed performance evaluations. Furthermore, we outline the modifications made to the research prototype to support the demands of commercialization.
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159216</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Air quality co-benefits of subnational carbon policies</title>
<link>https://hdl.handle.net/1721.1/159215</link>
<description>Air quality co-benefits of subnational carbon policies
Thompson, Tammy M; Rausch, Sebastian; Saari, Rebecca K; Selin, Noelle E
To mitigate climate change, governments at levels ranging from city to multinational have adopted greenhouse gas (GHG) emissions reduction targets. While the location of GHG reductions does not affect their climate benefits, it can impact human health benefits associated with co-emitted pollutants. Here, an advanced modeling framework is used to explore how subnational-level GHG targets influence air pollutant co-benefits from ground level ozone and fine particulate matter. Two carbon policy scenarios are analyzed, each reducing the same total amount of GHG emissions in the Northeast US: an economy-wide Cap and Trade (CAT) program reducing emissions from all sectors of the economy, and a Clean Energy Standard (CES) reducing emissions from the electricity sector only. Results suggest that a regional CES policy will cost about 10 times more than a CAT policy. Despite having the same regional targets in the Northeast, carbon leakage to non-capped regions varies between policies. Consequently, a regional CAT policy will result in national carbon reductions that are over six times greater than the carbon reduced by the CES in 2030. Monetized regional human health benefits of the CAT and CES policies are 844% and 185% of the costs of each policy, respectively. Benefits for both policies are thus estimated to exceed their costs in the Northeast US. The estimated value of human health co-benefits associated with air pollution reductions for the CES scenario is two times that of the CAT scenario. Implications: In this research, an advanced modeling framework is used to determine the potential impacts of regional carbon policies on air pollution co-benefits associated with ground level ozone and fine particulate matter. Study results show that spatially heterogeneous GHG policies have the potential to create areas of air pollution dis-benefit.
It is also shown that monetized human health benefits within the area covered by policy may be larger than the model estimated cost of the policy. These findings are of particular interest both as U.S. states work to develop plans to meet state-level carbon emissions reduction targets set by the EPA through the Clean Power Plan, and in the absence of comprehensive national carbon policy.
</description>
<pubDate>Tue, 24 May 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159215</guid>
<dc:date>2016-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Climatic Stress, Internal Migration, and Syrian Civil War Onset</title>
<link>https://hdl.handle.net/1721.1/159214</link>
<description>Climatic Stress, Internal Migration, and Syrian Civil War Onset
Ash, Konstantin; Obradovich, Nick
Syria recently suffered a once-in-500-year meteorological drought followed by one of the worst conflicts of the twenty-first century. We exploit subnational variation in drought impact to examine associations between climatic stress and Syria’s political unrest. Climatic stress may produce instability through both immediate hardship and, indirectly, internal migration. Consistent with the internal migration hypothesis, we find less severely drought-stricken Syrian regions more likely to experience protest. We employ nighttime lights as a proxy for population density to examine the association between climatic stress and internal displacement. We find climatic stress decreased nighttime light intensity during the drought period. Increases in nighttime lights from 2005 to 2010 are associated with added risk of protest in Sunni Arab areas, suggesting an influx of migrants bolstered local grievances. Our findings support the internal migration hypothesis and suggest extreme climate events may impact civil unrest via geographically and temporally indirect paths.
</description>
<pubDate>Thu, 25 Jul 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159214</guid>
<dc:date>2019-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Nanofabrication of silk microneedles for high-throughput micronutrient delivery and continuous sap monitoring in plants</title>
<link>https://hdl.handle.net/1721.1/159213</link>
<description>Nanofabrication of silk microneedles for high-throughput micronutrient delivery and continuous sap monitoring in plants
Cao, Yunteng; Kim, Doyoon; Koh, Sally Shuxian; Li, Zheng; Rigoldi, Federica; Fortmueller, Julia Eva; Goh, Kasey; Zhang, Yilin; Lim, Eugene J.; Sun, Hui; Uyehara, Elise; Cheerlavancha, Raju; Han, Yangyang; Ram, Rajeev J.; Urano, Daisuke; Marelli, Benedetto
Biomaterials bridging the biotic–abiotic interface in plants offer the opportunity to precisely deliver agrochemicals and continuously monitor plant health, with the goals of increasing resilience to climate change, enhancing crop production and mitigating environmental impact. In this study we report the manipulation of silk fibroin assembly, with nucleation of inorganics at the phase front, to nanomanufacture porous and hollow microneedles that can be interfaced with plants. Plant growth analysis and quantification of wounding gene expression show a non-significant systemic wounding response to the injection of silk microneedles in tomato plants. Microneedles with a hollow structure enable the systemic delivery of plant micronutrients to treat chlorosis in tomato plants and crop biofortification through transport of human micronutrients injected in the petiole and loaded into tomato fruits. Hollow microneedles also provide access to plant vasculature for sap sampling, enabling continuous monitoring and early detection of phytoaccumulation of environmental contaminants such as cadmium.
</description>
<pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159213</guid>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</item>
<item>
<title>Designing Border Carbon Adjustments for Enhanced Climate Action</title>
<link>https://hdl.handle.net/1721.1/159212</link>
<description>Designing Border Carbon Adjustments for Enhanced Climate Action
Mehling, Michael A.; van Asselt, Harro; Das, Kasturi; Droege, Susanne; Verkuijl, Cleo
The Paris Agreement advances a heterogeneous approach to international climate cooperation. Such an approach may be undermined by carbon leakage—the displacement of emissions from states with more to less stringent climate policy constraints. Border carbon adjustments offer a promising response to leakage, but they also raise concerns about their compatibility with international trade law. This Article provides a comprehensive analysis of border carbon adjustments and proposes a way to design them that balances legal, administrative, and environmental considerations.
</description>
<pubDate>Thu, 11 Jul 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159212</guid>
<dc:date>2019-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>What Can the Internal Variability of CMIP5 Models Tell Us about Their Climate Sensitivity?</title>
<link>https://hdl.handle.net/1721.1/159211</link>
<description>What Can the Internal Variability of CMIP5 Models Tell Us about Their Climate Sensitivity?
Lutsko, Nicholas J; Takahashi, Ken
The relationship between climate models’ internal variability and their response to external forcings is investigated. Frequency-dependent regressions are performed between the outgoing top-of-atmosphere (TOA) energy fluxes and the global-mean surface temperature in the preindustrial control simulations of the CMIP5 archive. Two distinct regimes are found. At subdecadal frequencies the surface temperature and the outgoing shortwave flux are in quadrature, while the outgoing longwave flux is linearly related to temperature and acts as a negative feedback on temperature perturbations. On longer time scales the outgoing shortwave and longwave fluxes are both linearly related to temperature, with the longwave continuing to act as a negative feedback and the shortwave acting as a positive feedback on temperature variability. In addition to the different phase relationships, the two regimes can also be seen in estimates of the coherence and of the frequency-dependent regression coefficients. The frequency-dependent regression coefficients for the total cloudy-sky flux on time scales of 2.5 to 3 years are found to be strongly (r² &gt; 0.6) related to the models’ equilibrium climate sensitivities (ECSs), suggesting a potential “emergent constraint” for Earth’s ECS. However, O(100) years of data are required for this relationship to become robust. A simple model for Earth’s surface temperature variability and its relationship to the TOA fluxes is used to provide a physical interpretation of these results.
</description>
<pubDate>Sun, 01 Jul 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159211</guid>
<dc:date>2018-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Carbon pricing and deep decarbonisation</title>
<link>https://hdl.handle.net/1721.1/159210</link>
<description>Carbon pricing and deep decarbonisation
Tvinnereim, Endre; Mehling, Michael
Experts frequently point to carbon pricing as the most cost-effective tool for reducing greenhouse gas emissions. Empirical studies show that carbon pricing can successfully incentivise incremental emissions reductions. But meeting temperature targets within defined timelines as agreed under the Paris Agreement requires more than incremental improvements: it requires achieving net zero emissions within a few decades. To date, there is little evidence that carbon pricing has produced deep emission reductions, even at high prices. While much steeper carbon prices may deliver greater abatement, political economy constraints render their feasibility doubtful. An approach with multiple instruments, including technology mandates and targeted support for innovation, is indispensable to avoid path dependencies and lock-in of long-lived, high-carbon assets. We argue that carbon pricing serves several important purposes in such an instrument mix, but also that the global commitment to deep decarbonisation requires acknowledging the vital role of instruments other than carbon pricing.
</description>
<pubDate>Wed, 10 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159210</guid>
<dc:date>2018-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Know when to fold ‘em: An empirical description of risk management in public research funding</title>
<link>https://hdl.handle.net/1721.1/159209</link>
<description>Know when to fold ‘em: An empirical description of risk management in public research funding
Goldstein, Anna P.; Kearney, Michael
Public research funding programs typically make grants with minimal intervention by program staff, rather than using a hands-on approach to project management, which is more common in the private sector. In contrast, program staff at the US Department of Energy's Advanced Research Projects Agency – Energy (ARPA-E) are given a set of real options with which to manage funded projects: abandon, contract or expand project budgets or timelines. Using internal data from ARPA-E, we show that active project management enables risk mitigation across a portfolio of research projects. We find that program staff modify projects frequently, especially project timelines, and these changes are more sensitive to poor performance than to strong performance. We also find that projects with a shortened timeline or reduced budget are less likely to generate short-term research outputs, compared to those of ultimately similar size. This evidence suggests that the practice of active project management, when combined with high upfront risk tolerance, can be used to enhance the productivity of mission-oriented public research funding.
</description>
<pubDate>Thu, 02 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159209</guid>
<dc:date>2020-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>A new family of high-current cyclotrons for isotope production</title>
<link>https://hdl.handle.net/1721.1/159184</link>
<description>A new family of high-current cyclotrons for isotope production
Winklehner, Daniel; Alonso, Jose R.; Conrad, Janet
We are developing a high-current cyclotron as a driver for the IsoDAR neutrino experiment. It accelerates 5 mA of H2+ to 60 MeV/amu, after which the electron is removed to produce a 10 mA, 60 MeV proton beam. The enabling innovations that offset space-charge effects occur at injection and in the first few turns, allowing one to construct cyclotrons with energies ranging from below 5 MeV up to 60 MeV/amu, or possibly higher, with the same performance for accelerated ions with Q/A = 0.5 (H2+, D+, He++, …). In this paper, we discuss the possible uses of such cyclotrons for isotope production, including production of long-lived generator parents (68Ga, 44Ti, 82Sr, …), as well as intense fast neutron beams from deuteron breakup for (n,2n) production of isotopes like 225Ac.
</description>
<pubDate>Thu, 27 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159184</guid>
<dc:date>2024-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>A dynamic model of investment in research and teaching facilities in academic institutions</title>
<link>https://hdl.handle.net/1721.1/159183</link>
<description>A dynamic model of investment in research and teaching facilities in academic institutions
Brudner, Amir; Gavious, Arieh
Academic institutions seek to enhance their reputation, which is one of their primary assets. Doing so requires a massive investment of resources in research, recruiting a high-quality academic staff, and building campuses and state-of-the-art laboratories. To obtain the necessary financial resources, institutions must attract students, donors, and government budgets and grants. This paper introduces a stylized dynamic model demonstrating how an institution can best allocate its resources between teaching and research. We create a simulated competition that resembles the real situation where the enhancement of the institution’s reputation depends not only on its resource allocation but also on its competitors’ actions and reputation. We consider a two-institution contest over time using a differential game solution with open-loop strategies. In this case, the steady-state investment in research increases and the level of teaching decreases.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159183</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Recent Advances in Fluorescence-Based Colored Passive Daytime Radiative Cooling for Heat Mitigation</title>
<link>https://hdl.handle.net/1721.1/159182</link>
<description>Recent Advances in Fluorescence-Based Colored Passive Daytime Radiative Cooling for Heat Mitigation
Santamouris, Mat; Khan, Hassan S.; Paolini, Riccardo; Julia, Olivia M. L.; Garshasbi, Samira; Papakonstantinou, Ioannis; Valenta, Jan
Passive daytime radiative coolers (PDRCs) with exceptionally high solar reflectance and emissivity in the atmospheric window can provide sub-ambient cooling while reducing buildings’ cooling energy demand. However, glare and esthetic issues limit their application to high-rise buildings, while they may also increase a building’s heating energy needs. Passive colored radiative coolers (PCRCs), based on fluorescent materials, convert part of the absorbed UV and visible solar radiation into emitted light, providing color and reducing the thermal balance of the materials and the potential visual annoyance. This article investigates the state of the art of PCRCs based on fluorescent technologies. Seven articles presenting different combinations of PDRC technologies with fluorescent components to create PCRCs of various colors are presented and analyzed in detail. Quantum dots and phosphors embedded in polymer matrices and combined with reflecting and emitting layers were used as the fluorescent layer of the seven developed green, red, yellow, and yellow–green films. The proposed PCRCs are characterized by very significant differences in cooling performance, although most presented sub-ambient surface temperatures. Their cooling potential is comparatively investigated in terms of the testing climatic conditions and their optical characteristics. The potential increase of their surface temperature, caused by the addition of the fluorescent component, is analyzed through comparisons between the proposed PCRCs and the corresponding white PDRCs without the fluorescent component. The average temperature difference of the green, red, yellow, and yellow–green films against the reference PDRCs is found to be 0.66 °C, 2.6 °C, 1.7 °C and 1.4 °C, respectively. A decreasing trend, though not statistically significant, is observed between the temperature increase caused by the fluorescent additives and the corresponding photoluminescence quantum yield.
</description>
<pubDate>Tue, 28 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159182</guid>
<dc:date>2024-05-28T00:00:00Z</dc:date>
</item>
<item>
<title>Feasibility Study of Anisotropic Full-Waveform Inversion with DAS Data in a Vertical Seismic Profile Configuration at the Newell County Facility, Alberta, Canada</title>
<link>https://hdl.handle.net/1721.1/159181</link>
<description>Feasibility Study of Anisotropic Full-Waveform Inversion with DAS Data in a Vertical Seismic Profile Configuration at the Newell County Facility, Alberta, Canada
Qu, Luping; Pan, Wenyong; Innanen, Kristopher; Macquet, Marie; Lawton, Donald
As an emerging seismic acquisition technology, distributed acoustic sensing (DAS) has drawn significant attention in earth science for long-term and cost-effective monitoring of underground activities. Field seismic experiments with optical fibers in a vertical seismic profile (VSP) configuration were conducted at the Newell County Facility of Carbon Management Canada in Alberta, Canada, for CO2 injection and storage monitoring. Seismic full-waveform inversion (FWI) represents one promising approach for high-resolution imaging of subsurface model properties. In this study, anisotropic FWI with variable density is applied to the DAS-recorded walk-away VSP data for characterizing the subsurface velocity, anisotropy, and density structures, serving as baseline models for future time-lapse studies at the pilot site. Synthetic inversion experiments suggest that, without accounting for anisotropy, the density structures inverted by isotropic FWI are damaged by strong trade-off artifacts. Anisotropic FWI can provide more accurate P-wave velocity, density, and valuable anisotropy models. Field data applications are then performed to validate the effectiveness and superiority of the proposed methods. Compared to the inversion outputs of isotropic FWI, the P-wave velocity inverted by anisotropic FWI matches the trend variation of the well log more closely. In the inverted density model, the CO2 injection formation can be clearly resolved. The inverted anisotropy parameters provide informative references to interpret the structures and lithology around the target CO2 injection zone.
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159181</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamics of minimal networks of limit cycle oscillators</title>
<link>https://hdl.handle.net/1721.1/159180</link>
<description>Dynamics of minimal networks of limit cycle oscillators
Biju, Andrea E.; Srikanth, Sneha; Manoj, Krishna; Pawar, Samadhan A.; Sujith, R. I.
The framework of mutually coupled oscillators on a network has served as a convenient tool for investigating the impact of various parameters on the dynamics of real-world systems. Compared to large networks of oscillators, minimal networks are more susceptible to changes in coupling parameters, number of oscillators, and network topologies. In this study, we systematically explore the influence of these parameters on the dynamics of a minimal network comprising Stuart–Landau oscillators coupled with a distance-dependent time delay. We examine three network topologies: ring, chain, and star. Specifically, for ring networks, we study the effects of increasing nonlocality from local to global coupling on the overall dynamics of the system. Our findings reveal the existence of various synchronized states, including splay and cluster states, a partially synchronized state such as chimera with quasiperiodic oscillations, and an oscillation quenching state such as amplitude death in these networks. Through an analysis of long-lived transients, we discover novel amplitude-modulated in-phase and amplitude-modulated 2-cluster states within ring networks. Interestingly, we observe that increasing nonlocality diminishes the influence of the number of oscillators on the overall behavior in these networks. Furthermore, we note that oscillators in chain networks exhibit clustering in both amplitude and phase, while star networks demonstrate remote synchronization. The insights from this study deepen our understanding of the dynamics of minimal networks and have implications for various fields, ranging from biology to engineering.
</description>
<pubDate>Fri, 24 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159180</guid>
<dc:date>2024-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>InfoPro: Locally Supervised Deep Learning by Maximizing Information Propagation</title>
<link>https://hdl.handle.net/1721.1/159179</link>
<description>InfoPro: Locally Supervised Deep Learning by Maximizing Information Propagation
Wang, Yulin; Ni, Zanlin; Pu, Yifan; Zhou, Cai; Ying, Jixuan; Song, Shiji; Huang, Gao
End-to-end (E2E) training has become the de facto standard for training modern deep networks, e.g., ConvNets and vision Transformers (ViTs). Typically, a global error signal is generated at the end of a model and back-propagated layer-by-layer to update the parameters. This paper shows that the reliance on back-propagating global errors may not be necessary for deep learning. More precisely, deep networks with a competitive or even better performance can be obtained by purely leveraging locally supervised learning, i.e., splitting a network into gradient-isolated modules and training them with local supervision signals. However, such an extension is non-trivial. Our experimental and theoretical analysis demonstrates that simply training local modules with an E2E objective tends to be short-sighted, collapsing task-relevant information at early layers, and hurting the performance of the full model. To avoid this issue, we propose an information propagation (InfoPro) loss, which encourages local modules to preserve as much useful information as possible, while progressively discarding task-irrelevant information. As InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm. We evaluate InfoPro extensively with ConvNets and ViTs, based on twelve computer vision benchmarks organized into five tasks (i.e., image/video recognition, semantic/instance segmentation, and object detection). InfoPro exhibits superior efficiency over E2E training in terms of GPU memory footprints, convergence speed, and training data scale. Moreover, InfoPro enables the effective training of more parameter- and computation-efficient models (e.g., much deeper networks), which suffer from inferior performance when trained E2E. Code: https://github.com/blackfeather-wang/InfoPro-Pytorch.
</description>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159179</guid>
<dc:date>2024-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Half-Space Intersection Properties for Minimal Hypersurfaces</title>
<link>https://hdl.handle.net/1721.1/159178</link>
<description>Half-Space Intersection Properties for Minimal Hypersurfaces
Naff, Keaton; Zhu, Jonathan J.
We prove “half-space” intersection properties in three settings: the hemisphere, half-geodesic balls in space forms, and certain subsets of Gaussian space. For instance, any two embedded minimal hypersurfaces in the sphere must intersect in every closed hemisphere. Two approaches are developed: one using classifications of stable minimal hypersurfaces, and the second using conformal change and comparison geometry for α-Bakry-Émery-Ricci curvature. Our methods yield the analogous intersection properties for free boundary minimal hypersurfaces in space form balls, even when the interior or boundary curvature may be negative. Finally, Colding and Minicozzi recently showed that any two embedded shrinkers of dimension n must intersect in a large enough Euclidean ball of radius R(n). We show that R(n) ≤ 2n.
</description>
<pubDate>Wed, 16 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159178</guid>
<dc:date>2025-04-16T00:00:00Z</dc:date>
</item>
<item>
<title>Menin: from molecular insights to clinical impact</title>
<link>https://hdl.handle.net/1721.1/159177</link>
<description>Menin: from molecular insights to clinical impact
Brown, Margaret R; Soto-Feliciano, Yadira M
Menin, the protein product of the MEN1 gene, is essential for development and has been implicated in multiple different cancer types. These include leukemias and several different solid tumors, including neuroendocrine tumors. Menin interacts with many different protein partners and genomic loci in a context-dependent manner, implicating it in numerous cellular processes. The role of Menin varies across tumor types as well, acting as a tumor suppressor in some tissues and an oncogenic co-factor in others. Given the role of Menin in cancer, and particularly its oncogenic role in acute myeloid leukemia, the development of Menin inhibitors has been an expanding field over the past 10-15 years. Many inhibitors have been in clinical trials and one has recently received approval from the Food and Drug Administration (FDA). In this review, we explore the role of Menin in multiple cancer types, the development of Menin inhibitors and their clinical applications and what the focus of the field should be in the next 5-10 years to expand the use and efficacy of these drugs.
</description>
<pubDate>Fri, 28 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159177</guid>
<dc:date>2025-03-28T00:00:00Z</dc:date>
</item>
<item>
<title>Pooling solvent mixtures for solvation free energy predictions</title>
<link>https://hdl.handle.net/1721.1/159176</link>
<description>Pooling solvent mixtures for solvation free energy predictions
Leenhouts, Roel J.; Morgan, Nathan; Al Ibrahim, Emad; Green, William H.; Vermeire, Florence H.
Solvation free energy is an important design parameter in reaction kinetics and separation processes, making it a critical property to predict during process development. In previous research, directed message passing neural networks (D-MPNN) have successfully been used to predict solvation free energies and enthalpies in organic solvents. However, solvent mixtures provide greater flexibility for optimizing solvent interactions than monosolvents. This work aims to extend our previous models to mixtures. To handle mixtures in a permutation-invariant manner, we propose a pooling function, MolPool. With this pooling function, the machine learning models can learn and predict solvation energy and enthalpy for an arbitrary number of molecules in the mixed solvent. The novel SolProp-mix software that applies MolPool to D-MPNN was compared to state-of-the-art architectures for predicting mixture properties and validated with our new database of COSMOtherm calculations, BinarySolv-QM. To improve predictions towards experimental accuracy, the network was then fine-tuned on experimental data in monosolvents. To demonstrate the benefit of this transfer learning methodology, experimental datasets of solvation free energies in binary (BinarySolv-Exp) and ternary (TernarySolv-Exp) solvent mixtures were compiled from data on vapor–liquid equilibria and activity coefficients. The neural network performed comparably in accuracy to the benchmark of COSMOtherm calculations, with an MAE of 0.29 kcal/mol and an RMSE of 0.45 kcal/mol for binary mixed solvents. Additionally, the ability to capture trends for a varying mixture composition was validated successfully. Our model’s ability to accurately predict mixture properties from the combination of in silico data and pure component experimental data is promising given the scarcity of experimental data for mixtures in many fields.
</description>
<pubDate>Sun, 13 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159176</guid>
<dc:date>2025-04-13T00:00:00Z</dc:date>
</item>
<item>
<title>Energetic Constraints on Precipitation Under Climate Change</title>
<link>https://hdl.handle.net/1721.1/159174</link>
<description>Energetic Constraints on Precipitation Under Climate Change
O’Gorman, Paul A; Allan, Richard P; Byrne, Michael P; Previdi, Michael
Energetic constraints on precipitation are useful for understanding the response of the hydrological cycle to ongoing climate change, its response to possible geoengineering schemes, and the limits on precipitation in very warm climates of the past. Much recent progress has been made in quantifying the different forcings and feedbacks on precipitation and in understanding how the transient responses of precipitation and temperature might differ qualitatively. Here, we introduce the basic ideas and review recent progress. We also examine the extent to which energetic constraints on precipitation may be viewed as radiative constraints and the extent to which they are confirmed by available observations. Challenges remain, including the need to better demonstrate the link between energetics and precipitation in observations and to better understand energetic constraints on precipitation at sub-global length scales.
</description>
<pubDate>Sun, 01 Jul 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159174</guid>
<dc:date>2012-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Process-based analysis of climate model ENSO simulations: Intermodel consistency and compensating errors</title>
<link>https://hdl.handle.net/1721.1/159173</link>
<description>Process-based analysis of climate model ENSO simulations: Intermodel consistency and compensating errors
Linz, Marianna; Tziperman, Eli; MacMartin, Douglas G
Systematic and compensating errors can lead to degraded predictive skill in climate models. Such errors may be identified by comparing different models in an analysis of individual physical processes. We examine model simulations of El Niño–Southern Oscillation (ENSO) in five Coupled Model Intercomparison Project (CMIP) models, using transfer functions to analyze nine processes critical to ENSO's dynamics. The inputs and outputs of these processes, some of which are motivated by the recharge oscillator theory, are identified and analyzed. Several errors and compensating errors are identified. The east-west slope of the equatorial thermocline is found to respond to the central equatorial Pacific zonal wind stress as a damped driven harmonic oscillator in all models. This result is shown to be inconsistent with two different formulations of the recharge oscillator. East Pacific sea surface temperature (SST) responds consistently to changes in the thermocline depth in the eastern Pacific in the five CMIP models examined here. However, at time scales greater than 2 years, this consistent model response disagrees with observations, which show that SST leads thermocline depth at long time scales. Compensating errors are present in the response of the meridional transport of water away from the equator to SST: two different models show different responses of the transport to off-equatorial wind curl and of the wind curl to East Pacific SST. However, these two models show the same response of meridional transport to East Pacific SST. Identification of errors in specific physical processes can hopefully lead to model improvement by focusing model development efforts on these processes.
</description>
<pubDate>Fri, 27 Jun 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159173</guid>
<dc:date>2014-06-27T00:00:00Z</dc:date>
</item>
<item>
<title>Graphical house allocation with identical valuations</title>
<link>https://hdl.handle.net/1721.1/159172</link>
<description>Graphical house allocation with identical valuations
Hosseini, Hadi; McGregor, Andrew; Payan, Justin; Sengupta, Rik; Vaish, Rohit; Viswanathan, Vignesh
The classical house allocation problem involves assigning n houses (or items) to n agents according to their preferences. A key criterion in such problems is satisfying fairness constraints such as envy-freeness. We consider a generalization of this problem, called Graphical House Allocation, wherein the agents are placed along the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of the envy values over all edges of the social graph. We focus on graphical house allocation with identical valuations. When agents have identical and evenly-spaced valuations, our problem reduces to the well-studied Minimum Linear Arrangement problem. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting departs from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths, cycles, stars, cliques, and complete bipartite graphs; we also obtain fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, complete bipartite graphs, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call splittability, which results in efficient parameterized algorithms for finding optimal allocations.
</description>
<pubDate>Wed, 28 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159172</guid>
<dc:date>2024-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Apparent algorithmic discrimination and real-time algorithmic learning in digital search advertising</title>
<link>https://hdl.handle.net/1721.1/159171</link>
<description>Apparent algorithmic discrimination and real-time algorithmic learning in digital search advertising
Lambrecht, Anja; Tucker, Catherine
Digital algorithms try to display content that engages consumers. To do this, algorithms need to overcome a ‘cold-start problem’ by swiftly learning whether content engages users, which requires feedback from users. The algorithm targets segments of users. However, if there are fewer individuals in a targeted segment, simply because this group is rarer in the population, this could lead to uneven outcomes for minority relative to majority groups. This is because individuals in a minority segment are proportionately more likely to be test subjects for experimental content that may ultimately be rejected by the platform. We explore whether this is indeed the case in the context of ads displayed following searches on Google. Previous research has documented that searches on search engines for names associated, in a US context, with Black people were more likely to return ads highlighting the need for a criminal background check than were searches for white names. We implement search advertising campaigns that target ads to searches for Black and white names. Our ads are indeed more likely to be displayed following a search for a Black name, even though the likelihood of clicking was similar. Since Black names are less common, the algorithm learns about the quality of the underlying ad more slowly. As a result, an ad is more likely to persist for searches next to Black names than next to white names. Proportionally more Black-name searches are likely to have a low-quality ad shown next to them, even though eventually the ad will be rejected. A second study, in which ads are placed following searches for terms related to religious discrimination, confirms this empirical pattern. Our results suggest that, as a practical matter, real-time algorithmic learning can lead minority segments to be more likely to see content that will ultimately be rejected by the algorithm.
</description>
<pubDate>Thu, 25 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159171</guid>
<dc:date>2024-07-25T00:00:00Z</dc:date>
</item>
<item>
<title>Ordering Candidates via Vantage Points</title>
<link>https://hdl.handle.net/1721.1/159170</link>
<description>Ordering Candidates via Vantage Points
Alon, Noga; Defant, Colin; Kravitz, Noah; Zhu, Daniel G.
Given an n-element set C ⊆ R^d and a (sufficiently generic) k-element multiset V ⊆ R^d, we can order the points in C by ranking each point c ∈ C according to the sum of the distances from c to the points of V. Let Ψ_k(C) denote the set of orderings of C that can be obtained in this manner as V varies, and let ψ_{d,k}^max(n) be the maximum of |Ψ_k(C)| as C ranges over all n-element subsets of R^d. We prove that ψ_{d,k}^max(n) = Θ_{d,k}(n^{2dk}) when d ≥ 2 and that ψ_{1,k}^max(n) = Θ_k(n^{4⌈k/2⌉-2}). As a step toward proving this result, we establish a bound on the number of sign patterns determined by a collection of functions that are sums of radicals of nonnegative polynomials; this can be understood as an analogue of a classical theorem of Warren. We also prove several results about the set Ψ(C) = ⋃_{k≥1} Ψ_k(C); this includes an exact description of Ψ(C) when d = 1 and when C is the set of vertices of a vertex-transitive polytope.
</description>
<pubDate>Tue, 08 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159170</guid>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</item>
<item>
<title>Perseus: a simple and optimal high-order method for variational inequalities</title>
<link>https://hdl.handle.net/1721.1/159169</link>
<description>Perseus: a simple and optimal high-order method for variational inequalities
Lin, Tianyi; Jordan, Michael I.
This paper settles an open and challenging question pertaining to the design of simple and optimal high-order methods for solving smooth and monotone variational inequalities (VIs). A VI involves finding x⋆ ∈ X such that ⟨F(x), x - x⋆⟩ ≥ 0 for all x ∈ X. We consider the setting in which F : R^d → R^d is smooth with up to (p-1)th-order derivatives. For p = 2, the cubic regularization of Newton’s method has been extended to VIs with a global rate of O(ϵ^{-1}) (Nesterov in Cubic regularization of Newton’s method for convex problems with constraints, Tech. rep., Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2006). An improved rate of O(ϵ^{-2/3} log log(1/ϵ)) can be obtained via an alternative second-order method, but this method requires a nontrivial line-search procedure as an inner loop. Similarly, the existing high-order methods based on line-search procedures have been shown to achieve a rate of O(ϵ^{-2/(p+1)} log log(1/ϵ)) (Bullins and Lai in SIAM J Optim 32(3):2208–2229, 2022; Jiang and Mokhtari in Generalized optimistic methods for convex–concave saddle point problems, 2022; Lin and Jordan in Math Oper Res 48(4):2353–2382, 2023). As emphasized by Nesterov (Lectures on convex optimization, vol 137, Springer, Berlin, 2018), however, such procedures do not necessarily imply practical applicability in large-scale applications, and it is desirable to complement these results with a simple high-order VI method that retains the optimality of the more complex methods. We propose a pth-order method that does not require any line-search procedure and provably converges to a weak solution at a rate of O(ϵ^{-2/(p+1)}). We prove that our pth-order method is optimal in the monotone setting by establishing a lower bound of Ω(ϵ^{-2/(p+1)}) under a generalized linear span assumption. 
A restarted version of our pth-order method attains a linear rate for smooth and pth-order uniformly monotone VIs, and another restarted version attains a local superlinear rate for smooth and strongly monotone VIs. Further, a similar pth-order method achieves a global rate of O(ϵ^{-2/p}) for solving smooth and nonmonotone VIs satisfying the Minty condition. Two restarted versions attain a global linear rate under an additional pth-order uniform Minty condition and a local superlinear rate under an additional strong Minty condition.
</description>
<pubDate>Wed, 13 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159169</guid>
<dc:date>2024-03-13T00:00:00Z</dc:date>
</item>
<item>
<title>Graph coloring and semidefinite rank</title>
<link>https://hdl.handle.net/1721.1/159168</link>
<description>Graph coloring and semidefinite rank
Mirka, Renee; Smedira, Devin; Williamson, David P.
This paper considers the interplay between semidefinite programming, matrix rank, and graph coloring. Karger et al. (J ACM 45(2):246–265, 1998) give a vector program in which a coloring of a graph can be encoded as a semidefinite matrix of low rank. By complementary slackness conditions of semidefinite programming, if an optimal dual solution has high rank, any optimal primal solution must have low rank. We attempt to characterize graphs for which we can show that the corresponding dual optimal solution must have rank high enough that the primal solution encodes a coloring. In the case of the original Karger, Motwani, and Sudan vector program, we show that any graph which is a k-tree has sufficiently high dual rank, and we can extract the coloring from the corresponding low-rank primal solution. We can also show that if a graph is not uniquely colorable, then no sufficiently high rank dual optimal solution can exist. This allows us to completely characterize the planar graphs for which dual optimal solutions have sufficiently high dual rank, since it is known that the uniquely colorable planar graphs are precisely the planar 3-trees. We then modify the semidefinite program to have an objective function with costs, and explore when we can create an objective function such that the optimal dual solution has sufficiently high rank. We show that it is always possible to construct such an objective function given the graph coloring. The construction of the objective function gives rise to heuristics for 4-coloring planar graphs. We enumerated all maximal planar graphs with an induced K4 of up to 14 vertices; the heuristics successfully found a 4-coloring for 99.75% of them. Our research was motivated by trying to use semidefinite programming to prove the four-color theorem, which states that every planar graph can be colored with four colors. 
There is an intriguing connection between the Karger–Motwani–Sudan semidefinite program and the Colin de Verdière graph invariant (J Combin Theory Ser B 50:11–21, 1990) (and a corresponding conjecture of Colin de Verdière): matrices that have some similarities to the dual feasible matrices of the semidefinite program must have high rank when graphs are of a certain type; for instance, planar graphs have rank that would imply that the primal solution of the semidefinite program encodes a 4-coloring.
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159168</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Implications of heterogeneous SIR models for analyses of COVID-19</title>
<link>https://hdl.handle.net/1721.1/159167</link>
<description>Implications of heterogeneous SIR models for analyses of COVID-19
Ellison, Glenn
This paper provides a quick survey of results on the classic SIR model and variants allowing for heterogeneity in contact rates. It notes that calibrating the classic model to data generated by a heterogeneous model can lead to forecasts that are biased in several ways and to understatement of the forecast uncertainty. Among the biases are that we may underestimate how quickly herd immunity might be reached, underestimate differences across regions, and have biased estimates of the impact of endogenous and policy-driven social distancing.
</description>
<pubDate>Mon, 15 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159167</guid>
<dc:date>2024-07-15T00:00:00Z</dc:date>
</item>
<item>
<title>Tagged deep inelastic scattering measurement on deuterium with the LAD experiment</title>
<link>https://hdl.handle.net/1721.1/159166</link>
<description>Tagged deep inelastic scattering measurement on deuterium with the LAD experiment
Hauenstein, F.; Ayerbe Gayoso, C.; Ratliff, S.; Szumila-Vance, H.; Schmidt, A.; Ehinger, L.; Hen, O.; Higinbotham, D.; Korover, I.; Kutz, T.; Nguyen, D.; Piasetzky, E.; Weinstein, L. B.
The origin of the modification of the quark structure of nucleons in the nuclear medium can be tested with tagged recoil-nucleon measurements from deep inelastic scattering of electrons on deuterium. The LAD experiment at the Thomas Jefferson National Laboratory (JLab) will measure the modification of the neutron structure function for high-momentum, highly virtual neutrons by detecting the spectator recoil protons in coincidence with the scattered electron. An update on the experimental setup and projected results is presented. The experiment will collect data in Fall 2024.
</description>
<pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159166</guid>
<dc:date>2024-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>Strong interaction physics at the luminosity frontier with 22 GeV electrons at Jefferson Lab</title>
<link>https://hdl.handle.net/1721.1/159165</link>
<description>Strong interaction physics at the luminosity frontier with 22 GeV electrons at Jefferson Lab
Accardi, A.; Achenbach, P.; Adhikari, D.; Afanasev, A.; Akondi, C. S.; Akopov, N.; Albaladejo, M.; Albataineh, H.; Albrecht, M.; Almeida-Zamora, B.; Amaryan, M.; Androić, D.; Armstrong, W.; Armstrong, D. S.; Arratia, M.; Arrington, J.; Asaturyan, A.; Austregesilo, A.; Avakian, H.; Averett, T.; Gayoso, C. A.
The purpose of this document is to outline the developing scientific case for pursuing an energy upgrade to 22 GeV of the Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility (TJNAF, or JLab). This document was developed with input from a series of workshops held between March 2022 and April 2023 that were organized by the JLab user community and staff with guidance from JLab management (see Sect. 10). The scientific case for the 22 GeV energy upgrade leverages existing or already planned Hall equipment and the worldwide uniqueness of CEBAF high-luminosity operations. CEBAF delivers the world’s highest-intensity and highest-precision multi-GeV electron beams and has been doing so for more than 25 years. In Fall 2017, with the completion of the 12 GeV upgrade and the start of the 12 GeV science program, a new era at the Laboratory began. The 12 GeV era is now well underway, with many important experimental results already published and an exciting portfolio of Program Advisory Committee-approved experiments planned for at least the next 8–10 years [1]. At the same time, the CEBAF community is looking toward its future and the science that could be obtained through a future cost-effective upgrade to 22 GeV. The great potential to upgrade CEBAF to higher energies opens a rich and unique experimental nuclear physics program that combines an illustrious history with an exciting future, extending the life of the facility well into the 2030s and beyond.

JLab at 22 GeV will provide unique, world-leading science with high-precision, high-luminosity experiments elucidating the properties of quantum chromodynamics (QCD) in the valence regime (x ≥ 0.1). JLab at 22 GeV also enables researchers to probe the transition to a region of sea dominance, with access to hadrons of larger mass and different structures. With a fixed-target program at the “luminosity frontier”, large-acceptance detection systems, as well as high-precision spectrometers, CEBAF will continue to offer unique opportunities to shed light on the nature of QCD and the emergence of hadron structure for decades to come. In fact, CEBAF today, and with an energy upgrade, will continue to operate with several orders of magnitude higher luminosity than what is planned at the Electron-Ion Collider (EIC). CEBAF’s current and envisioned capabilities enable exciting scientific opportunities that complement the EIC operational reach, thus giving scientists the full suite of tools necessary to comprehensively understand how QCD builds hadronic matter.

The physics program laid out in this document spans a broad range of exciting initiatives that focus on a common theme, namely, investigations that explore different facets of the nonperturbative dynamics that manifest in hadron structure and probe the richness of these strongly interacting systems. The central themes of this program are reviewed in Sect. 2 (Introduction). The main components of the research program are highlighted in Sects. 3 through 8, followed by Sect. 9, which provides a brief overview of the 22 GeV CEBAF energy-doubling concept. These sections outline the key measurements in different areas of experimental studies possible at a 22 GeV CEBAF accelerator in the existing JLab experimental end stations. They provide details on the key physics outcomes and unique aspects of the programs not possible at other existing or planned facilities.

The 22 GeV physics program is being developed following three main principles: (a) identify the flagship measurements that can be done only with 22 GeV and their science impacts (Uniqueness); (b) identify the flagship measurements with 22 GeV that can extend and improve the 12 GeV measurements, helping the physics interpretation through multidimensional bins in extended kinematics (Enrichment); (c) identify the measurements with 22 GeV that can set the bridge between JLab12 and EIC (Complementarity). Even if a sharp separation among these three categories is sometimes difficult to maintain, we highlight the main points in the following.
</description>
<pubDate>Wed, 04 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159165</guid>
<dc:date>2024-09-04T00:00:00Z</dc:date>
</item>
<item>
<title>Role-play simulations for climate change adaptation education and engagement</title>
<link>https://hdl.handle.net/1721.1/159164</link>
<description>Role-play simulations for climate change adaptation education and engagement
Rumore, Danya; Schenk, Todd; Susskind, Lawrence
In order to effectively adapt to climate change, public officials and other stakeholders need to rapidly enhance their understanding of local risks and their ability to collaboratively and adaptively respond to them. We argue that science-based role-play simulation exercises, a type of 'serious game' involving face-to-face mock decision-making, have considerable potential as education and engagement tools for enhancing readiness to adapt. Prior research suggests role-play simulations and other serious games can foster public learning and encourage collective action in public policy-making contexts. However, the effectiveness of such exercises in the context of climate change adaptation education and engagement has heretofore been underexplored. We share results from two research projects that demonstrate the effectiveness of role-play simulations in cultivating climate change adaptation literacy, enhancing collaborative capacity and facilitating social learning. Based on our findings, we suggest such exercises should be more widely embraced as part of adaptation professionals' education and engagement toolkits.
</description>
<pubDate>Mon, 01 Aug 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159164</guid>
<dc:date>2016-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>The cost of CO2 capture and storage</title>
<link>https://hdl.handle.net/1721.1/159163</link>
<description>The cost of CO2 capture and storage
Rubin, Edward S; Davison, John E; Herzog, Howard J
The objective of this paper is to assess the current costs of CO2 capture and storage (CCS) for new fossil fuel power plants and to compare those results to the costs reported a decade ago in the IPCC Special Report on Carbon Dioxide Capture and Storage (SRCCS). Toward that end, we employed a similar methodology based on review and analysis of recent cost studies for the major CCS options identified in the SRCCS, namely, post-combustion CO2 capture at supercritical pulverized coal (SCPC) and natural gas combined cycle (NGCC) power plants, plus pre-combustion capture at coal-based integrated gasification combined cycle (IGCC) power plants. We also report current costs for SCPC plants employing oxy-combustion for CO2 capture, an option that was still in the early stages of development at the time of the SRCCS. To compare current CCS cost estimates to those in the SRCCS, we adjust all costs to constant 2013 US dollars using cost indices for power plant capital costs, fuel costs and other O&amp;M costs. On this basis, we report changes in capital cost, levelized cost of electricity, and mitigation costs for each power plant system with and without CCS. We also discuss the outlook for future CCS costs.
</description>
<pubDate>Tue, 01 Sep 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159163</guid>
<dc:date>2015-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Decoupling Economic Growth and Carbon Emissions</title>
<link>https://hdl.handle.net/1721.1/159162</link>
<description>Decoupling Economic Growth and Carbon Emissions
Deutch, John
All economic activity requires energy; to the extent this energy comes from fossil fuels, its use results in emissions of carbon dioxide (CO2). The nature of the link between growth in economic activity and carbon emissions is a critical question for climate change. Linkage implies that deep emission reductions will constrain economic growth; decoupling implies that deep emission reductions are possible with little or no effect on growth. An answer to this question is important for the United States, but more crucial for rapidly growing emerging economies such as China and India that seek to improve their citizens' access to low-cost energy while respecting the need to protect the global environment.
</description>
<pubDate>Fri, 01 Sep 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159162</guid>
<dc:date>2017-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reducing Proliferation Risks with High-Assay Low-Enriched Uranium Fuels in Reactors with Coated-Particle (TRISO) Fuels</title>
<link>https://hdl.handle.net/1721.1/159161</link>
<description>Reducing Proliferation Risks with High-Assay Low-Enriched Uranium Fuels in Reactors with Coated-Particle (TRISO) Fuels
Forsberg, Charles; Kadak, Andrew
The use of graphite-matrix tri-structural-isotropic (TRISO) fuels in high-temperature reactors with high-assay low-enriched uranium (HALEU) can significantly reduce nuclear weapons proliferation risks relative to other fuels and reactor types. HALEU fuels, containing 15% to 20% 235U, enable used nuclear fuels (UNFs) with thermal neutron–spectrum burnups between 150 000 and 200 000 MWd per ton. At these high burnups, the plutonium isotopics make direct use for nuclear weapons unattractive and the uranium isotopics unattractive as a feed to a uranium-enrichment plant. On the front end, it would require the theft of ~150 000 pebbles with uranium just under 20% 235U to create the theoretical potential to produce sufficient material for one weapon (1000 kg), which is about a 2-year supply of fuel for these reactors.

The chemical and mechanical processing requirements to convert fresh TRISO fuel to uranium metal for use in a nuclear weapon are beyond nonstate actors. Over 10 sequential chemical process steps would be required, plus uranium recovery from waste streams, to avoid large uranium losses in the conversion processes. If a nation-state wanted to make a nuclear weapon starting with HALEU fuel, it would enrich the HALEU from 19.95% to over 90% 235U, which presumes it already possesses enrichment capabilities and can use any uranium feedstock. If enriched to weapons-grade 235U, 1 ton of HALEU has sufficient 235U for multiple weapons.

Separately, it is not clear whether a weapon can actually be built with HALEU fuel. The fuel characteristics also reduce risks from sabotage. Consequently, we conclude that reactor safeguards for fresh HALEU TRISO fuel can be similar to those for low-enriched uranium light water reactor fuel; that is, no requirements for added security or other measures. TRISO UNF safeguards and security can be significantly relaxed relative to the requirements for other types of UNF at the reactor site.
</description>
<pubDate>Fri, 11 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159161</guid>
<dc:date>2025-04-11T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstration of the rodeo algorithm on a quantum computer</title>
<link>https://hdl.handle.net/1721.1/159160</link>
<description>Demonstration of the rodeo algorithm on a quantum computer
Qian, Zhengrong; Watkins, Jacob; Given, Gabriel; Bonitati, Joey; Choi, Kenneth; Lee, Dean
The rodeo algorithm is an efficient algorithm for eigenstate preparation and eigenvalue estimation for any observable on a quantum computer. This makes it a promising tool for studying the spectrum and structure of atomic nuclei as well as other fields of quantum many-body physics. The only requirement is that the initial state has sufficient overlap probability with the desired eigenstate. While it is exponentially faster than well-known algorithms such as phase estimation and adiabatic evolution for eigenstate preparation, it has yet to be implemented on an actual quantum device. In this work, we apply the rodeo algorithm to determine the energy levels of a random one-qubit Hamiltonian, resulting in a relative error of 0.08% using mid-circuit measurements on the IBM Q device Casablanca. This surpasses the accuracy of directly prepared eigenvector expectation values using the same quantum device. We take advantage of the high-accuracy energy determination and use the Hellmann–Feynman theorem to compute eigenvector expectation values for a different random one-qubit observable. For the Hellmann–Feynman calculations, we find a relative error of 0.7%. We conclude by discussing possible future applications of the rodeo algorithm for multi-qubit Hamiltonians.
</description>
<pubDate>Sat, 20 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159160</guid>
<dc:date>2024-07-20T00:00:00Z</dc:date>
</item>
<item>
<title>Instrumental uncertainties in radiative corrections for the MUSE experiment</title>
<link>https://hdl.handle.net/1721.1/159159</link>
<description>Instrumental uncertainties in radiative corrections for the MUSE experiment
Li, L.; Strauch, S.; Bernauer, J. C.; Briscoe, W. J.; Christopher Ndukwe, A.; Cline, E.; Cohen, D.; Deiters, K.; Downie, E. J.; Fernando, I. P.; Flannery, A.; Gilman, R.; Ilieva, Y.; Kohl, M.; Lavrukhin, I.; Lin, W.; Lorenzon, W.; Lunkenheimer, S.; Mohanmurthy, P.; Nazeer, J.
The MUSE experiment at the Paul Scherrer Institute is measuring elastic lepton-proton scattering cross sections in a four-momentum transfer range Q² of approximately 0.002–0.08 GeV² using positively and negatively charged electrons and muons. The extraction of the Born cross sections from the experimental data requires radiative corrections. Estimates of the instrumental uncertainties in those corrections have been made using the ESEPP event generator. The results depend in particular on the minimum lepton momentum that contributes to the experimental cross section and the fraction of events with hard initial-state radiation that is detected in the MUSE calorimeter and is excluded from the data. These results show that the angular-dependent instrumental uncertainties in radiative corrections to the electron cross section are less than 0.4% and are negligible for the muon cross section.
</description>
<pubDate>Wed, 10 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159159</guid>
<dc:date>2024-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>FlexpushdownDB: rethinking computation pushdown for cloud OLAP DBMSs</title>
<link>https://hdl.handle.net/1721.1/159158</link>
<description>FlexpushdownDB: rethinking computation pushdown for cloud OLAP DBMSs
Yang, Yifei; Yu, Xiangyao; Serafini, Marco; Aboulnaga, Ashraf; Stonebraker, Michael
Modern cloud-native OLAP databases adopt a storage-disaggregation architecture that separates the management of computation and storage. A major bottleneck in such an architecture is the network connecting the computation and storage layers. Computation pushdown is a promising solution to tackle this issue, which offloads some computation tasks to the storage layer to reduce network traffic. This paper presents FlexPushdownDB (FPDB), where we revisit the design of computation pushdown in a storage-disaggregation architecture, and then introduce several optimizations to further accelerate query processing. First, FPDB supports hybrid query execution, which combines local computation on cached data and computation pushdown to cloud storage at a fine granularity. Within the cache, FPDB uses a novel Weighted-LFU cache replacement policy that takes into account the cost of pushdown computation. Second, we design adaptive pushdown as a new mechanism to avoid throttling the storage-layer computation during pushdown, which pushes the request back to the computation layer at runtime if the storage-layer computational resource is insufficient. Finally, we derive a general principle to identify pushdown-amenable computational tasks, by summarizing common patterns of pushdown capabilities in existing systems, and further propose two new pushdown operators, namely, selection bitmap and distributed data shuffle. Evaluation on SSB and TPC-H shows that these optimizations improve performance by 2.2×, 1.9×, and 3×, respectively.
</description>
<pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159158</guid>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Do We Learn From Each Other: Understanding the Human-AI Co-Learning Process Embedded in Human-AI Collaboration</title>
<link>https://hdl.handle.net/1721.1/159157</link>
<description>Do We Learn From Each Other: Understanding the Human-AI Co-Learning Process Embedded in Human-AI Collaboration
Lu, Jinwei; Yan, Yikuan; Huang, Keman; Yin, Ming; Zhang, Fang
Beyond collaborating in the AI-supported decision-making setting to achieve complementary performance, humans and AI should learn from each other and internalize knowledge from their collaboration. This can enhance their individual performance when working independently after their collaboration. However, this expected dual-pathway co-learning process, including both “human learns from AI” and “AI learns from human”, does not occur spontaneously. Human-AI collaboration designs could have inconsistent and intertwined influences on the co-learning process. Based on learning cycle theory, this study conducted three online, two-stage, between-subject behavioral experiments to reveal how humans and AI learn from each other. By developing a context where humans and AI have comparable and moderate performance on emotion classification tasks, our study provides the first empirical evidence of an effective human-AI co-learning process within human-AI collaboration. However, the AI feedback and collaborative workflow design can lead to unequal and potentially negative impacts on both pathways of the co-learning process in groups with varying levels of cognitive reflection capability. These findings highlight three design principles to facilitate the co-learning process embedded in human-AI collaboration rather than naively deploying a complex AI system.
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159157</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Spectroscopic and computational investigations of Cobalt(II) binding to the innate immune protein human calprotectin</title>
<link>https://hdl.handle.net/1721.1/159156</link>
<description>Spectroscopic and computational investigations of Cobalt(II) binding to the innate immune protein human calprotectin
Killian, Michelle M.; Brophy, Megan B.; Nolan, Elizabeth M.; Brunold, Thomas C.
Human calprotectin (CP) is an innate immune protein that participates in the metal-withholding response to infection by sequestering essential metal nutrients from invading microbial pathogens. CP is composed of S100A8 (α subunit, 10.8 kDa) and S100A9 (β subunit, 13.2 kDa). Two transition-metal binding sites of CP form at the S100A8/S100A9 dimer interface. Site 1 is a His3Asp motif comprised of His83 and His87 from the S100A8 subunit and His20 and Asp30 from the S100A9 subunit. Site 2 is an unusual hexahistidine motif composed of S100A8 residues His17 and His27 and S100A9 residues His91, His95, His103, and His105. In the present study, the His3Asp and His6 sites of CP were further characterized by utilizing Co(II) as a spectroscopic probe. Magnetic circular dichroism spectroscopy was employed in conjunction with electron paramagnetic resonance spectroscopy and density functional theory computations to characterize the Co(II)-bound S100A8(C42S)/S100A9(C3S) CP-Ser variant and six site variants that allowed the His3Asp and His6 sites to be further probed. Our results provide new insight into the metal-binding sites of CP-Ser and the effect of amino acid substitutions on the structure of site 2.
</description>
<pubDate>Wed, 17 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159156</guid>
<dc:date>2024-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>Lp -Hardy identities and inequalities with respect to the distance and mean distance to the boundary</title>
<link>https://hdl.handle.net/1721.1/159155</link>
<description>Lp -Hardy identities and inequalities with respect to the distance and mean distance to the boundary
Flynn, Joshua; Lam, Nguyen; Lu, Guozhen
Firstly, this paper establishes useful forms of the remainder term of Hardy-type inequalities on general domains where the weights are functions of the distance to the boundary. For weakly mean convex domains we use the resulting identities to establish nonexistence of extremizers for, and to improve, known sharp Hardy inequalities. Secondly, we establish geometrically interesting remainders for the Davies-Hardy-Tidblom inequalities for the mean distance function, as well as generalize and improve several Hardy type inequalities in the spirit of Brezis and Marcus and spectral estimates of Davies. Lastly, we apply our results to obtain Sobolev inequalities for non-regular Riemannian metrics on geometric exterior domains.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159155</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplicity one for min–max theory in compact manifolds with boundary and its applications</title>
<link>https://hdl.handle.net/1721.1/159154</link>
<description>Multiplicity one for min–max theory in compact manifolds with boundary and its applications
Sun, Ao; Wang, Zhichao; Zhou, Xin
We prove the multiplicity one theorem for min–max free boundary minimal hypersurfaces in compact manifolds with boundary of dimension between 3 and 7 for generic metrics. To approach this, we develop an existence and regularity theory for free boundary hypersurfaces with prescribed mean curvature, which includes the regularity theory for minimizers, compactness theory, and a generic min–max theory with Morse index bounds. As applications, we construct new free boundary minimal hypersurfaces in the unit balls in Euclidean spaces and self-shrinkers of the mean curvature flow with arbitrarily large entropy.
</description>
<pubDate>Thu, 07 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159154</guid>
<dc:date>2024-03-07T00:00:00Z</dc:date>
</item>
<item>
<title>Transportation onto log-Lipschitz perturbations</title>
<link>https://hdl.handle.net/1721.1/159153</link>
<description>Transportation onto log-Lipschitz perturbations
Fathi, Max; Mikulincer, Dan; Shenfeld, Yair
We establish sufficient conditions for the existence of globally Lipschitz transport maps between probability measures and their log-Lipschitz perturbations, with dimension-free bounds. Our results include Gaussian measures on Euclidean spaces and uniform measures on spheres as source measures. More generally, we prove results for source measures on manifolds satisfying strong curvature assumptions. These seem to be the first examples of dimension-free Lipschitz transport maps in non-Euclidean settings, which are moreover sharp on the sphere. We also present some applications to functional inequalities, including a new dimension-free Gaussian isoperimetric inequality for log-Lipschitz perturbations of the standard Gaussian measure. Our proofs are based on the Langevin flow construction of transport maps of Kim and Milman.
</description>
<pubDate>Tue, 20 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159153</guid>
<dc:date>2024-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Golden lichtenberg algorithm: a fibonacci sequence approach applied to feature selection</title>
<link>https://hdl.handle.net/1721.1/159076</link>
<description>Golden lichtenberg algorithm: a fibonacci sequence approach applied to feature selection
Pereira, João L. J.; Francisco, Matheus B.; Ma, Benedict J.; Gomes, Guilherme F.; Lorena, Ana C.
Computational and technological advancements have led to an increase in data generation and storage capacity. Many annotated datasets have been used to train machine learning models for predictive tasks. Feature selection (FS) is a combinatorial binary optimization problem that arises from a need to reduce dataset dimensionality by finding the subset of features with maximum predictive accuracy. While different methodologies have been proposed, metaheuristics adapted to binary optimization have proven to be reliable and efficient techniques for FS. This paper applies the first and unique population-trajectory metaheuristic, the Lichtenberg algorithm (LA), and enhances it with a Fibonacci sequence to improve its exploration capabilities in FS. By substituting the random scale that controls the Lichtenberg figures' size and the population distribution in the original version with a sequence based on the golden ratio, a new optimal exploration–exploitation decay of the Lichtenberg figure size is presented. The new golden Lichtenberg algorithm (GLA), which has few hyperparameters, the original LA, and eight other popular metaheuristics are then equipped with the v-shaped transfer function and associated with the K-nearest neighbor classifier in the search for optimized feature subsets through a double cross-validation experimental method on 15 UCI machine learning repository datasets. The binary GLA selected reduced subsets of features, leading to the best predictive accuracy and fitness values at the lowest computational cost.
</description>
<pubDate>Tue, 13 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159076</guid>
<dc:date>2024-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Leveraging Seawater Thermal Energy Storage and Heat Pumps for Coupling Electricity and Urban Heating: A Techno-Economic Analysis</title>
<link>https://hdl.handle.net/1721.1/159075</link>
<description>Leveraging Seawater Thermal Energy Storage and Heat Pumps for Coupling Electricity and Urban Heating: A Techno-Economic Analysis
Abbiasov, Timur; Bischi, Aldo; Gangi, Manfredi; Baccioli, Andrea; Santi, Paolo; Ratti, Carlo
This paper presents an economic assessment of seawater thermal energy storage (TES) integrated with industrial heat pumps to couple renewable electricity generation with urban district heating networks. Using Amsterdam as a case study, we develop a techno-economic model leveraging real-world data on electricity prices, heat demand, and system costs. Our findings show that large-scale TES using seawater as a storage medium significantly enhances district heating economics through energy arbitrage and operational flexibility. The optimal configuration yields a net present value (NPV) of EUR 466 million over 30 years and a payback period under 6 years. Thermal storage increases NPV by 17% compared to systems without storage, while within-day load shifting further boosts economic value by 23%. Accurate demand and price forecasting is critical, as forecasting errors can reduce NPV by 13.7%. The proposed system is scalable and well suited for coastal cities, offering a sustainable, space-efficient solution for urban decarbonization and addressing renewable energy overproduction.
</description>
<pubDate>Mon, 07 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159075</guid>
<dc:date>2025-04-07T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence-Based Nutraceuticals Derived from Antrodia cinnamomea</title>
<link>https://hdl.handle.net/1721.1/159074</link>
<description>Evidence-Based Nutraceuticals Derived from Antrodia cinnamomea
Xu, Chunyuhang; Xie, Qingtong; Kuo, Chien-Liang; Yang, Xin; Huang, Dejian
Antrodia cinnamomea (A. cinnamomea), a medicinal and edible mushroom endemic to Taiwan, has been traditionally valued as a health tonic. Recent studies have highlighted the diverse specialized metabolites and bioactive potential of this mushroom, primarily attributed to key secondary metabolites such as benzenoids, maleic and succinic acids, ubiquinone, and triterpenoids, and the primary metabolite polysaccharides. These compounds exhibit a broad spectrum of pharmacological properties, including antibacterial, antitumor, anti-inflammatory, hepatoprotective, hypoglycaemic, and antioxidant activities, as well as immunomodulation and gut microbiota regulation. These findings highlight the therapeutic potential of A. cinnamomea and its potential applications in health supplements and functional foods. This review evaluated recent advancements in the cultivation, extraction, and characterization of bioactive compounds from A. cinnamomea, with a particular focus on submerged and solid-state fermentation methods. We hope to provide a comprehensive framework for promoting the efficient, evidence-based utilization of A. cinnamomea in novel therapeutic strategies and health-related innovations.
</description>
<pubDate>Sun, 30 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159074</guid>
<dc:date>2025-03-30T00:00:00Z</dc:date>
</item>
<item>
<title>Nature-inspired orientation-dependent toughening mechanism for TPMS ceramic architectures</title>
<link>https://hdl.handle.net/1721.1/159073</link>
<description>Nature-inspired orientation-dependent toughening mechanism for TPMS ceramic architectures
D’Andrea, Luca; Yang, Ting; Dao, Ming; Vena, Pasquale
Triply periodic minimal surfaces (TPMSs) have been extensively studied in many fields of engineering, including bone tissue scaffolds. Recent advancements in manufacturing have enabled the three-dimensional printing of ceramic porous architectures; however, their intrinsic brittleness limits their practical applications. It has been observed that the ossicles of the knobby starfish exhibit a mineralized TPMS structure with lattice distortions (i.e., dislocations), which effectively deviate the crack propagation and enhance the fracture energy. In this article, the aforementioned toughening mechanism has been introduced in a TPMS architecture. We employed finite element models to analyze the effective mechanical properties of the structures under compression, both in the elastic and post-elastic regimes. Our analysis reveals that the introduction of the dislocation induces variations in both elastic and fracture properties of the structures. With particular reference to the fracture behavior, a suitably oriented edge dislocation is able to alter the crack nucleation and propagation, resulting in a tougher structure. Both the elastic and fracture phenomena can be enhanced or reduced by changing the dislocation density.
</description>
<pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159073</guid>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>AGM aquariums and elliptic curves over arbitrary finite fields</title>
<link>https://hdl.handle.net/1721.1/159072</link>
<description>AGM aquariums and elliptic curves over arbitrary finite fields
Kayath, June; Lane, Connor; Neifeld, Ben; Ni, Tianyu; Xue, Hui
In this paper, we define a version of the arithmetic-geometric mean (AGM) function for arbitrary finite fields F_q, and study the resulting AGM graph with points (a, b) ∈ F_q × F_q and directed edges from (a, b) to ((a + b)/2, √(ab)) and from (a, b) to ((a + b)/2, −√(ab)). The points in this graph are naturally associated to elliptic curves over F_q in Legendre normal form, with the AGM function defining a 2-isogeny between the associated curves. We use this correspondence to prove several results on the structure, size, and multiplicity of the connected components in the AGM graph.
</description>
<pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159072</guid>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling</title>
<link>https://hdl.handle.net/1721.1/159071</link>
<description>Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling
Ghosh, Susobhan; Kim, Raphael; Chhabria, Prasidh; Dwivedi, Raaz; Klasnja, Predrag; Liao, Peng; Zhang, Kelly; Murphy, Susan
There is a growing interest in using reinforcement learning (RL) to personalize sequences of treatments in digital health to support users in adopting healthier behaviors. Such sequential decision-making problems involve decisions about when to treat and how to treat based on the user’s context (e.g., prior activity level, location, etc.). Online RL is a promising data-driven approach for this problem as it learns based on each user’s historical responses and uses that knowledge to personalize these decisions. However, to decide whether the RL algorithm should be included in an “optimized” intervention for real-world deployment, we must assess the data evidence indicating that the RL algorithm is actually personalizing the treatments to its users. Due to the stochasticity in the RL algorithm, one may get a false impression that it is learning in certain states and using this learning to provide specific treatments. We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the RL algorithm stochasticity. We illustrate our methodology with a case study by analyzing the data from a physical activity clinical trial called HeartSteps, which included the use of an online RL algorithm. We demonstrate how our approach enhances data-driven truth-in-advertising of algorithm personalization both across all users as well as within specific users in the study.
</description>
<pubDate>Wed, 10 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159071</guid>
<dc:date>2024-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Secure finite-time filtering for switched fuzzy systems with scaling attacks and stochastic sensor faults</title>
<link>https://hdl.handle.net/1721.1/159070</link>
<description>Secure finite-time filtering for switched fuzzy systems with scaling attacks and stochastic sensor faults
Sathishkumar, Murugesan; Joby, Maya; Ma, Yong-Ki; Anthoni, Selvaraj M.; Santra, Srimanta
In this study, we introduce a design for a robust secure finite-time mixed H∞ and passivity filter for discrete-time switched fuzzy systems. This design effectively combats both stochastic scaling attacks and sensor failures. To be specific, the sensor signals are represented by stochastic variables with different failure rates. Also, a comprehensive model is presented to characterize the scaling attacks, described by a Bernoulli distributed random variable. By designing a suitable Lyapunov functional candidate and leveraging the principles of finite-time theory, we have formulated a new collection of sufficient conditions. These conditions, expressed as linear matrix inequalities, ensure that the augmented fuzzy system maintains robust stochastic finite-time boundedness, along with a predetermined mixed H∞ and passivity performance index. Ultimately, two numerical demonstrations are provided, incorporating real-world applications from the continuous-time single-link robot arm model and the tunnel diode circuit systems, to highlight the practicality of the proposed secure filter design.
</description>
<pubDate>Mon, 10 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159070</guid>
<dc:date>2025-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>A Neural Network Retrieval Technique for High-Resolution Profiling of Cloudy Atmospheres</title>
<link>https://hdl.handle.net/1721.1/159068</link>
<description>A Neural Network Retrieval Technique for High-Resolution Profiling of Cloudy Atmospheres
Blackwell, William J; Milstein, Adam B
The synergistic use of microwave and hyperspectral infrared sounding observations gives rise to a rich array of signal processing challenges. Of particular interest are the following elements which are combined for the first time in the retrieval technique presented here: 1) radiance noise filtering and redundancy removal (compression) using principal components transforms and canonical correlations, 2) data fusion (infrared plus microwave at possibly different spatial and spectral resolutions) and stochastic cloud clearing (SCC), and 3) geophysical product retrieval from spectral radiance measurements using neural networks. In this paper, we describe the algorithm and demonstrate performance using the Atmospheric Infrared Sounder (AIRS) and the Advanced Microwave Sounding Unit (AMSU). We show that performance is improved by approximately 25%-50% using the neural network method relative to other common techniques. Furthermore, we quantify the improvement in the vertical resolution of the retrieved products.
</description>
<pubDate>Tue, 01 Apr 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159068</guid>
<dc:date>2014-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cloudy skies: assessing public understanding of global warming</title>
<link>https://hdl.handle.net/1721.1/159067</link>
<description>Cloudy skies: assessing public understanding of global warming
Sterman, John D; Sweeney, Linda Booth
Surveys show that most Americans believe global warming is real. But many advocate delaying action until there is more evidence that warming is harmful. The stock and flow structure of the climate, however, means “wait and see” policies guarantee further warming. Atmospheric CO2 concentration is now higher than any time in the last 420,000 years, and growing faster than any time in the past 20,000 years. The high concentration of CO2 and other greenhouse gases (GHGs) generates significant radiative forcing that contributes to warming. To reduce radiative forcing and the human contribution to warming, GHG concentrations must fall. To reduce GHG concentrations, emissions must fall below the rate at which GHGs are removed from the atmosphere. Anthropogenic CO2 emissions are now roughly double the removal rate, and the removal rate is projected to fall as natural carbon sinks saturate. Emissions must therefore fall by more than half even to stabilize CO2 at present record levels. Such reductions greatly exceed the Kyoto targets, while the Bush administration's Clear Skies Initiative calls for continued emissions growth. Does the public understand these physical facts? We report experiments assessing people's intuitive understanding of climate change. We presented highly educated graduate students with descriptions of greenhouse warming drawn from the IPCC's nontechnical reports. Subjects were then asked to identify the likely response to various scenarios for CO2 emissions or concentrations. The tasks require no mathematics, only an understanding of stocks and flows and basic facts about climate change. Overall performance was poor. Subjects often select trajectories that violate conservation of matter. Many believe temperature responds immediately to changes in CO2 emissions or concentrations. 
Still more believe that stabilizing emissions near current rates would stabilize the climate, when in fact emissions would continue to exceed removal, increasing GHG concentrations and radiative forcing. Such beliefs support “wait and see” policies, but violate basic laws of physics. We discuss implications for education and public policy.
</description>
<pubDate>Sat, 01 Jun 2002 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159067</guid>
<dc:date>2002-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of electricity generation mix on battery electric vehicle adoption and its environmental impact</title>
<link>https://hdl.handle.net/1721.1/159066</link>
<description>Effect of electricity generation mix on battery electric vehicle adoption and its environmental impact
Choi, Hyunhong; Shin, Jungwoo; Woo, JongRoul
Battery electric vehicles (BEVs) are gaining much attention as the next technology paradigm in the transport sector because they can mitigate the environmental problems, such as greenhouse gas (GHG) emissions, of conventional internal combustion engine vehicles. Many countries are attempting to promote the consumer adoption of BEVs in this sense by providing subsidies and expanding their related infrastructure. The expected environmental effect of BEVs is an important factor in increasing the consumer adoption of BEVs, and the environmental impact of BEVs is directly related to the electricity generation mix. In this study, we analyze how the consumer adoption behavior of BEVs and their environmental impact can be changed by improving the environmental performance of BEVs based on the electricity generation mix. To this end, we estimate consumer preferences for vehicles by using a discrete choice experiment survey and a mixed logit model. Then, on the basis of the estimation results, we simulate changes in consumers’ vehicle adoption behavior according to various electricity generation mixes and measure the environmental impact of these changes. Analysis results show that changing the electricity generation mix to a renewable-oriented mix can increase BEVs' market share by up to 10% and reduce GHG emissions by up to 5% by 2026.
</description>
<pubDate>Mon, 01 Oct 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159066</guid>
<dc:date>2018-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>A secondary analysis: the impact of pre-existing chronic pain among patients with cancer presenting to the emergency department with acute pain</title>
<link>https://hdl.handle.net/1721.1/159065</link>
<description>A secondary analysis: the impact of pre-existing chronic pain among patients with cancer presenting to the emergency department with acute pain
Beck, Meghan; Schreiber, Kristin L.; Wilson, Jenna M.; Flowers, K. M.; Edwards, Robert R.; Chai, Peter R.; Azizoddin, Desiree R.
Purpose Patients with cancer may experience pain from cancer itself or its treatment. Additionally, chronic pain (CP) predating a patient’s cancer diagnosis may make the etiology of pain less clear and the management of pain more complex. In this brief report, we investigated differences in biopsychosocial characteristics, pain severity, and opioid consumption, comparing groups of cancer patients with and without a history of CP who presented to the emergency department (ED) with a complaint of cancer-related pain. Methods This secondary analysis of a prospective cohort study included patients with cancer who presented to the ED with a complaint of pain (≥ 4/10). Sociodemographic, clinical, psychological, and pain characteristics were assessed in the ED and subsequent hospitalization. Mann-Whitney U-, T-, and Chi-square tests were used to compare differences between patients with and without pre-existing CP before cancer. Results Patients with pre-existing CP had lower income (p = 0.21) and less formal education (p = 0.25) and were more likely to have a diagnosis of depression or substance use disorder (p &lt; 0.01). Patients with pre-existing CP reported significantly greater pain severity in the ED and during hospitalization compared to those without pre-existing CP (p &lt; 0.05), despite receiving greater amounts of opioid analgesics (p = 0.036). Conclusion Identifying a history of pre-existing CP during intake may help identify patients with cancer with difficult-to-manage pain, who may particularly benefit from multimodal interventions and supportive care. In addition, referral of these patients for the management of co-occurring pain disorders may help decrease the usage of the ED for undertreated pain.
</description>
<pubDate>Thu, 25 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159065</guid>
<dc:date>2024-01-25T00:00:00Z</dc:date>
</item>
<item>
<title>Uniacute Spherical Codes</title>
<link>https://hdl.handle.net/1721.1/159064</link>
<description>Uniacute Spherical Codes
Lepsveridze, Saba; Saatashvili, Aleksandre; Zhao, Yufei
A spherical L-code, where L ⊆ [−1, ∞), consists of unit vectors in R^d whose pairwise inner products are contained in L. Determining the maximum cardinality N_L(d) of an L-code in R^d is a fundamental question in discrete geometry and has been extensively investigated for various choices of L. Our understanding in high dimensions is generally quite poor. Equiangular lines, corresponding to L = {−α, α}, are a rare and notable solved case. Bukh studied an extension of equiangular lines and showed that N_L(d) = O_L(d) for L = [−1, −β] ∪ {α} with α, β &gt; 0 (we call such L-codes “uniacute”), leaving open the question of determining the leading constant factor. Balla, Dräxler, Keevash, and Sudakov proved a “uniform bound” showing lim sup_{d→∞} N_L(d)/d ≤ 2p for L = [−1, −β] ∪ {α} and p = ⌊α/β⌋ + 1. For which (α, β) is this uniform bound tight? We completely answer this question. We develop a framework for studying uniacute codes, including a global structure theorem showing that the Gram matrix has an approximate p-block structure. We also formulate a notion of “modular codes,” which we conjecture to be optimal in high dimensions.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159064</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling</title>
<link>https://hdl.handle.net/1721.1/159063</link>
<description>New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling
Compton, Spencer; Mitrović, Slobodan; Rubinfeld, Ronitt
Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in a dynamic setting produces several new results. For (1 + ε)-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with O(log n / ε) update and O(log n) query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic deterministic algorithm whose worst-case update and query times are poly(log n, 1/ε). This is the first algorithm that maintains a (1 + ε)-approximation of the maximum independent set of a collection of weighted intervals in poly(log n, 1/ε) time updates/queries. This is an exponential improvement in 1/ε over the running time of an algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs’ starting/ending times and weights.
</description>
<pubDate>Thu, 18 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159063</guid>
<dc:date>2024-07-18T00:00:00Z</dc:date>
</item>
<item>
<title>Hypergeometric L-functions in average polynomial time, II</title>
<link>https://hdl.handle.net/1721.1/159062</link>
<description>Hypergeometric L-functions in average polynomial time, II
Costa, Edgar; Kedlaya, Kiran S.; Roe, David
We describe an algorithm for computing, for all primes p ≤ X, the trace of Frobenius at p of a hypergeometric motive over Q in time quasilinear in X. This involves computing the trace modulo p^e for suitable e; as in our previous work treating the case e = 1, we combine the Beukers–Cohen–Mellit trace formula with average polynomial time techniques of Harvey and Harvey–Sutherland. The key new ingredient for e &gt; 1 is an expanded version of Harvey’s “generic prime” construction, making it possible to incorporate certain p-adic transcendental functions into the computation; one of these is the p-adic Gamma function, whose average polynomial time computation is an intermediate step which may be of independent interest. We also provide an implementation in Sage and discuss the remaining computational issues around tabulating hypergeometric L-series.
</description>
<pubDate>Thu, 30 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159062</guid>
<dc:date>2025-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Deterministic near-optimal distributed listing of cliques</title>
<link>https://hdl.handle.net/1721.1/159061</link>
<description>Deterministic near-optimal distributed listing of cliques
Censor-Hillel, Keren; Leitersdorf, Dean; Vulakh, David
The importance of classifying connections in large graphs has been the motivation for a rich line of work on distributed subgraph finding that has led to exciting recent breakthroughs. A crucial aspect that remained open was whether deterministic algorithms can be as efficient as their randomized counterparts, where the latter are known to be tight up to polylogarithmic factors. We give deterministic distributed algorithms for listing cliques of size p in n^(1−2/p+o(1)) rounds in the Congest model. For triangles, our n^(1/3+o(1)) round complexity improves upon the previous state of the art of n^(2/3+o(1)) rounds (Chang and Saranurak, in: 2020 IEEE 61st annual symposium on foundations of computer science (FOCS), pp 377–388. IEEE Computer Society, Los Alamitos, 2020. https://doi.org/10.1109/FOCS46700.2020.00043). For cliques of size p ≥ 4, ours are the first non-trivial deterministic distributed algorithms. Given known lower bounds, for all values p ≥ 3 our algorithms are tight up to an n^(o(1)) subpolynomial factor, which comes from the deterministic routing procedure we use.
This article is part of a collection for a Special Issue of Distributed Computing: by invitation only, this special issue highlights the best papers from the ACM Symposium on Principles of Distributed Computing (PODC 2022) held in Salerno, Italy, on July 25–29, 2022.
</description>
<pubDate>Thu, 20 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159061</guid>
<dc:date>2024-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Residual-Mean Solutions for the Antarctic Circumpolar Current and Its Associated Overturning Circulation</title>
<link>https://hdl.handle.net/1721.1/159060</link>
<description>Residual-Mean Solutions for the Antarctic Circumpolar Current and Its Associated Overturning Circulation
Marshall, John; Radko, Timour
Residual-mean theory is applied to the streamwise-averaged Antarctic Circumpolar Current to arrive at a concise description of the processes that set up its stratification and meridional overturning circulation on an f plane. Simple solutions are found in which transfer by geostrophic eddies colludes with applied winds and buoyancy fluxes to determine the depth and stratification of the thermocline and the pattern of associated (residual) meridional overturning circulation.
</description>
<pubDate>Sat, 01 Nov 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159060</guid>
<dc:date>2003-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Climate extremes and ozone pollution: a growing threat to China’s food security</title>
<link>https://hdl.handle.net/1721.1/159059</link>
<description>Climate extremes and ozone pollution: a growing threat to China’s food security
Tian, Hanqin; Ren, Wei; Tao, Bo; Sun, Ge; Chappelka, Art; Wang, Xiaoke; Pan, Shufen; Yang, Jia; Liu, Jiyuan; Felzer, Ben S.; Melillo, Jerry M.; Reilly, John
Ensuring global food security requires a sound understanding of climate and environmental controls on crop productivity. The majority of existing assessments have focused on physical climate variables (i.e., mean temperature and precipitation), but less on the increasing climate extremes (e.g., drought) and their interactions with increasing levels of tropospheric ozone (O3). Here we quantify the combined impacts of drought and O3 on China's crop yield using a comprehensive, process-based agricultural ecosystem model in conjunction with observational data. Our results indicate that climate change/variability and O3 together led to an annual mean reduction of crop yield by 10.0% or 55 million tons per year at the national level during 1981–2010. Crop yield shows a growing threat from severe episodic droughts and increasing O3 concentrations since 2000, with the largest crop yield losses occurring in northern China, causing serious concerns in food supply security in China. Our results imply that reducing tropospheric O3 levels is critical for securing crop production in coping with increasing frequency and severity of extreme climate events such as droughts. Improving air quality should be a core component of climate adaptation strategies.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159059</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>What have we learned about artificial intelligence from studying the brain?</title>
<link>https://hdl.handle.net/1721.1/159058</link>
<description>What have we learned about artificial intelligence from studying the brain?
Gershman, Samuel J.
Neuroscience and artificial intelligence (AI) share a long, intertwined history. It has been argued that discoveries in neuroscience were (and continue to be) instrumental in driving the development of new AI technology. Scrutinizing these historical claims yields a more nuanced story, where AI researchers were loosely inspired by the brain, but ideas flowed mostly in the other direction.
</description>
<pubDate>Sat, 10 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159058</guid>
<dc:date>2024-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial intelligence in neurology: opportunities, challenges, and policy implications</title>
<link>https://hdl.handle.net/1721.1/159057</link>
<description>Artificial intelligence in neurology: opportunities, challenges, and policy implications
Voigtlaender, Sebastian; Pawelczyk, Johannes; Geiger, Mario; Vaios, Eugene J.; Karschnia, Philipp; Cudkowicz, Merit; Dietrich, Jorg; Haraldsen, Ira R. J. H.; Feigin, Valery; Owolabi, Mayowa; White, Tara L.; Świeboda, Paweł; Farahany, Nita
Neurological conditions are the leading cause of disability and mortality combined, demanding innovative, scalable, and sustainable solutions. Brain health has become a global priority with adoption of the World Health Organization’s Intersectoral Global Action Plan in 2022. Simultaneously, rapid advancements in artificial intelligence (AI) are revolutionizing neurological research and practice. This scoping review of 66 original articles explores the value of AI in neurology and brain health, systematizing the landscape for emergent clinical opportunities and future trends across the care trajectory: prevention, risk stratification, early detection, diagnosis, management, and rehabilitation. AI’s potential to advance personalized precision neurology and global brain health directives hinges on resolving core challenges across four pillars—models, data, feasibility/equity, and regulation/innovation—through concerted pursuit of targeted recommendations. Paramount actions include swift, ethical, equity-focused integration of novel technologies into clinical workflows, mitigating data-related issues, counteracting digital inequity gaps, and establishing robust governance frameworks balancing safety and innovation.
</description>
<pubDate>Sat, 17 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159057</guid>
<dc:date>2024-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Lower crustal assimilation revealed by sulfur isotope systematics of the Bear Valley Intrusive Suite, southern Sierra Nevada Batholith, California, USA</title>
<link>https://hdl.handle.net/1721.1/159056</link>
<description>Lower crustal assimilation revealed by sulfur isotope systematics of the Bear Valley Intrusive Suite, southern Sierra Nevada Batholith, California, USA
Rezeau, Hervé; Jagoutz, Oliver; Beaudry, Patrick; Klein, Benjamin Z; Izon, Gareth; Ono, Shuhei
The origin of the wide range of sulfur isotope compositions (i.e., δ34S) measured in arc rocks remains debated. While the observed δ34S variability has been attributed to slab-related fluids that flux the sub-arc mantle, others have argued that it primarily reflects crustal-derived processes by some combination of magmatic differentiation, country rock assimilation, and/or degassing. Here, we present new whole rock sulfur isotopes for the Late Cretaceous Bear Valley Intrusive Suite (BVIS) that represents a continuous arc crustal section in the southern Sierra Nevada Batholith, exposing lower crustal mafic cumulates and cogenetic mid-upper crustal tonalites. Our data reveal a range of δ34S-depleted values (–1.2 to − 5.1‰) for the BVIS with overlapping δ34S between mafic cumulates and tonalites. Complementary δ34S measurements of structurally concordant metasedimentary pendants indicate δ34S-depleted values (–11.5 to − 5.2‰) for deep metasedimentary rocks compared to δ34S-enriched values (+ 1.6 to + 6.4‰) for shallower ones. Quantitative mixing models suggest that assimilation of crustal-derived sulfur from metasedimentary rocks in the lower crust can account for the δ34S-depleted values in the BVIS, whereas assimilation of shallower ones is unlikely. Sulfur degassing modelling indicates that the range of δ34S-depleted values observed within mid-upper crustal tonalites can be reproduced by degassing  ~60–80% of the initial melt sulfur at fO2 ≤ FMQ + 1 with initial H2O content of 10–12 wt%. Finally, the identical ranges of δ34S values within the tonalites and mafic cumulates argue for limited sulfur isotope fractionation related to magmatic sulfide immiscibility. 
Although assimilation, magma degassing and sulfide immiscibility are not mutually exclusive during crustal magmatic processes, field, thermal and geochemical evidence favor lower crustal-derived sulfur assimilation as the primary mechanism to explain the range of δ34S-depleted values within the mafic cumulates, which are ultimately inherited by the derivative tonalitic melts. Overall, this study emphasizes that deep crustal magmatic processes can severely influence the early δ34S evolution of arc magmas.
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159056</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Instrument stiffness artifacts: avoiding bad data with operational limit lines of Gmax and Emax</title>
<link>https://hdl.handle.net/1721.1/159054</link>
<description>Instrument stiffness artifacts: avoiding bad data with operational limit lines of Gmax and Emax
Hossain, Mohammad T.; Macosko, Christopher W.; McKinley, Gareth H.; Ewoldt, Randy H.
We derive an operating limit line for the non-ideal artifacts caused by machine stiffness (instrument compliance) which causes measured apparent viscoelastic moduli to be systematically lower than the true values. The limit is represented as a maximum measurable apparent shear modulus Gmax, or tensile modulus Emax, which can be shown explicitly on plots of viscoelastic moduli independent of the applied displacement, load, or frequency. Uncorrected data should be much lower than these limits. Corrected data can be above these limits and credible. These interpretations are supported by studying how correction equations can be re-written in terms of Gmax or Emax and how error propagates in the corrections. We also show how the dynamic compliance representation leads to simpler corrections and how machine stiffness can be calibrated from apparent dynamic compliance measurements of a single sample at two different geometry conditions. Equations are provided for rotational rheometers as well as linear displacement dynamic mechanical analyzers. Used as an operational limit line, Gmax or Emax, the method can assess the credibility of data from others—even without access to their primary data of displacement, force, torque, or amount of correction, which are rarely reported. The method can also anticipate future issues before data are taken, e.g., to understand operational limits when selecting instruments and test geometries.
</description>
<pubDate>Mon, 20 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159054</guid>
<dc:date>2025-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>Coalesce: An Accessible Mixed-Initiative System for Designing Community-Centric Questionnaires</title>
<link>https://hdl.handle.net/1721.1/159053</link>
<description>Coalesce: An Accessible Mixed-Initiative System for Designing Community-Centric Questionnaires
Overney, Cassandra; Kessler, Daniel; Fulay, Suyash; Jasim, Mahmood; Roy, Deb
Effectively incorporating community input into civic decision-making processes is crucial for fostering inclusive governance. However, public officials often face challenges in formulating effective questions to gather meaningful insights due to constraints such as time, resources, and limited experience in questionnaire design. This paper explores the potential of leveraging large language models (LLMs) to address this challenge. We present Coalesce, a novel mixed-initiative system that utilizes LLMs to assist civic leaders in crafting tailored and impactful questions for surveys, interviews, and conversation guides. Guided by best practices in questionnaire design, Coalesce improves question readability, enhances specificity, and reduces bias. To inform our design, we conducted a formative interview study with 30 civic leaders and implemented an iterative human-centered design process involving 14 feedback sessions. We built a fully-functional system before evaluating it through a real-world user study with 16 participants who applied the platform to their own community engagement projects. Our findings show that Coalesce improved participants’ confidence in questionnaire design, supported diverse workflows, and fostered learning while raising important questions about human agency and over-reliance on AI. These insights highlight the potential for intelligent user interfaces to reshape how civic leaders engage with their communities, fostering more informed and inclusive decision-making processes.
IUI ’25, Cagliari, Italy
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159053</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>NeST: Neural Stress Tensor Tomography by leveraging 3D Photoelasticity</title>
<link>https://hdl.handle.net/1721.1/159052</link>
<description>NeST: Neural Stress Tensor Tomography by leveraging 3D Photoelasticity
Dave, Akshat; Zhang, Tianyi; Young, Aaron; Raskar, Ramesh; Heidrich, Wolfgang; Veeraraghavan, Ashok
Photoelasticity enables full-field stress analysis in transparent objects through stress-induced birefringence. Existing techniques are limited to 2D slices and require destructively slicing the object. Recovering the internal 3D stress distribution of the entire object is challenging as it involves solving a tensor tomography problem and handling phase wrapping ambiguities. We introduce NeST, an analysis-by-synthesis approach for reconstructing 3D stress tensor fields as neural implicit representations from polarization measurements. Our key insight is to jointly handle phase unwrapping and tensor tomography using a differentiable forward model based on Jones calculus. Our non-linear model faithfully matches real captures, unlike prior linear approximations. We develop an experimental multi-axis polariscope setup to capture 3D photoelasticity and experimentally demonstrate that NeST reconstructs the internal stress distribution for objects with varying shape and force conditions. Additionally, we showcase novel applications in stress analysis, such as visualizing photoelastic fringes by virtually slicing the object and viewing photoelastic fringes from unseen viewpoints. NeST paves the way for scalable non-destructive 3D photoelastic analysis.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159052</guid>
</item>
<item>
<title>OpenEarable 2.0: Open-Source Earphone Platform for Physiological Ear Sensing</title>
<link>https://hdl.handle.net/1721.1/159051</link>
<description>OpenEarable 2.0: Open-Source Earphone Platform for Physiological Ear Sensing
Röddiger, Tobias; Küttner, Michael; Lepold, Philipp; King, Tobias; Moschina, Dennis; Bagge, Oliver; Paradiso, Joseph; Clarke, Christopher; Beigl, Michael
Earphones have evolved from pure audio devices to "earables" that are capable of advanced sensing. Bespoke research devices have shown the unique sensing capabilities of the earable platform; however, they are hard to replicate and require expertise to develop in the first place. In this paper, we present OpenEarable 2.0 - an open source, unified platform that integrates a larger number of sensors for conducting comprehensive earable research. OpenEarable 2.0 works as regular binaural Bluetooth earphones and features two ultrasound capable microphones (inward/outward), a 3-axis ear canal accelerometer/bone microphone, a 9-axis head inertial measurement unit, pulse oximeter, optical temperature sensor, ear canal pressure sensor, and microSD card. These capabilities allow for the detection and measurement of 30+ phenomena on the ear that can be used across a wide range of applications in health monitoring, activity tracking, human-computer interaction, and authentication. We describe the design and development of OpenEarable 2.0, which follows best open hardware practices and achieves commercial-level wearability. We provide justification for the selection and placement of integrated sensors and include in-depth descriptions of the extensible, open source firmware and hardware that are implemented using free-to-use tools and frameworks. For real-time sensor control and data recording we also contribute a web-based dashboard and mobile smartphone app. The wearability and ability to sense different phenomena are validated in four studies which showcase how OpenEarable 2.0 provides accurate measurements in comparison to established gold-standard measurements. We further demonstrate that OpenEarable 2.0 can be assembled by inexperienced users, and that undergraduate students can build applications using the OpenEarable platform.
</description>
<pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159051</guid>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints</title>
<link>https://hdl.handle.net/1721.1/159050</link>
<description>Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints
Lechowicz, Adam; Christianson, Nicolas; Sun, Bo; Bashir, Noman; Hajiesmaili, Mohammad; Wierman, Adam; Shenoy, Prashant
We introduce and study spatiotemporal online allocation with deadline constraints (SOAD), a new online problem motivated by emerging challenges in sustainability and energy. In SOAD, an online player completes a workload by allocating and scheduling it on the points of a metric space (X, d) while subject to a deadline T. At each time step, a service cost function is revealed that represents the cost of servicing the workload at each point, and the player must irrevocably decide the current allocation of work to points. Whenever the player moves this allocation, they incur a movement cost defined by the distance metric d(·, ·) that captures, e.g., an overhead cost. SOAD formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for SOAD along with a matching lower bound establishing its optimality. Our main algorithm, ST-CLIP, is a learning-augmented algorithm that takes advantage of predictions (e.g., forecasts of relevant costs) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms in a simulated case study of carbon-aware spatiotemporal workload management, an application in sustainable computing that schedules a delay-tolerant batch compute job on a distributed network of data centers. In these experiments, we show that ST-CLIP substantially improves on heuristic baseline methods.
</description>
<pubDate>Mon, 10 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159050</guid>
<dc:date>2025-03-10T00:00:00Z</dc:date>
</item>
<item>
<title>The ultra-thin conception of objecthood</title>
<link>https://hdl.handle.net/1721.1/159049</link>
<description>The ultra-thin conception of objecthood
Rayo, Agustin
In his excellent book Thin Objects, Øystein Linnebo develops a conception of objecthood that allows for thin objects: objects whose ‘existence does not make a substantial demand on the world’ (p. 4). His proposal is premised on the Fregean dictum that to be an object is to be the referent of a possible singular term (p. 22). As a result, much of Linnebo’s argumentation is focused on defending a ‘thin’ conception of reference, which is liberal enough to allow for thin objects. This paper is a critique of Linnebo’s conception of reference.
</description>
<pubDate>Sun, 02 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159049</guid>
<dc:date>2025-03-02T00:00:00Z</dc:date>
</item>
<item>
<title>What Constitutes a Less Discriminatory Algorithm?</title>
<link>https://hdl.handle.net/1721.1/159048</link>
<description>What Constitutes a Less Discriminatory Algorithm?
Laufer, Benjamin; Raghavan, Manish; Barocas, Solon
Disparate impact doctrine offers an important legal apparatus for targeting discriminatory data-driven algorithmic decisions. A recent body of work has focused on conceptualizing one particular construct from this doctrine: the less discriminatory alternative, an alternative policy that reduces disparities while meeting the same business needs of a status quo or baseline policy. However, attempts to operationalize this construct in the algorithmic setting must grapple with some thorny challenges and ambiguities. In this paper, we attempt to raise and resolve important questions about less discriminatory algorithms (LDAs). How should we formally define LDAs, and how does this interact with different societal goals they might serve? And how feasible is it for firms or plaintiffs to computationally search for candidate LDAs? We find that formal LDA definitions face fundamental challenges when they attempt to evaluate and compare predictive models in the absence of held-out data. As a result, we argue that LDA definitions cannot be purely quantitative, and must rely on standards of "reasonableness." We then raise both mathematical and computational constraints on firms' ability to efficiently conduct a proactive search for LDAs, but we provide evidence that these limits are "weak" in a formal sense. By defining LDAs formally, we put forward a framework in which both firms and plaintiffs can search for alternative models that comport with societal goals.
CSLAW ’25, München, Germany
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159048</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>Ortho-Unit Polygons can be Guarded with at most ⌊(n − 4)/8⌋ Guards</title>
<link>https://hdl.handle.net/1721.1/159047</link>
<description>Ortho-Unit Polygons can be Guarded with at most ⌊(n − 4)/8⌋ Guards
Díaz-Báñez, J. M.; Horn, P.; Lopez, M. A.; Marín, N.; Ramírez-Vigueras, A.; Solé-Pi, O.; Stevens, A.; Urrutia, J.
An orthogonal polygon is called an ortho-unit polygon if its vertices have integer coordinates, and all of its edges have length one. In this paper we prove that any ortho-unit polygon with n ≥ 12 vertices can be guarded with at most ⌊(n − 4)/8⌋ guards, which is a tight bound.
</description>
<pubDate>Sun, 29 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159047</guid>
<dc:date>2024-12-29T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the Impacts of Swapping on the US Decennial Census</title>
<link>https://hdl.handle.net/1721.1/159046</link>
<description>Evaluating the Impacts of Swapping on the US Decennial Census
Ballesteros, María; Dwork, Cynthia; King, Gary; Olson, Conlan; Raghavan, Manish
To meet its dual burdens of providing useful statistics and ensuring privacy of individual respondents, the US Census Bureau has for decades introduced some form of "noise" into published statistics. Initially, they used a method known as "swapping" (1990-2010). In 2020, they switched to an algorithm called TopDown that ensures a form of Differential Privacy. While the TopDown algorithm has been made public, no implementation of swapping has been released and many details of the deployed swapping methodology have been kept secret. Further, the Bureau has not published (even a synthetic) "original" dataset and its swapped version. It is therefore difficult to evaluate the effects of swapping, and to compare these effects to those of other privacy technologies. To address these difficulties, we describe and implement a parameterized swapping algorithm based on Census publications, court documents, and informal interviews with Census employees. With this implementation, we characterize the impacts of swapping on a range of statistical quantities of interest. We provide intuition for the types of shifts induced by swapping and compare against those introduced by TopDown. We find that even when swapping and TopDown introduce errors of similar magnitude, the direction in which statistics are biased need not be the same across the two techniques. More broadly, our implementation provides researchers with the tools to analyze and potentially correct for the impacts of disclosure avoidance systems on the quantities they study.
CSLAW ’25, München, Germany
</description>
<pubDate>Tue, 25 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159046</guid>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</item>
<item>
<title>The André–Quillen cohomology of commutative monoids</title>
<link>https://hdl.handle.net/1721.1/159045</link>
<description>The André–Quillen cohomology of commutative monoids
Agrawalla, Bhavya; Khlaif, Nasief; Miller, Haynes
We observe that Beck modules for a commutative monoid are exactly modules over a graded commutative ring associated to the monoid. Under this identification, the Quillen cohomology of commutative monoids is a special case of the André–Quillen cohomology for graded commutative rings, generalizing a result of Kurdiani and Pirashvili. To verify this we develop the necessary grading formalism. The partial cochain complex developed by Pierre Grillet for computing Quillen cohomology appears as the start of a modification of the Harrison cochain complex suggested by Michael Barr.
</description>
<pubDate>Tue, 09 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159045</guid>
<dc:date>2024-04-09T00:00:00Z</dc:date>
</item>
<item>
<title>Higher dimensional Fourier quasicrystals from Lee–Yang varieties</title>
<link>https://hdl.handle.net/1721.1/159044</link>
<description>Higher dimensional Fourier quasicrystals from Lee–Yang varieties
Alon, Lior; Kummer, Mario; Kurasov, Pavel; Vinzant, Cynthia
In this paper, we construct Fourier quasicrystals with unit masses in arbitrary dimensions. This generalizes a one-dimensional construction of Kurasov and Sarnak. To do this, we employ a class of complex algebraic varieties avoiding certain regions in C n , which generalize hypersurfaces defined by Lee–Yang polynomials. We show that these are Delone almost periodic sets that have at most finite intersection with every discrete periodic set.
</description>
<pubDate>Mon, 16 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159044</guid>
<dc:date>2024-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>C2-Lusin approximation of strongly convex functions</title>
<link>https://hdl.handle.net/1721.1/159043</link>
<description>C2-Lusin approximation of strongly convex functions
Azagra, Daniel; Drake, Marjorie; Hajłasz, Piotr
We prove that if u : R n → R is strongly convex, then for every ε &gt; 0 there is a strongly convex function v ∈ C 2 ( R n ) such that | { u ≠ v } | &lt; ε and ∥ u − v ∥ ∞ &lt; ε .
</description>
<pubDate>Wed, 03 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159043</guid>
<dc:date>2024-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>MIND (Mixed-Initiative Next-gen Design): Workshop on Blending Agents and Direct Manipulation for Harnessing LLMs</title>
<link>https://hdl.handle.net/1721.1/159042</link>
<description>MIND (Mixed-Initiative Next-gen Design): Workshop on Blending Agents and Direct Manipulation for Harnessing LLMs
Dinakar, Karthik; Lieberman, Henry; Wu, Sonia
Since the 1980s, a key debate in human-centered computing involving machine learning at IUI has been between agent-driven systems and direct manipulation. The explosion of Large Language Models (LLMs), particularly auto-regressive models deployed as agents serving as chatbots, generative search, and work automation tools, has also brought with it inherent limitations. We posit that efforts to address and alleviate these LLM challenges—hallucinations, unpredictable outputs, lack of transparency, and difficulties in customization—cannot be solved through algorithmic improvements alone but require elevated mixed-initiative interface design at the heart of the IUI community. This workshop aims to bridge the gap between agent-driven automation and direct manipulation by exploring mixed-initiative interaction models that blend the strengths of both paradigms to empower end-users seeking to harness LLMs.
IUI Companion ’25, Cagliari, Italy
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159042</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Wilson spaces, Snaith constructions, and elliptic orientations</title>
<link>https://hdl.handle.net/1721.1/159041</link>
<description>Wilson spaces, Snaith constructions, and elliptic orientations
Chatham, Hood; Hahn, Jeremy; Yuan, Allen
We construct a canonical family of even periodic E ∞ -ring spectra, with exactly one member of the family for every prime p and chromatic height n . At height 1 our construction is due to Snaith, who built complex K -theory from CP ∞ . At height 2 we replace CP ∞ with a p -local retract of BU ⟨ 6 ⟩ , producing a new theory that orients elliptic, but not generic, height 2 Morava E -theories. In general our construction exhibits a kind of redshift, whereby BP ⟨ n − 1 ⟩ is used to produce a height n theory. A familiar sequence of Bocksteins, studied by Tamanoi, Ravenel, Wilson, and Yagita, relates the K ( n ) -localization of our height n ring to work of Peterson and Westerland building E n h S G ± from K ( Z , n + 1 ) .
</description>
<pubDate>Thu, 15 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159041</guid>
<dc:date>2024-02-15T00:00:00Z</dc:date>
</item>
<item>
<title>The d γ / 2 -Variation of Distance Profiles in γ -Liouville Quantum Gravity</title>
<link>https://hdl.handle.net/1721.1/159040</link>
<description>The d γ / 2 -Variation of Distance Profiles in γ -Liouville Quantum Gravity
Bhatia, Manan
For Brownian surfaces with boundary and an interior marked point, a natural observable to consider is the distance profile, defined as the process of distances from the marked point to a variable point lying on the boundary. When the boundary is parametrized by the natural length measure on it, this distance profile turns out to be locally absolutely continuous to Brownian motion, and as a result, the boundary length measure itself has a natural interpretation as the quadratic variation process of the distance profile. In this paper, we extend this interpretation to γ -Liouville quantum gravity ( γ -LQG), a one-parameter family of models of random geometry which is known to specialize to the case of Brownian geometry for the case γ = 8 / 3 . With d γ denoting the Hausdorff dimension of γ -LQG, we show that for a γ -LQG surface with boundary, the natural boundary length measure can be interpreted (up to a constant factor) as the d γ / 2 -variation process of the distance profile from an interior point.
</description>
<pubDate>Mon, 17 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159040</guid>
<dc:date>2025-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>On Modular Invariance of Quantum Affine W-Algebras</title>
<link>https://hdl.handle.net/1721.1/159039</link>
<description>On Modular Invariance of Quantum Affine W-Algebras
Kac, Victor G.; Wakimoto, Minoru
We find modular transformations of normalized characters for the following W-algebras: (a) W k min ( g ) , where g = D n ( n ≥ 4 ) , or E 6 , E 7 , E 8 , and k is a negative integer ≥ - 2 , or ≥ - h ∨ 6 - 1 , respectively; (b) quantum Hamiltonian reduction of the g ^ -module L ( k Λ 0 ) , where g is a simple Lie algebra, f is its non-zero nilpotent element, and k is a principal admissible level with the denominator u &gt; θ ( x ) , where 2x is the Dynkin characteristic of f, and θ is the highest root of g . We prove that these vertex algebras are modular invariant. A conformal vertex algebra V is called modular invariant if its character t r V q L 0 - c / 24 converges to a holomorphic modular function in the complex upper half-plane on a congruence subgroup. We find explicit formulas for their characters. Modular invariance of V is important since, in particular, conjecturally it implies that V is simple, and that V is rational, provided that it is lisse.
</description>
<pubDate>Sat, 18 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159039</guid>
<dc:date>2025-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>MemPal: Leveraging Multimodal AI and LLMs for Voice-Activated Object Retrieval in Homes of Older Adults</title>
<link>https://hdl.handle.net/1721.1/159037</link>
<description>MemPal: Leveraging Multimodal AI and LLMs for Voice-Activated Object Retrieval in Homes of Older Adults
Maniar, Natasha; Chan, Samantha; Zulfikar, Wazeer; Ren, Scott; Xu, Christine; Maes, Pattie
Older adults have increasing difficulty with retrospective memory, hindering their abilities to perform daily activities and posing stress on caregivers to ensure their wellbeing. Recent developments in Artificial Intelligence (AI) and large context-aware multimodal models offer an opportunity to create memory support systems that assist older adults with common issues like object finding. This paper discusses the development of an AI-based, wearable memory assistant, MemPal, that helps older adults with a common problem, finding lost objects at home, and presents results from tests of the system in older adults’ own homes. Using visual context from a wearable camera, the multimodal LLM system creates a real-time automated text diary of the person’s activities for memory support purposes, offering object retrieval assistance using a voice-based interface. The system is designed to support additional use cases like context-based proactive safety reminders and recall of past actions. We report on a quantitative and qualitative study with N=15 older adults within their own homes that showed improved performance of object finding with audio-based assistance compared to no aid and positive overall user perceptions on the designed system. We discuss further applications of MemPal’s design as a multi-purpose memory aid and future design guidelines to adapt memory assistants to older adults’ unique needs.
IUI ’25, Cagliari, Italy
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159037</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond the Holographic Entropy Cone via Cycle Flows</title>
<link>https://hdl.handle.net/1721.1/159036</link>
<description>Beyond the Holographic Entropy Cone via Cycle Flows
He, Temple; Hernández-Cuenca, Sergio; Keeler, Cynthia
Motivated by bit threads, we introduce a new prescription for computing entropy vectors outside the holographic entropy cone. By utilizing cycle flows on directed graphs, we show that the maximum cycle flow associated to any subset of vertices, which corresponds to a subsystem, manifestly obeys purification symmetry. Furthermore, by restricting ourselves to a subclass of directed graphs, we prove that the maximum cycle flow obeys both subadditivity and strong subadditivity, thereby establishing it as a viable candidate for the entropy associated to the subsystem. Finally, we demonstrate how our model generalizes the entropy vectors obtainable via conventional flows in undirected graphs, as well as conjecture that our model similarly generalizes the entropy vectors arising from hypergraphs.
</description>
<pubDate>Sat, 12 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159036</guid>
<dc:date>2024-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Parisi Formula for Balanced Potts Spin Glass</title>
<link>https://hdl.handle.net/1721.1/159035</link>
<description>Parisi Formula for Balanced Potts Spin Glass
Bates, Erik; Sohn, Youngtak
The Potts spin glass is a generalization of the Sherrington–Kirkpatrick (SK) model that allows for spins to take more than two values. Based on a novel synchronization mechanism, Panchenko (Ann Probab 46(2):829–864, 2018) showed that the limiting free energy is given by a Parisi-type variational formula. The functional order parameter in this formula is a probability measure on a monotone path in the space of positive-semidefinite matrices. By comparison, the order parameter for the SK model is much simpler: a probability measure on the unit interval. Nevertheless, a longstanding prediction by Elderfield and Sherrington (J Phys C Solid State Phys 16(15):L497–L503, 1983) is that the order parameter for the Potts spin glass can be reduced to that of the SK model. We prove this prediction for the balanced Potts spin glass, where the model is constrained so that the fraction of spins taking each value is asymptotically the same. It is generally believed that the limiting free energy of the balanced model is the same as that of the unconstrained model, in which case our results reduce the functional order parameter of Panchenko’s variational formula to probability measures on the unit interval. The intuitive reason—for both this belief and the Elderfield–Sherrington prediction—is that no spin value is a priori preferred over another, and the order parameter should reflect this inherent symmetry. This paper rigorously demonstrates how symmetry, when combined with synchronization, acts as the desired reduction mechanism. Our proof requires that we introduce a generalized Potts spin glass model with mixed higher-order interactions, which is interesting in its own right. We prove that the Parisi formula for this model is differentiable with respect to inverse temperatures. This is a key ingredient for guaranteeing the Ghirlanda–Guerra identities without perturbation, which then allow us to exploit symmetry and synchronization simultaneously.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159035</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Sharp Asymptotics for Arm Probabilities in Critical Planar Percolation</title>
<link>https://hdl.handle.net/1721.1/159034</link>
<description>Sharp Asymptotics for Arm Probabilities in Critical Planar Percolation
Du, Hang; Gao, Yifan; Li, Xinyi; Zhuang, Zijie
In this work, we consider critical planar site percolation on the triangular lattice and derive sharp estimates on the asymptotics of the probability of half-plane j-arm events for j ≥ 1 and planar (polychromatic) j-arm events for j &gt; 1 , building upon a recent, not yet peer-reviewed result of Binder and Richards (Convergence rates of random discrete model curves approaching SLE curves in the scaling limit. Preprint, 2020). These estimates greatly improve previous results and in particular answer (a large part of) a question of Schramm (ICM Proc., 2006).
</description>
<pubDate>Tue, 23 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159034</guid>
<dc:date>2024-07-23T00:00:00Z</dc:date>
</item>
<item>
<title>Pluto: Authoring Semantically Aligned Text and Charts for Data-Driven Communication</title>
<link>https://hdl.handle.net/1721.1/159033</link>
<description>Pluto: Authoring Semantically Aligned Text and Charts for Data-Driven Communication
Srinivasan, Arjun; Setlur, Vidya; Satyanarayan, Arvind
Textual content (including titles, annotations, and captions) plays a central role in helping readers understand a visualization by emphasizing, contextualizing, or summarizing the depicted data. Yet, existing visualization tools provide limited support for jointly authoring the two modalities of text and visuals such that both convey semantically-rich information and are cohesively integrated. In response, we introduce Pluto, a mixed-initiative authoring system that uses features of a chart’s construction (e.g., visual encodings) as well as any textual descriptions a user may have drafted to make suggestions about the content and presentation of the two modalities. For instance, a user can begin to type out a description and interactively brush a region of interest in the chart, and Pluto will generate a relevant auto-completion of the sentence. Similarly, based on a written description, Pluto may suggest lifting a sentence out as an annotation or the visualization’s title, or may suggest applying a data transformation (e.g., sort) to better align the two modalities. A preliminary user study revealed that Pluto’s recommendations were particularly useful for bootstrapping the authoring process and helped identify different strategies participants adopt when jointly authoring text and charts. Based on study feedback, we discuss design implications for integrating interactive verification features between charts and text, offering control over text verbosity and tone, and enhancing the bidirectional flow in unified text and chart authoring tools.
IUI ’25, Cagliari, Italy
</description>
<pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159033</guid>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</item>
<item>
<title>How do we interpret the outputs of a neural network trained on classification?</title>
<link>https://hdl.handle.net/1721.1/159032</link>
<description>How do we interpret the outputs of a neural network trained on classification?
Xie, Yudi
Deep neural networks are widely used for classification tasks, but the interpretation of their output activations is often unclear. This tutorial article explains how these outputs can be understood as approximations of the Bayesian posterior. We showed that, in theory, the loss function for classification tasks – derived by maximum likelihood – is minimized by the Bayesian posterior. We conducted empirical studies training neural networks to classify synthetic data from a known generative model. In a simple classification task, the network closely approximates the theoretically derived posterior. However, a few changes in the task can make accurate approximation much more difficult. The ability of the networks to approximate the posterior depends on multiple factors, such as the complexity of the posterior and whether there is sufficient data for learning.
Blogposts Track. ICLR 2025, 24-28 April, Singapore.
</description>
<pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159032</guid>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</item>
<item>
<title>Variations in approaches to urban climate adaptation: Experiences and experimentation from the global South</title>
<link>https://hdl.handle.net/1721.1/159021</link>
<description>Variations in approaches to urban climate adaptation: Experiences and experimentation from the global South
Anguelovski, Isabelle; Chu, Eric; Carmin, JoAnn
In recent years, an increasing number of local governments are recognizing the impact of climate change on different urban sectors. This has led many to pursue climate adaptation planning, seeking to achieve preparedness through reducing vulnerability and enhancing resilience of populations, assets, and municipal operations. Although cities typically share these common goals, many are electing to pursue different planning approaches. In this paper, we examine three climate adaptation planning approaches in the cities of Quito (Ecuador), Surat (India), and Durban (South Africa) and analyze the trade-offs associated with different planning pathways and different forms of stakeholder involvement. We assess the potentials and limitations of these different approaches, including their implications for enhancing government integration and coordination, promoting participation and adaptive capacity of vulnerable groups, and facilitating overall urban resilience. We find that, in order to gain widespread commitment on adaptation, sustained political leadership from the top, departmental engagement, and continued involvement from a variety of stakeholders are integral to effective decision-making and institutionalization of programs in the long run. When climate adaptation is advanced with a focus on learning, awareness, and capacity building, the process will likely lead to more sustained, legitimate, and comprehensive adaptation plans and policies that enhance the resilience of the most affected urban areas and residents.
</description>
<pubDate>Tue, 01 Jul 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159021</guid>
<dc:date>2014-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Explaining Progress in Climate Adaptation Planning Across 156 U.S. Municipalities</title>
<link>https://hdl.handle.net/1721.1/159020</link>
<description>Explaining Progress in Climate Adaptation Planning Across 156 U.S. Municipalities
Shi, Linda; Chu, Eric; Debats, Jessica
Problem, research strategy, and findings: Cities are increasingly experiencing the effects of climate change and taking steps to adapt to current and future natural hazard risks. Research on these efforts has identified numerous barriers to climate adaptation planning, but has not yet systematically evaluated the relative importance of different constraints for a large number of diverse cities. We draw on responses from 156 U.S. cities that participated in a 2011 global survey on local adaptation planning, 60% of which are planning for climate change. We use logistic regression analysis to assess the significance of 13 indicators measuring political leadership, fiscal and administrative resources, ability to obtain and communicate climate information, and state policies in predicting the status of adaptation planning. In keeping with the literature, we find that greater commitment from local elected officials, higher municipal expenditures per capita, and an awareness that the climate is already changing are associated with cities engaging in adaptation planning. The presence of state policies on climate adaptation is surprisingly not a statistically significant predictor, suggesting that current policies are not yet strong enough to increase local adaptation planning. However, the model's sampling bias toward larger and more environmentally progressive cities may mask the predictive power of state policies and other indicators. Takeaway for practice: State governments have an opportunity to increase local political commitment by integrating requirements for climate-risk evaluations into existing funding streams and investment plans. Regional planning entities also can help overcome the lack of local fiscal capacity and political support by facilitating the exchange of information, pooling and channeling resources, and providing technical assistance to local planners.
</description>
<pubDate>Fri, 03 Jul 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159020</guid>
<dc:date>2015-07-03T00:00:00Z</dc:date>
</item>
<item>
<title>The Wave Maps Equation and Brownian Paths</title>
<link>https://hdl.handle.net/1721.1/159019</link>
<description>The Wave Maps Equation and Brownian Paths
Bringmann, Bjoern; Lührmann, Jonas; Staffilani, Gigliola
We discuss the ( 1 + 1 ) -dimensional wave maps equation with values in a compact Riemannian manifold. Motivated by the Gibbs measure problem, we consider Brownian paths on the manifold as initial data. Our main theorem is the probabilistic local well-posedness of the associated initial value problem. The analysis in this setting combines analytic, geometric, and probabilistic methods.
</description>
<pubDate>Fri, 23 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159019</guid>
<dc:date>2024-02-23T00:00:00Z</dc:date>
</item>
<item>
<title>Iterative regularization for low complexity regularizers</title>
<link>https://hdl.handle.net/1721.1/159018</link>
<description>Iterative regularization for low complexity regularizers
Molinari, Cesare; Massias, Mathurin; Rosasco, Lorenzo; Villa, Silvia
Iterative regularization exploits the implicit bias of optimization algorithms to regularize ill-posed problems. Constructing algorithms with such built-in regularization mechanisms is a classic challenge in inverse problems but also in modern machine learning, where it provides both a new perspective on algorithm analysis, and significant speed-ups compared to explicit regularization. In this work, we propose and study the first iterative regularization procedure with explicit computational steps able to handle biases described by non-smooth and non-strongly convex functionals, prominent in low-complexity regularization. Our approach is based on a primal-dual algorithm of which we analyze convergence and stability properties, even in the case where the original problem is infeasible. The general results are illustrated considering the special case of sparse recovery with the ℓ 1 penalty. Our theoretical results are complemented by experiments showing the computational benefits of our approach.
</description>
<pubDate>Sat, 10 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159018</guid>
<dc:date>2024-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Shuffle algebras for quivers as quantum groups</title>
<link>https://hdl.handle.net/1721.1/159017</link>
<description>Shuffle algebras for quivers as quantum groups
Neguț, Andrei; Sala, Francesco; Schiffmann, Olivier
We define a quantum loop group U Q + associated to an arbitrary quiver Q = ( I , E ) and maximal set of deformation parameters, with generators indexed by I × Z and some explicit quadratic and cubic relations. We prove that U Q + is isomorphic to the (generic, small) shuffle algebra associated to the quiver Q and hence, by Neguț (Shuffle algebras for quivers and wheel conditions. arXiv:2102.11269 ), to the localized K-theoretic Hall algebra of Q. For the quiver with one vertex and g loops, this yields a presentation of the spherical Hall algebra of a (generic) smooth projective curve of genus g [invoking the results of Schiffmann and Vasserot (Math Ann 353(4):1399–1451, 2012)]. We extend the above results to the case of non-generic parameters satisfying a certain natural metric condition. As an application, we obtain a description by generators and relations of the subalgebra generated by absolutely cuspidal eigenforms of the Hall algebra of an arbitrary smooth projective curve [invoking the results of Kapranov et al. (Sel Math (NS) 23(1):117–177, 2017)].
</description>
<pubDate>Fri, 20 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159017</guid>
<dc:date>2024-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>Resolvent analysis of swirling turbulent jets</title>
<link>https://hdl.handle.net/1721.1/159016</link>
<description>Resolvent analysis of swirling turbulent jets
Chevalier, Quentin; Douglas, Christopher M.; Lesshafft, Lutz
This study explores coherent structures in a swirling turbulent jet. Stationary axisymmetric solutions of the Reynolds–Averaged Navier–Stokes equations at R e = 200 , 000 were obtained using an open source computational fluid dynamics code and the Spalart–Allmaras eddy viscosity model. Then, resolvent analysis with the same eddy viscosity field provided coherent structures of the turbulent fluctuations on the base flow. As in many earlier studies, a large gain separation is identified between the optimal and sub-optimal resolvent modes, permitting a focus on the most amplified response mode and its corresponding optimal forcing. At zero swirl, the results indicate that the jet’s coherent response is dominated by axisymmetric ( m = 0 ) structures, which are driven by the usual Kelvin–Helmholtz shear amplification mechanism. However, as swirl is increased, different coherent structures begin to dominate the response. For example, double and triple spiral ( | m | = 2 and | m | = 3 ) modes are identified as the dominant structures when the axial and azimuthal velocity maxima of the base flow are comparable. In this case, distinct co- and counter-rotating | m | = 2 modes experience vastly different degrees of amplification. The physics of this selection process involve several amplification mechanisms contributing simultaneously in different regions of the mode. This is analysed in more detail by comparing the alignment between the wavevector of the dominant response mode and the principal shear direction of the base flow. Additional discussion also considers the development of structures along the exterior of the jet nozzle.
</description>
<pubDate>Thu, 13 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159016</guid>
<dc:date>2024-06-13T00:00:00Z</dc:date>
</item>
<item>
<title>The mental health toll of the Great Migration: a comparison of mental health outcomes among descendants of African American migrators</title>
<link>https://hdl.handle.net/1721.1/159015</link>
<description>The mental health toll of the Great Migration: a comparison of mental health outcomes among descendants of African American migrators
Vu, Cecilia; C. Arcaya, Mariana; Kawachi, Ichiro; Williams, David
Introduction Research is beginning to examine the health outcomes of migrators of the Great Migration, a movement of up to eight million African Americans from the South to the North and West during the twentieth century. However, sparse evidence exists studying the health outcomes of the descendants of Great Migration movers. The aim of this study was to compare the lifetime prevalence of mental health disorders by migration status. Methods We used a sample of 3183 African American adults from the National Survey of American Life (2001–2003). Using birthplaces of participants and their mothers, we classified adults as (1) Southern stayers, (2) migrators to the South, (3) migrators to the North or (4) Northern stayers. The outcomes were lifetime prevalence of any mental health, mood, anxiety, and substance use disorders. We used weighted log-Poisson regression models and adjusted for demographic characteristics and socioeconomic status. Results Migrators to the North and Northern stayers had higher risks of any lifetime mental health, mood, anxiety, and substance use disorders compared to Southern stayers in the adjusted models. Migrators to the North and Northern stayers were more likely to report perceived discrimination. Conclusion This study suggests that families that migrated to the North may have experienced mental health adversities.
</description>
<pubDate>Wed, 17 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159015</guid>
<dc:date>2024-01-17T00:00:00Z</dc:date>
</item>
<item>
<title>Scientific Advancements in Gene Therapies: Opportunities for Global Regulatory Convergence</title>
<link>https://hdl.handle.net/1721.1/159014</link>
<description>Scientific Advancements in Gene Therapies: Opportunities for Global Regulatory Convergence
Olaghere, Jimi; Williams, David A.; Farrar, Jeremy; Büning, Hildegard; Calhoun, Cecelia; Ho, Tony; Inamdar, Maneesha S.; Liu, David; Makani, Julie; Nyarko, Kwasi; Ruiz, Sol; Tisdale, John; McCune, Joseph M.; Boadi, Esther; Reagan-Udall Foundation for the FDA
On 4 September 2024, the Reagan-Udall Foundation for the FDA (FDA Foundation) in collaboration with the Food and Drug Administration (FDA) and the Gates Foundation hosted a workshop titled “Scientific Advancements in Gene Therapies: Opportunities for Global Regulatory Convergence”. The event brought together a diverse group of experts, including international regulatory bodies, regulated industries, healthcare professionals, patients, academic researchers and global health advocates, to discuss the rapid advancements in gene therapy and the pressing need for equitable access in low-and middle-income countries (LMICs), with sickle cell disease (SCD) serving as the model disorder for the discussions. Although there has been significant progress in gene therapy, such as breakthroughs in clustered regularly interspaced short palindromic repeats (CRISPR)-based technologies and FDA-approved therapies, access to these therapies remains limited in underresourced regions. The workshop addressed critical challenges, including the high cost of therapies, regulatory gaps and barriers and ethical concerns regarding informed consent and public engagement in LMICs. This paper highlights the critical discussion points from the workshop with a focus on exploring strategies for global regulatory convergence, the role of international collaborations and the potential pathways to making gene therapies affordable and accessible to all.
</description>
<pubDate>Thu, 20 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159014</guid>
<dc:date>2025-03-20T00:00:00Z</dc:date>
</item>
<item>
<title>Brain Markers of Resilience to Psychosis in High-Risk Individuals: A Systematic Review and Label-Based Meta-Analysis of Multimodal MRI Studies</title>
<link>https://hdl.handle.net/1721.1/159013</link>
<description>Brain Markers of Resilience to Psychosis in High-Risk Individuals: A Systematic Review and Label-Based Meta-Analysis of Multimodal MRI Studies
Collin, Guusje; Goldenberg, Joshua E.; Chang, Xiao; Qi, Zhenghan; Whitfield-Gabrieli, Susan; Cahn, Wiepke; Wang, Jijun; Stone, William S.; Keshavan, Matcheri S.; Shenton, Martha E.
Background/Objectives: Most individuals who have a familial or clinical risk of developing psychosis remain free from psychopathology. Identifying neural markers of resilience in these at-risk individuals may help clarify underlying mechanisms and yield novel targets for early intervention. However, in contrast to studies on risk biomarkers, studies on neural markers of resilience to psychosis are scarce. The current study aimed to identify potential brain markers of resilience to psychosis. Methods: A systematic review of the literature yielded a total of 43 MRI studies that reported resilience-associated brain changes in individuals with an elevated risk for psychosis. Label-based meta-analysis was used to synthesize findings across MRI modalities. Results: Resilience-associated brain changes were significantly overreported in the default mode and language network, and among highly connected and central brain regions. Conclusions: These findings suggest that the DMN, language-associated areas, and central brain hubs may be hotspots for resilience-associated brain changes. These neural systems are thus of key interest as targets of inquiry and, possibly, intervention in at-risk populations.
</description>
<pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159013</guid>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</item>
<item>
<title>The Design and Deployment of a Self-Powered, LoRaWAN-Based IoT Environment Sensor Ensemble for Integrated Air Quality Sensing and Simulation</title>
<link>https://hdl.handle.net/1721.1/159012</link>
<description>The Design and Deployment of a Self-Powered, LoRaWAN-Based IoT Environment Sensor Ensemble for Integrated Air Quality Sensing and Simulation
Wijeratne, Lakitha O. H.; Kiv, Daniel; Waczak, John; Dewage, Prabuddha; Balagopal, Gokul; Iqbal, Mazhar; Aker, Adam; Fernando, Bharana; Lary, Matthew; Sooriyaarachchi, Vinu; Patra, Rittik; Desmond, Nora; Zabiepour, Hannah; Xi, Darren; Agnihotri, Vardhan; Lee, Seth; Simmons, Chris; Lary, David J.
The goal of this study is to describe a design architecture for a self-powered IoT (Internet of Things) sensor network that is currently being deployed at various locations throughout the Dallas-Fort Worth metroplex to measure and report on Particulate Matter (PM) concentrations. This system leverages diverse low-cost PM sensors, enhanced by machine learning for sensor calibration, with LoRaWAN connectivity for long-range data transmission. Sensors are GPS-enabled, allowing precise geospatial mapping of collected data, which can be integrated with urban air quality forecasting models and operational forecasting systems. To achieve energy self-sufficiency, the system uses a small-scale solar-powered solution, allowing it to operate independently from the grid, making it both cost-effective and suitable for remote locations. This novel approach leverages multiple operational modes based on power availability to optimize energy efficiency and prevent downtime. By dynamically adjusting system behavior according to power conditions, it ensures continuous operation while conserving energy during periods of reduced supply. This innovative strategy significantly enhances performance and resource management, improving system reliability and sustainability. This IoT network provides localized real-time air quality data, which has significant public health benefits, especially for vulnerable populations in densely populated urban environments. The project demonstrates the synergy between IoT sensor data, machine learning-enhanced calibration, and forecasting methods, contributing to scientific understanding of microenvironments, human exposure, and public health impacts of urban air quality. In addition, this study emphasizes open source design principles, promoting transparency, data quality, and reproducibility by exploring cost-effective sensor calibration techniques and adhering to open data standards. 
The next iteration of the sensors will include edge processing for short-term air quality forecasts. This work underscores the transformative role of low-cost sensor networks in urban air quality monitoring, advancing equitable policy development and empowering communities to address pollution challenges.
</description>
<pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/159012</guid>
<dc:date>2025-03-12T00:00:00Z</dc:date>
</item>
<item>
<title>Performance Analysis for High-Dimensional Bell-State Quantum Illumination</title>
<link>https://hdl.handle.net/1721.1/158998</link>
<description>Performance Analysis for High-Dimensional Bell-State Quantum Illumination
Shapiro, Jeffrey H.
Quantum illumination (QI) is an entanglement-based protocol for improving LiDAR/radar detection of unresolved targets beyond what a classical LiDAR/radar of the same average transmitted energy can do. Originally proposed by Seth Lloyd as a discrete-variable quantum LiDAR, it was soon shown that his proposal offered no quantum advantage over its best classical competitor. Continuous-variable, specifically Gaussian-state, QI has been shown to offer a true quantum advantage, both in theory and in table-top experiments. Moreover, despite its considerable drawbacks, the microwave version of Gaussian-state QI continues to attract research attention. A recent QI study by Armanpreet Pannu, Amr Helmy, and Hesham El Gamal (PHE), however, has: (i) combined the entangled state from Lloyd’s QI with the channel models from Gaussian-state QI; (ii) proposed a new positive operator-valued measurement for that composite setup; and (iii) claimed that, unlike Gaussian-state QI, PHE QI achieves the Nair–Gu lower bound on QI target-detection error probability at all noise brightnesses. PHE’s analysis was asymptotic, i.e., it presumed infinite-dimensional entanglement. The current paper works out the finite-dimensional performance of PHE QI. It shows that there is a threshold value for the entangled-state dimensionality below which there is no quantum advantage, and above which the Nair–Gu bound is approached asymptotically. Moreover, with both systems operating with error-probability exponents 1 dB lower than the Nair–Gu bound, PHE QI requires enormously higher entangled-state dimensionality than does Gaussian-state QI to achieve useful error probabilities in both high-brightness (100 photons/mode) and moderate-brightness (1 photon/mode) noise. Furthermore, neither system has an appreciable quantum advantage in low-brightness (much less than 1 photon/mode) noise.
</description>
<pubDate>Mon, 03 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158998</guid>
<dc:date>2025-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Choice Vectors: Streamlining Personal AI Alignment Through Binary Selection</title>
<link>https://hdl.handle.net/1721.1/158997</link>
<description>Choice Vectors: Streamlining Personal AI Alignment Through Binary Selection
Watson, Eleanor; Nguyen, Minh; Pan, Sarah; Zhang, Shujun
Value alignment for AI is not “one-size-fits-all”: even polite and friendly models can still fail to represent individual user contexts and preferences, and local cultural norms. This paper presents a modular workflow for personal fine-tuning, synthesizing four core components from our previous research: (1) robust vectorization of user values and preferences, (2) a binary choice user interface (UI) approach to capturing those preferences with minimal cognitive load, (3) contrastive activation methods for steering large language models (LLMs) via difference vectors, and (4) knowledge graph integration for more auditable and structured alignment. Our approach—descended from past research on “Towards an End-to-End Personal Fine-Tuning Framework”—demonstrates how these elements can be combined to create personalized, context-rich alignment solutions. We report on user studies for the forced-choice UI, describe an experimental pipeline for deriving “control vectors”, and propose a “moral graph” method for bridging symbolic and vector-based alignment. Our findings suggest that multi-pronged personalization can significantly reduce user annotation fatigue, improve alignment fidelity, and allow for more flexible, interpretable AI behaviors.
</description>
<pubDate>Mon, 03 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158997</guid>
<dc:date>2025-03-03T00:00:00Z</dc:date>
</item>
<item>
<title>Hands-On Quantum Cryptography: Experimentation with the B92 Protocol Using Pulsed Lasers</title>
<link>https://hdl.handle.net/1721.1/158996</link>
<description>Hands-On Quantum Cryptography: Experimentation with the B92 Protocol Using Pulsed Lasers
Gandelman, Sara P.; Maslennikov, Alona; Rozenman, Georgi Gary
Quantum cryptography continues to be an area of significant research and educational interest. Here, a straightforward and reliable approach to both the experimental and theoretical aspects of quantum key distribution is presented, tailored for senior undergraduate students. Focusing on illustrating the essential concepts of the B92 protocol through a combination of optical experiments and custom-developed computational tools, this work offers a thorough exploration of quantum cryptography according to the principles of the B92 protocol.
</description>
<pubDate>Fri, 28 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158996</guid>
<dc:date>2025-02-28T00:00:00Z</dc:date>
</item>
<item>
<title>A Model of Decadal Middle-Latitude Atmosphere–Ocean Coupled Modes</title>
<link>https://hdl.handle.net/1721.1/158995</link>
<description>A Model of Decadal Middle-Latitude Atmosphere–Ocean Coupled Modes
Goodman, Jason; Marshall, John
An analytical model of the mutual interaction of the middle-latitude atmosphere and ocean is formulated and studied. The model is found to support coupled modes in which oceanic baroclinic Rossby waves of decadal period grow through positive coupled feedback between the thermal forcing of the atmosphere induced by SST anomalies and the resulting wind stress forcing of the ocean. Growth only occurs if the atmospheric response to thermal forcing is equivalent barotropic, with a particular phase relationship with the underlying SST anomalies. The dependence of the growth rate and structure of the modes on the nature of the assumed physics of air-sea interaction is explored, and their possible relation to observed phenomena discussed.
</description>
<pubDate>Mon, 01 Feb 1999 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158995</guid>
<dc:date>1999-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Structurally Similar Mycotoxins Aflatoxin B1 and Sterigmatocystin Trigger Different and Distinctive High-Resolution Mutational Spectra in Mammalian Cells</title>
<link>https://hdl.handle.net/1721.1/158993</link>
<description>Structurally Similar Mycotoxins Aflatoxin B1 and Sterigmatocystin Trigger Different and Distinctive High-Resolution Mutational Spectra in Mammalian Cells
Thongararm, Pennapa; Chancharoen, Marisa; Suwanwong, Nutchapong; Ruchirawat, Somsak; Ruchirawat, Mathuros; Fedeles, Bogdan I.; Croy, Robert G.; Essigmann, John M.
Aflatoxin B1 (AFB1) and sterigmatocystin (ST) are mycotoxins that pose significant threats to human and animal health owing to their mutagenic, carcinogenic, and toxic properties. They are structurally similar and widely believed to exert their biological effects via the generation of DNA-damaging epoxides at their respective terminal furan rings. Despite structural identity in the warhead portion of each toxin, this work shows that distal parts of each molecule are responsible for the distinctive mutational fingerprints seen in gptΔ C57BL/6J mouse embryo fibroblasts (MEFs). The two toxins differ structurally in the puckered cyclopentenone ring of AFB1 and in the planar xanthone functionality of ST. While both toxins mainly induce GC→TA mutations, the aforementioned differences in structure apparently trigger unique patterns of mutations, as revealed by high-resolution duplex sequencing of MEF genomes. AFB1 is more mutagenic than ST and displays its transversion mutations in a pattern with primary and secondary hotspots (underscored) in 5′-CGC-3′ and 5′-CGG-3′ contexts, respectively. ST displays a modest 5′-CGG-3′ hotspot while its other GC→TA transversions are more uniformly distributed in a pattern resembling established oxidative stress mutational spectra. This research delineates the mutational spectra of AFB1 and ST, establishing these patterns as possible early-onset biomarkers of exposure.
</description>
<pubDate>Thu, 27 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158993</guid>
<dc:date>2025-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>The Influence of Religiosity on Muslim Women’s Selection of Fund Providers in Malaysia</title>
<link>https://hdl.handle.net/1721.1/158992</link>
<description>The Influence of Religiosity on Muslim Women’s Selection of Fund Providers in Malaysia
Bouzekouk, Salim; Mansor, Fadillah
The purpose of this study is to analyze the factors influencing the attitudes of women investors in the context of Islamic unit trust funds in Malaysia, with a focus on women’s religiosity and on the perceived religiosity of fund providers. Using the UTAUT model, the study examines data from a survey of 263 Muslim women in Malaysia and considers seven key factors: risk aversion, religiosity, price sensitivity, and Islamic financial literacy on the side of the investing women and past performance, perceived religiosity, and perceived risk on the side of the fund providers. The findings indicate that the perceived religiosity of a fund provider has a significant and positive impact on attitude, with positive moderating effects on the women’s own religiosity and Islamic financial literacy, and a negative moderating effect on the women’s price sensitivity. The study also discusses the practical implications of these findings and offers recommendations for fund providers.
</description>
<pubDate>Wed, 26 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158992</guid>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</item>
<item>
<title>Diffusion and Percolation: How COVID-19 Spread Through Populations</title>
<link>https://hdl.handle.net/1721.1/158991</link>
<description>Diffusion and Percolation: How COVID-19 Spread Through Populations
Harris, Jeffrey E.
I rely on the key concepts of diffusion and percolation to characterize the sequential but overlapping phases of the spread of infection through entire populations during the first year of the COVID-19 pandemic. Data from Los Angeles County demonstrate an extended initial diffusion phase propelled by radial geographic spread, followed by percolation within hotspots fueled by the presence of multigenerational households. Data from New York City, by contrast, reveal rapid initial diffusion along a unique, extensive subway network. Subsequent percolation within multiple hotspots, similarly powered by a high density of multigenerational households, exerted a positive feedback effect that further enhanced diffusion. Data from Florida counties support the generality of the phenomenon of viral transmission from more mobile, younger individuals to less mobile, older individuals. Data from the South Brooklyn hotspot reveal the limitations of some forms of government regulation in controlling mobility patterns that were critical to the continued percolation of the viral infection. Data from a COVID-19 outbreak at the University of Wisconsin–Madison demonstrate the critical role of a cluster of off-campus bars as an attractor for the continued percolation of infection. The evidence also demonstrates the efficacy of quarantine as a control strategy when the hotspot is contained and well identified.
</description>
<pubDate>Thu, 20 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158991</guid>
<dc:date>2025-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Point-of-Care No-Specimen Diagnostic Platform Using Machine Learning and Raman Spectroscopy: Proof-of-Concept Studies for Both COVID-19 and Blood Glucose</title>
<link>https://hdl.handle.net/1721.1/158990</link>
<description>Point-of-Care No-Specimen Diagnostic Platform Using Machine Learning and Raman Spectroscopy: Proof-of-Concept Studies for Both COVID-19 and Blood Glucose
Chefitz, Allen B.; Singh, Rohit; Birch, Thomas; Yang, Yongwu; Hussain, Arib; Chefitz, Gabriella
Significance: We describe a novel, specimen-free diagnostic platform that can immediately detect either a metabolite (glucose) or an infection (COVID-19) non-invasively using Raman spectroscopy and machine learning. Aim: Current diagnostic testing for infections and glucose monitoring requires specimens, disease-specific reagents and processing, and it increases environmental waste. We propose a new hardware–software paradigm by designing and constructing a finger-scanning hardware device to acquire Raman spectroscopy readouts which, by varying the machine learning algorithm used to interpret the data, allows for diverse diagnoses. Approach: A total of 455 patients were enrolled prospectively in the COVID-19 study; 148 tested positive and 307 tested negative through nasal PCR testing conducted concurrently with testing using our viral detector. The tests were performed on both outpatients (N = 382) and inpatients (N = 73) at Holy Name Medical Center in Teaneck, NJ, between June 2021 and August 2022. Patients’ fingers were scanned using an 830 nm Raman System and then, using machine learning, processed to provide an immediate result. In a separate study between April 2023 and August 2023, measurements using the same device and scanning a finger were used to detect blood glucose levels. Using a Dexcom sensor and an Accu-Chek device as references, a cross-validation-based regression of 205 observations of blood glucose was performed with a machine learning algorithm. Results: In a five-fold cross-validation analysis (including asymptomatic patients), a machine learning classifier using the Raman spectra as input achieved a specificity for COVID-19 of 0.837 at a sensitivity of 0.80 and an area under receiver operating curve (AUROC) of 0.896.
However, when the data were split by time, with training data consisting of observations before 1 July 2022 and test data consisting of observations after it, the model achieved an AUROC of 0.67, with 0.863 sensitivity at a specificity of 0.517. This decrease in AUROC may be due to substantial domain shift as the virus evolves. A similar five-fold cross-validation analysis of Raman glucose detection produces an area under precision–recall curve (AUPR) of 0.58. Conclusions: The combination of Raman spectroscopy, AI/ML, and our patient interface, which admits only a patient’s finger and uses no specimen, offers unprecedented flexibility in introducing new diagnostic tests or adapting existing ones. As the ML algorithm can be iteratively re-trained with new data and the software deployed to field devices remotely, it promises to be a valuable tool for detecting rapidly emerging infectious outbreaks and disease-specific biomarkers, such as glucose.
</description>
<pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158990</guid>
<dc:date>2025-02-19T00:00:00Z</dc:date>
</item>
<item>
<title>Long-Term Ageing Studies on Eco-Friendly Resistive Plate Chamber Detectors</title>
<link>https://hdl.handle.net/1721.1/158989</link>
<description>Long-Term Ageing Studies on Eco-Friendly Resistive Plate Chamber Detectors
Abbrescia, Marcello; Aielli, Giulio; Aly, Reham; Arena, Maria Cristina; Barroso Ferreira, Mapse; Benussi, Luigi; Bianco, Stefano; Bordon, Fabio; Boscherini, Davide; Bruni, Alessia; Buontempo, Salvatore; Busato, Mattia; Camarri, Paolo; Cardarelli, Roberto; Congedo, Liliana; De Serio, Marilisa; Debernardis, Francesco; Di Ciaccio, Anna; Di Stante, Luigi; Dupieux, Pascal
In high-energy physics, resistive plate chamber (RPC) detectors operating in avalanche mode make use of a high-performance gas mixture. Its main component, Tetrafluoroethane (C2H2F4), is classified as a fluorinated greenhouse gas. The RPC EcoGas@GIF++ collaboration is pursuing an intensive R&amp;D on new gas mixtures for RPCs to explore eco-friendly alternatives complying with recent European regulations. The performance of different RPC detectors has been evaluated at the CERN Gamma Irradiation Facility with Tetrafluoropropene (C3H2F4)-CO2-based gas mixtures. A long-term ageing test campaign was launched in 2022, and since 2023, systematic long-term performance studies have been carried out thanks to dedicated beam tests. The results of these studies are discussed together with their future perspectives.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158989</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>Emerging membrane technologies for sustainable lithium extraction from brines and leachates: Innovations, challenges, and industrial scalability</title>
<link>https://hdl.handle.net/1721.1/158972</link>
<description>Emerging membrane technologies for sustainable lithium extraction from brines and leachates: Innovations, challenges, and industrial scalability
Foo, Zi Hao; Lienhard, John H
This perspective critically examines challenges in advancing membrane-based technologies for lithium extraction from industrial brines, salt lakes, and battery leachates. The rapidly rising deployment of electric vehicles and renewable energy systems has intensified global lithium demand, necessitating sustainable and efficient extraction methods. Traditional techniques like brine evaporation and hard rock mining are environmentally detrimental due to high water usage, ecological disruption, and significant carbon emissions, compounded by geopolitical risks from resource concentration. Emerging membrane technologies, utilizing lithium-selective ligands, biomimetic ion channels, and two-dimensional and porous materials, can potentially realize orders-of-magnitude improvements in lithium selectivity for direct lithium extraction (DLE). However, the effectiveness of DLE membranes is constrained by impurity co-extraction, environmental hazards, lack of scalability and material instability. Conventional lithium brine concentration (LBC) techniques, which complement DLE by concentrating lithium for downstream applications like battery production, face challenges in hypersaline environments, such as fouling and reduced selectivity. Advances in electrodialysis and nanofiltration with surface modifications offer promising solutions to sustain favorable monovalent selectivity under high salinity conditions. Key gaps in the current research landscape include the absence of standardized testing procedures, evaluation metrics poorly suited to hypersaline or multi-ionic environments, scalability challenges in manufacturing, and economic limitations arising from fouling and material degradation. Addressing these issues requires material characterization with representative solution compositions, the development of comprehensive evaluation frameworks, and strategies for co-extracting valuable metals to improve economic viability. 
A holistic focus on membrane manufacturability, material durability, and process integration is essential to unlock sustainable lithium extraction technologies that can support the global shift to clean energy.
</description>
<pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158972</guid>
<dc:date>2025-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can “stranded” Fossil Fuel Reserves Drive CCS Deployment?</title>
<link>https://hdl.handle.net/1721.1/158909</link>
<description>Can “stranded” Fossil Fuel Reserves Drive CCS Deployment?
Clark, Victoria R; Herzog, Howard J
Recent studies have evaluated the climate change implications of burning all of the world's proven reserves of carbon. To stay below the ambitious target of two degrees Celsius of warming above average pre-industrial temperatures, the International Energy Agency (IEA) estimates that we would need to emit no more than 884 GtCO2 globally between 2012 and 2050, equivalent to burning approximately one third of current global carbon reserves. This would require leaving large amounts of coal, oil and natural gas in the ground. These unutilized fossil reserves have been referred to as "stranded". In this paper, we analyze CCS not as a cost, but as a potential enabler of utilizing otherwise stranded fossil fuels. We examine case studies at Boundary Dam and Gorgon, introduce a "CO2 Normalized Price" as a useful metric for bottom-up assessments, and evaluate top-down model results to help value CCS as a way to rescue stranded fossil fuel assets.
</description>
<pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158909</guid>
<dc:date>2014-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hard-to-Abate Sectors: The role of industrial carbon capture and storage (CCS) in emission mitigation</title>
<link>https://hdl.handle.net/1721.1/158908</link>
<description>Hard-to-Abate Sectors: The role of industrial carbon capture and storage (CCS) in emission mitigation
Paltsev, Sergey; Morris, Jennifer; Kheshgi, Haroon; Herzog, Howard
Carbon capture and storage (CCS) technology is an important option in the portfolio of emission mitigation solutions in scenarios that lead to deep reductions in greenhouse gas (GHG) emissions. We focus on CCS application in hard-to-abate sectors (cement industry, iron and steel, chemicals) and introduce industrial CCS options into the MIT Economic Projection and Policy Analysis (EPPA) model, a global multi-region multi-sector energy-economic model that provides a basis for the analysis of long-term energy deployment. We use the EPPA model to explore the potential for industrial CCS in different parts of the world, under the assumptions that CCS is the only mitigation option for deep GHG emission reductions in industry and that negative emission options are not available for other sectors of the economy. We evaluate CCS deployment in a scenario that limits the increase in average global surface temperature to 2 °C above preindustrial levels. When industrial CCS is not available, global costs of reaching the target are higher by 12% in 2075 and 71% in 2100 relative to the cost of achieving the policy with CCS. Overall, industrial CCS enables continued growth in the use of energy-intensive goods along with large reductions in global and sectoral emissions. We find that in scenarios with stringent climate policy, CCS in the industry sector is a key mitigation option, and our approach provides a path to projecting the deployment of industrial CCS across industries.
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158908</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The COVID-19 effect on the Paris agreement</title>
<link>https://hdl.handle.net/1721.1/158907</link>
<description>The COVID-19 effect on the Paris agreement
Reilly, John M; Chen, Y-H Henry; Jacoby, Henry D
The pandemic and efforts to control it are causing sharp reductions in global economic activity and associated fossil energy use, with unknown influence on longer-term efforts to limit greenhouse gas emissions under the Paris Climate Agreement. To explore this effect, estimates of economic recession and recovery in near-term months are extended to cover a return to full employment in future years, to be compared with an estimate of growth had COVID-19 not occurred. On the assumption that the Paris emissions pledges for 2020 will be met in any case, projections of global emissions with and without the pandemic show that, through its growth impact alone, it will yield only a small effect on emissions in 2030 and beyond. Other COVID legacies may include residual influences in patterns of consumption and travel, and the direction of recovery funds to low carbon investments. Most important, however, will be the effect of the economic shocks on the willingness of nations to meet (or augment) their existing Paris emissions pledges. The main effect of the pandemic on the threat of climate change, therefore, will be not its growth impact but its influence on national commitments to action.
</description>
<pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158907</guid>
<dc:date>2021-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase Speed Spectra and the Latitude of Surface Westerlies: Interannual Variability and Global Warming Trend</title>
<link>https://hdl.handle.net/1721.1/158906</link>
<description>Phase Speed Spectra and the Latitude of Surface Westerlies: Interannual Variability and Global Warming Trend
Chen, Gang; Lu, Jian; Frierson, Dargan M. W.
The extratropical annular-mode-like atmospheric responses to ENSO and global warming and the internal variability of annular modes are associated with similar, yet distinct, dynamical characteristics. In particular, La Niña, global warming, and the positive phase of annular modes are all associated with a poleward shift of midlatitude jet streams and surface westerlies. To improve understanding of these phenomena, the authors identify and compare patterns of interannual variability and global warming trends in the midlatitude surface westerlies and the space–time spectra of associated eddy momentum fluxes by analyzing simulations of the present climate in an atmosphere-only climate model, in which the ENSO-induced extratropical response is validated with that in reanalysis data, and by projection of future climate changes using a coupled atmosphere–ocean model.&#13;
&#13;
While the response to ENSO is consistent with the refraction of midlatitude eddies due to subtropical wind anomalies, the interannual internal variability of the annular modes marks a change in the eastward propagation speed of midlatitude eddies. In response to global warming, the dominant eddies exhibit a trend toward faster eddy phase speeds in both hemispheres, in a manner similar to the positive phase of interannual internal variability. These diagnoses suggest that the annular mode trend due to greenhouse gas increases may be more related to extratropical processes, especially in the upper troposphere/lower stratosphere, rather than being forced from the deep tropics.
</description>
<pubDate>Sat, 15 Nov 2008 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158906</guid>
<dc:date>2008-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Ocean colour signature of climate change</title>
<link>https://hdl.handle.net/1721.1/158784</link>
<description>Ocean colour signature of climate change
Dutkiewicz, Stephanie; Hickman, Anna E; Jahn, Oliver; Henson, Stephanie; Beaulieu, Claudie; Monier, Erwan
Monitoring changes in marine phytoplankton is important as they form the foundation of the marine food web and are crucial in the carbon cycle. Often Chlorophyll-a (Chl-a) is used to track changes in phytoplankton, since there are global, regular satellite-derived estimates. However, satellite sensors do not measure Chl-a directly. Instead, Chl-a is estimated from remote sensing reflectance (RRS): the ratio of upwelling radiance to the downwelling irradiance at the ocean’s surface. Using a model, we show that RRS in the blue-green spectrum is likely to have a stronger and earlier climate-change-driven signal than Chl-a. This is because RRS has lower natural variability and integrates not only changes to in-water Chl-a, but also alterations in other optically important constituents. Phytoplankton community structure, which strongly affects ocean optics, is likely to show one of the clearest and most rapid signatures of changes to the base of the marine ecosystem.
</description>
<pubDate>Tue, 01 Jan 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158784</guid>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inclusive approaches to urban climate adaptation planning and implementation in the Global South</title>
<link>https://hdl.handle.net/1721.1/158783</link>
<description>Inclusive approaches to urban climate adaptation planning and implementation in the Global South
Chu, Eric; Anguelovski, Isabelle; Carmin, JoAnn
As cities increasingly engage in climate adaptation planning, many are seeking to promote public participation and facilitate the engagement of different civil society actors. Still, the variations that exist among participatory approaches and the merits and tradeoffs associated with each are not well understood. This article examines the experiences of Quito (Ecuador) and Surat (India) to assess how civil society actors contribute to adaptation planning and implementation. The results showcase two distinct approaches to public engagement. The first emphasizes participation of experts, affected communities, and a wide array of citizens to sustain broadly inclusive programmes that incorporate local needs and concerns into adaptation processes and outcomes. The second approach focuses on building targeted partnerships between key government, private, and civil society actors to institutionalize robust decision-making structures, enhance abilities to raise funds, and increase means to directly engage with local community and international actors. A critical analysis of these approaches suggests more inclusive planning processes correspond to higher climate equity and justice outcomes in the short term, but the results also indicate that an emphasis on building dedicated multi-sector governance institutions may enhance long-term programme stability, while ensuring that diverse civil society actors have an ongoing voice in climate adaptation planning and implementation. Policy relevance: Many local governments in the Global South experience severe capacity and resource constraints. Cities are often required to devolve large-scale planning and decision-making responsibilities, such as those critical to climate adaptation, to different civil society actors. 
As a result, there need to be more rigorous assessments of how civil society participation contributes to the adaptation policy and planning process and what local social, political, and economic factors dictate the way cities select different approaches to public engagement. Also, since social equity and justice are key indicators for determining the effectiveness and sustainability of adaptation interventions, urban adaptation plans and policies must also be designed according to local institutional strengths and civic capacities in order to account for the needs of the poor and most vulnerable. Inclusivity, therefore, is critical for ensuring equitable planning processes and just adaptation outcomes.
</description>
<pubDate>Sat, 02 Apr 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158783</guid>
<dc:date>2016-04-02T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding inclusive innovation processes in agricultural systems: A middle-range conceptual model</title>
<link>https://hdl.handle.net/1721.1/158782</link>
<description>Understanding inclusive innovation processes in agricultural systems: A middle-range conceptual model
Hoffecker, Elizabeth
Inclusive innovation as a strategy for inclusive development has received increased attention from development policymakers, practitioners, and scholars in recent years. What these processes entail in practical terms, however, remains contested and under-theorized. This paper addresses the scarcity of mid-level analysis and models of inclusive innovation processes within complex systems, which are needed to enable a coherent empirical research agenda and to inform program theory-building, implementation, and evaluation. Looking to smallholder-oriented agricultural systems in the Global South, where the majority of inclusive innovation implementation and research has been located, this paper proposes that it is possible to identify the essential features and causal logic of these processes to create an empirically-derived, middle-range model with cross-context applicability. Drawing on methods from realist evaluation and social inquiry, I conducted a theory-driven, cross-case synthesis of three studies of inclusive innovation processes in agricultural systems, with one case each from South America, Southeast Asia, and Africa. I find that despite significant diversity in project designs, facilitation approaches, and local contexts, the three inclusive innovation processes unfolded in strikingly similar ways, and that this modus operandi can be modeled as a middle-range theory of change. In each case, I find that a consistent set of activities and processes changed the local context for the inclusive innovation initiative. These altered contextual factors interacted with ongoing programmatic activities in consistent ways to trigger processes of social learning, social capital strengthening, collective cognition, and consensus formation, which acted as causal mechanisms responsible for producing the intermediate outcomes that led to technical, organizational, and institutional system innovation. 
The middle-range model enables cross-context insights into how inclusive innovation processes work and what capacities are needed to facilitate them. It can also guide the adaptive management and assessment of these processes, while offering testable hypotheses to guide future empirical work and evaluation.
</description>
<pubDate>Wed, 20 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158782</guid>
<dc:date>2021-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>Strong suppression of heat conduction in a laboratory replica of galaxy-cluster turbulent plasmas</title>
<link>https://hdl.handle.net/1721.1/158781</link>
<description>Strong suppression of heat conduction in a laboratory replica of galaxy-cluster turbulent plasmas
Meinecke, J.; Tzeferacos, P.; Ross, J.S.; Bott, A.F.A.; Feister, S.; Park, H.-S.; Bell, A.R.; Blandford, R.; Berger, R.L.; Bingham, R.; Casner, A.; Chen, L.E.; Foster, J.; Froula, D.H.; Goyon, C.; Kalantar, D.; Koenig, M.; Lahmann, Brandon; Li, Chi-Kang; Lu, Y.; Palmer, C.A.J.; Petrasso, Richard D.; Poole, H.; Remington, B.; Reville, B.; Reyes, A.; Rigby, A.; Ryu, D.; Swadling, G.; Zylstra, A.; Miniati, F.; Sarkar, S.; Schekochihin, A.A.; Lamb, D.Q.; Gregori, G.
In conventional gases and plasmas, it is known that heat fluxes are proportional to temperature gradients, with collisions between particles mediating energy flow from hotter to colder regions and the coefficient of thermal conduction given by Spitzer’s theory. However, this theory breaks down in magnetized, turbulent, weakly collisional plasmas, although modifications are difficult to predict from first principles due to the complex, multiscale nature of the problem. Understanding heat transport is important in astrophysical plasmas such as those in galaxy clusters, where observed temperature profiles are explicable only in the presence of a strong suppression of heat conduction compared to Spitzer’s theory. To address this problem, we have created a replica of such a system in a laser laboratory experiment. Our data show a reduction of heat transport by two orders of magnitude or more, leading to large temperature variations on small spatial scales (as is seen in cluster plasmas).
Submitted for publication in Science Advances
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158781</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>National Diagnostic Working Group (NDWG) for inertial confinement fusion (ICF)/high-energy density (HED) science: The whole exceeds the sum of its parts</title>
<link>https://hdl.handle.net/1721.1/158780</link>
<description>National Diagnostic Working Group (NDWG) for inertial confinement fusion (ICF)/high-energy density (HED) science: The whole exceeds the sum of its parts
Kilkenny, K.D.; Hsing, W.W.; Batha, S.; Rochau, G.A.; Sangster, T.C.; Bell, P.M.; Bradley, D.K.; Chen, H.; Frenje, Johan A.; Gatu Johnson, Maria; Glebov, V. Yu; Leeper, R.J.; Mackinnon, A.J.; Regan, S.P.; Ross, J.S.
The National Diagnostic Working Group (NDWG) has led the effort to fully exploit the major inertial confinement fusion/high-energy density facilities in the US with the best available diagnostics. These diagnostics provide key data used to falsify early theories for ignition and suggest new theories, recently leading to an experiment that exceeds the Lawson condition required for ignition. The factors contributing to the success of the NDWG, collaboration and scope evolution, and the methods of accomplishment of the NDWG are discussed in this Review. Examples of collaborations in neutron and gamma spectroscopy, x-ray and neutron imaging, x-ray spectroscopy, and deep-ultraviolet Thomson scattering are given. An abbreviated history of the multi-decade collaborations and the present semiformal management framework is given together with the latest National Diagnostic Plan.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158780</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>SPARC X-ray diagnostics: technical and functional overview</title>
<link>https://hdl.handle.net/1721.1/158779</link>
<description>SPARC X-ray diagnostics: technical and functional overview
Vezinet, D.; Perks, C.J.; Panontin, E.; Normile, S.; Tinguely, R. Alex; Rice, John E.; Reinke, M.; Cario, M.; Raimond, J.; Hoffman, A.; Dubas, E.; Saltos, A.; Kennedy, R.
An overview is given of SPARC’s three main X-ray diagnostics with a focus on the functions they fulfill with respect to tokamak operation. The first is an in-vessel soft X-ray tomography diagnostic, aimed at providing early-campaign information on plasma position, MHD activity, and impurity content. The second is an ex-vessel set of hard X-ray scintillators aimed at detecting the presence of runaway electrons, in particular during plasma startup phases. The third is a set of X-ray Bragg spectrometers, located outside of the Tokamak Hall, aimed at informing on the ion temperature as an indirect constraint to reduce uncertainties on the fusion power, at providing plasma rotation velocity estimates, and at observing impurity emission. Finally, more technical details are given on the beamlines at the end of which the spectrometers are located. It is explained how their design ensures tritium containment and limits neutron radiation while providing a straight view into the plasma that can also be used for testing innovative new sensors.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158779</guid>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Each Other: Cross-Cutting Diagnostic Development Activities Between Magnetic and Inertial Confinement Fusion</title>
<link>https://hdl.handle.net/1721.1/158778</link>
<description>Learning from Each Other: Cross-Cutting Diagnostic Development Activities Between Magnetic and Inertial Confinement Fusion
Gatu Johnson, Maria; Schlossberg, D.; Appelbe, B.; Ball, J.; Bitter, M.; Casey, D.T.; Celora, A.; Ceurvorst, L.; Chen, H.; Conroy, S.; Crilly, A.; Croci, G.; Dal Molin, A.; Delgado-Aparicio, L.; Efthimion, P.; Eriksson, B.; Eriksson, J.; Forrest, C.; Fry, C.; Frenje, Johan A.; Gao, L.; Geppert-Kleinrath, H.; Geppert-Kleinrath, V.; Gilson, E.; Heuer, P.V.; Hill, K.; Khater, H.; Kraus, F.; Laggner, F.; Lawrence, Y.; Mackie, S.; Meaney, K.; Milder, A.; Moore, A.; Nocente, M.; Pablant, N.; Panontin, E.; Rebai, M.; Reichelt, Benjamin L.; Reinke, M.; Rigamonti, D.; Ross, J.S.; Rubery, M.; Russell, L.; Tardocchi, M.; Tinguely, R. Alex; Wink, Christopher W.
Inertial and Magnetic Confinement Fusion (ICF and MCF) follow different paths toward goals that are largely common. In this paper, the claim is made that progress can be accelerated by learning from each other across the two fields. Examples of successful cross-community knowledge transfer are presented that highlight the gains from working together, specifically in the areas of high-resolution x-ray imaging spectroscopy and neutron spectrometry. Opportunities for near and mid-term collaboration are identified, including in Chemical Vapor Deposition (CVD) diamond detector technology, using gamma rays to monitor fusion gain, handling neutron-induced backgrounds and developing radiation hard technology, and collecting fundamental supporting data needed for diagnostic analysis. Fusion research is rapidly moving into the igniting and burning regimes, posing new opportunities and challenges for ICF and MCF diagnostics. This includes new physics to probe, such as alpha heating; increasingly harsher environmental conditions; and (in the slightly longer term) the need for new plant monitoring diagnostics. Substantial overlap is expected in all of these emerging areas, where joint development across the two subfields as well as between public and private researchers can be expected to speed up advancement for all.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158778</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A knock-on deuteron imager for measurements of fuel and hotspot asymmetry in direct-drive inertial confinement fusion implosions (invited)</title>
<link>https://hdl.handle.net/1721.1/158777</link>
<description>A knock-on deuteron imager for measurements of fuel and hotspot asymmetry in direct-drive inertial confinement fusion implosions (invited)
Rinderknecht, H.G.; Heuer, P.V.; Kunimune, Justin H.; Adrian, Patrick J.; Knauer, J.P.; Theobald, W.; Fairbanks, R.; Brannon, B.; Ceurvorst, L.; Gopalaswamy, V.; Williams, C.A.; Radha, P.B.; Regan, S.P.; Gatu Johnson, Maria; Séguin, Frederick H.; Frenje, Johan A.
A knock-on deuteron imager (KoDI) has been implemented to measure the fuel and hotspot asymmetry of cryogenic inertial confinement fusion implosions on OMEGA. Energetic neutrons produced by D–T fusion elastically scatter (“knock on”) deuterons from the fuel layer with a probability that depends on ρR. Deuterons above 10 MeV are produced by near-forward scattering, and imaging them is equivalent to time-integrated neutron imaging of the hotspot. Deuterons below 6 MeV are produced by a combination of side scattering and ranging in the fuel, and encode information about the spatial distribution of the dense fuel. The KoDI instrument consists of a multi-penumbral aperture positioned 10–20 cm from the implosion using a ten-inch manipulator and a detector pack at 350 cm from the implosion to record penumbral images with magnification of up to 35×. Range filters and the intrinsic properties of CR-39 are used to distinguish different charged-particle images by energy along the same line of sight. Image plates fielded behind the CR-39 record a 10 keV x-ray image using the same aperture. A maximum-likelihood reconstruction algorithm has been implemented to infer the source from the projected penumbral images. The effects of scattering and aperture charging on the instrument point-spread function are assessed. Synthetic data are used to validate the reconstruction algorithm and assess an appropriate termination criterion. Significant aperture charging has been observed in the initial experimental dataset, and increases with aperture distance from the implosion, consistent with a simple model of charging by laser-driven EMP.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158777</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>First application of a digital mirror Langmuir probe for real-time plasma diagnosis</title>
<link>https://hdl.handle.net/1721.1/158776</link>
<description>First application of a digital mirror Langmuir probe for real-time plasma diagnosis
McCarthy, William; Golfinopoulos, T.; Woller, K.B.; Vincent, C.; Kuang, Adam Q.; Labombard, Brian
For the first time, a digital Mirror Langmuir probe (MLP) has successfully sampled plasma temperature, ion saturation current, and floating potential together on a single probe tip in real time in a radio-frequency driven helicon linear plasma device. This is accomplished by feedback control of the bias sweep to ensure a good fit to I-V characteristics with a high frequency, high power digital amplifier and field-programmable gate array (FPGA) controller. Measurements taken by the MLP were validated by a low speed I-V characteristic manually collected during static plasma conditions. Plasma fluctuations, induced by varying the axial magnetic field (f̃ = 10 Hz), were also successfully monitored with the MLP. Further refinement of the digital MLP pushes it towards a turn-key system that minimizes the time to deployment and lessens the learning curve, positioning the digital MLP as a capable diagnostic for the study of low radio-frequency plasma physics. These demonstrations bolster confidence in fielding such digital MLP diagnostics in magnetic confinement experiments with high spatial and adequate temporal resolution such as edge plasma, scrape-off layer, and divertor probes.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158776</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing x-ray transmission through filters used in high energy density physics diagnostics</title>
<link>https://hdl.handle.net/1721.1/158775</link>
<description>Characterizing x-ray transmission through filters used in high energy density physics diagnostics
Pearcy, J.; Kabadi, N.; Birkel, A.; Adrian, P.; Lahmann, B.; Reichelt, B.; Johnson, T.M.; Sutcliffe, G.; Kunimune, Justin H.; Gatu Johnson, Maria; Bose, A.; Li, Chi-Kang
We report on the design and implementation of a new system used to characterize the energy-dependent x-ray transmission curve, Θ(E), through filters used in high-energy density physics diagnostics. Using an Amptek X-123-CdTe x-ray spectrometer together with a partially depleted silicon surface barrier detector, both the energy spectrum and total emission of an x-ray source have been accurately measured. By coupling these detectors with a custom PROTO-XRD x-ray source with interchangeable cathodes, accurate characterizations of Θ(E) for filters of varying materials and thicknesses have been obtained. The validity of the technique has been confirmed by accurately reproducing areal densities for high-purity filters with known x-ray transmission properties. In this paper, the experimental setup is described and the results of absorption calibrations performed on a variety of different filters are presented.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158775</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interpolating individual line-of-sight neutron spectrometer measurements onto the “sky” at the National Ignition Facility (NIF)</title>
<link>https://hdl.handle.net/1721.1/158774</link>
<description>Interpolating individual line-of-sight neutron spectrometer measurements onto the “sky” at the National Ignition Facility (NIF)
Hartouni, E.P.; Bionta, R.M.; Casey, D.T.; Eckart, M.J.; Gatu Johnson, Maria; Grim, G.P.; Hahn, K.D.; Jeet, J.; Kerr, S.M.; Kritcher, A.L.; MacGowan, B.J.; Moore, A.S.; Munro, D.H.; Schlossberg, D.J.; Zylstra, A.
Nuclear diagnostics provide measurements of inertial confinement fusion (ICF) implosions used as metrics of performance for the shot. The interpretation of these measurements for shots with low mode asymmetries requires a way of combining the data to produce a “sky map” where the individual line-of-sight values are used to interpolate to other positions in the sky. These interpolations can provide information regarding the orientation of the low mode asymmetries. We describe the interpolation method, associated uncertainties, and the correlations between different metrics, e.g. Tion, down scatter ratio (DSR) and hot-spot velocity direction. This work is also related to recently reported studies [H. G. Rinderknecht et al., Phys. Rev. Lett. 124, 145002 (2020) and K. M. Woo et al., Phys. Plasmas 27, 062702 (2020)] of low mode asymmetries. We report an analysis that makes use of a newly commissioned line-of-sight, a scheme for incorporating multiple neutron spectrum measurement types, and recent work on the sources of implosion asymmetry to provide a more complete picture of implosion performance.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158774</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Lattice Representation for the Curl Equations of Maxwell Equations</title>
<link>https://hdl.handle.net/1721.1/158773</link>
<description>Quantum Lattice Representation for the Curl Equations of Maxwell Equations
Vahala, George; Hawthorne, John; Vahala, Linda; Ram, Abhay K.; Soe, Min
A quantum lattice representation (QLA) is devised for the initial value problem of one-dimensional (1D) propagation of an electromagnetic disturbance in a scalar dielectric medium satisfying directly only the two curl equations of Maxwell. It is found that only 4 qubits/node are required. The collision, streaming, and potential operators are determined so as to recover the two curl equations to second order. Both polarizations are considered.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158773</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Building a Three-Dimensional Quantum Lattice Algorithm for Maxwell Equations</title>
<link>https://hdl.handle.net/1721.1/158772</link>
<description>Building a Three-Dimensional Quantum Lattice Algorithm for Maxwell Equations
Vahala, George; Vahala, Linda; Soe, Min; Ram, Abhay K.
A three-dimensional quantum lattice algorithm (QLA) for electromagnetic wave propagation is being developed by stitching together the individual QLAs for 1D wave propagation in the three orthogonal Cartesian directions.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158772</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron heating in kinetic-Alfvén-wave turbulence</title>
<link>https://hdl.handle.net/1721.1/158771</link>
<description>Electron heating in kinetic-Alfvén-wave turbulence
Zhou, Muni; Liu, Zhuo; Loureiro, Nuno F.
We report analytical and numerical investigations of sub-ion-scale turbulence in low-beta plasmas using a rigorous reduced kinetic model. We show that efficient electron heating occurs, and is primarily due to Landau damping of kinetic Alfvén waves, as opposed to Ohmic dissipation. This collisionless damping is facilitated by the local weakening of advective nonlinearities and the ensuing unimpeded phase mixing near intermittent current sheets, where free energy concentrates. The linearly damped energy of electromagnetic fluctuations at each scale explains the steepening of their energy spectrum with respect to a fluid model where such damping is excluded (i.e., a model that imposes an isothermal electron closure). The use of a Hermite-polynomial representation to express the velocity-space dependence of the electron distribution function enables us to obtain an analytical, lowest-order solution for the Hermite moments of the distribution, which is borne out by numerical simulations.
Submitted for publication in PNAS: Proceedings of the National Academy of Sciences of the United States of America
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158771</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the minimum transport required to passively suppress runaway electrons in SPARC disruptions</title>
<link>https://hdl.handle.net/1721.1/158770</link>
<description>On the minimum transport required to passively suppress runaway electrons in SPARC disruptions
Tinguely, R. Alex; Pusztai, I.; Izzo, V.A.; Särkimäki, K.; Fülöp, T.; Garnier, D.T.; Granetz, R.S.; Hoppe, M.; Paz-Soldan, C.; Sunström, A.; Sweeney, Ryan
In [V.A. Izzo et al 2022 Nucl. Fusion 62 096029], state-of-the-art modeling of thermal and current quench (CQ) MHD coupled with a self-consistent evolution of runaway electron (RE) generation and transport showed that a non-axisymmetric (n = 1) in-vessel coil could passively prevent RE beam formation during disruptions in SPARC, a compact high-field tokamak projected to achieve a fusion gain Q &gt; 2 in DT plasmas. However, such suppression requires finite transport of REs within magnetic islands and re-healed flux surfaces; conservatively assuming zero transport in these regions leads to an upper bound of RE current ~1 MA compared to ~8.7 MA of pre-disruption plasma current. Further investigation finds that core-localized electrons, within r/a &lt; 0.3 and with kinetic energies ~0.2-15 MeV, contribute most to the RE plateau formation. Yet only a relatively small amount of transport, i.e. a diffusion coefficient ~18 m^2/s, is needed in the core to fully mitigate these REs. Properly accounting for (i) the CQ electric field's effect on RE transport in islands and (ii) the contribution of significant RE currents to disruption MHD may help achieve this.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158770</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstration of TNSA proton radiography on the National Ignition Facility Advanced Radiographic Capability (NIF-ARC) laser</title>
<link>https://hdl.handle.net/1721.1/158769</link>
<description>Demonstration of TNSA proton radiography on the National Ignition Facility Advanced Radiographic Capability (NIF-ARC) laser
Simpson, R.A.; Mariscal, D.A.; Kim, J.; Scott, G.G.; Williams, G.J.; Grace, E.; McGuffey, C.; Wilks, S.; Kemp, A.; Lemos, N.; Djordjevic, B.Z.; Folsom, E.; Kalantar, D.; Zacharias, R.; Pollock, B.; Moody, J.; Beg, F.; Morace, A.; Iwata, N.; Sentoku, Y.; Manuel, M. J.-E.; Mauldin, M.; Quinn, M.; Youngblood, K.; Gatu Johnson, Maria; Lahmann, B.; Haefner, C.; Neely, D.; Ma, T.
Proton radiography using short-pulse laser drivers is an important tool in high-energy density (HED) science for dynamically diagnosing key characteristics in plasma interactions. Here we detail the first demonstration of target-normal sheath acceleration (TNSA)-based proton radiography on the NIF-ARC laser system, aided by the use of compound parabolic concentrators (CPCs). The multi-kJ energies available at the NIF-ARC laser allow for a high-brightness proton source for radiography, thus enabling a wide range of applications in HED science. In this demonstration, proton radiography of a physics package was performed, and this work details the spectral properties of the TNSA proton probe as well as a description of the resulting radiography quality.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158769</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>High resolution density pedestal measurements during edge localized modes by short-pulse reflectometry in the TCV tokamak</title>
<link>https://hdl.handle.net/1721.1/158768</link>
<description>High resolution density pedestal measurements during edge localized modes by short-pulse reflectometry in the TCV tokamak
Molina Cabrera, Pedro A.; Labit, B.; Coda, S.; Porte, L.
This publication presents high spatio-temporal resolution (mm/μs) density profile measurements of the pedestal top during type I, III, and small edge localized mode (ELM) H-mode plasmas in the Tokamak à Configuration Variable (TCV). These measurements were performed using a novel short-pulse reflectometer. Average inter-ELM density profiles are obtained via conditional averaging using the Dα trace as ELM indicator. Changes to the pedestal density profile gradients prior to type-III ELMs reveal unique pedestal dynamics leading to the ELM crash which can provide important experimental data for validation of non-linear MHD ELM simulations. The small-ELM scenario is found to feature a ∼25-35 kHz quasi-coherent density fluctuation near the separatrix rho_psi ∼0.993-1.05 not observed during a similar type-I ELM discharge. This oscillation is also found in low-field-side magnetic pick-up probes displaying a ballooning character and n=+1 toroidal mode number. This oscillation could help explain the markedly different pedestal dynamics observed in the small-ELM regime.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158768</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Nuclear diagnostics for Inertial Confinement Fusion (ICF) plasmas</title>
<link>https://hdl.handle.net/1721.1/158767</link>
<description>Nuclear diagnostics for Inertial Confinement Fusion (ICF) plasmas
Frenje, Johan A.
The field of nuclear diagnostics for Inertial Confinement Fusion (ICF) is broadly reviewed from its beginning in the seventies to present day. During this time, the sophistication of the ICF facilities and the suite of nuclear diagnostics have substantially evolved, generally a consequence of the efforts and experience gained on previous facilities. As the fusion yields have increased several orders of magnitude during these years, the quality of the nuclear-fusion-product measurements has improved significantly, facilitating an increased level of understanding about the physics governing the nuclear phase of an ICF implosion. The field of ICF has now entered an era where the fusion yields are high enough for nuclear measurements to provide spatial, temporal and spectral information, which have proven indispensable to understanding the performance of an ICF implosion. At the same time, the requirements on the nuclear diagnostics have also become more stringent. To put these measurements into context, this review starts by providing some historical remarks about the field of ICF and the role of nuclear diagnostics, followed by a brief overview of the basic physics that characterize the nuclear phase and performance of an ICF implosion. A technical discussion is subsequently presented of the neutron, gamma-ray, charged-particle and radiochemistry diagnostics that are, or have been, routinely used in the field of ICF. This discussion is followed by an elaboration of the current view of the next-generation nuclear diagnostics. Since the seventies, the overall progress made in the areas of nuclear diagnostics and scientific understanding of an ICF implosion has been enormous, and with the implementation of new high-fusion-yield facilities world-wide, the next-generation nuclear diagnostics will play an even more important role for decades to come.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158767</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>First observations from the Kr multi-monochromatic X-ray imager for time and spatially resolved diagnosis of hot implosion cores</title>
<link>https://hdl.handle.net/1721.1/158766</link>
<description>First observations from the Kr multi-monochromatic X-ray imager for time and spatially resolved diagnosis of hot implosion cores
Gallardo-Diaz, E.; Mancini, R.C.; Clapp, J.; Adrian, Patrick J.; Evans, Tucker E.; Frenje, Johan A.; Florido, R.; Kruse, M.K.G.; Nagayama, T.
This paper presents initial findings from the recently deployed Kr Multi-Monochromatic X-ray Imager (MMI) at the Omega facility. The experiment focuses on exploring implosion dynamics in exploding pusher capsules at three distinct initial gas fill densities. Utilizing time-gated and spatially integrated measurements, core size, electron temperature (Te), and electron densities (ne) are extracted through the analysis of the spectral region encompassing the Kr Heα and its satellite lines. A comprehensive spectral database, incorporating atomic kinetics, spectroscopic quality radiation transport, and Stark-broadened line shapes, has been developed for rigorous data analysis. These measurements underscore the utility of the new Kr MMI instrument which, combined with sophisticated analysis techniques, enables the diagnosis of plasma conditions at Te &gt; 2000 eV, thereby extending the capabilities beyond the prior Ar MMI design. This is an important stepping stone for achieving time-gated and space-resolved diagnostics of electron temperature, electron density, and heat transport in high temperature implosion cores.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158766</guid>
<dc:date>2024-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empirical probability and machine learning analysis of m, n = 2, 1 tearing mode onset parameter dependence in DIII-D H-mode scenarios</title>
<link>https://hdl.handle.net/1721.1/158765</link>
<description>Empirical probability and machine learning analysis of m, n = 2, 1 tearing mode onset parameter dependence in DIII-D H-mode scenarios
Bardóczi, L.; Richner, N.J.; Zhu, Jinxiang; Rea, Cristina; Logan, N.C.
m, n = 2, 1 tearing mode onset empirical probability and machine learning analyses of a multiscenario DIII-D database of over 14 000 H-mode discharges show that the normalized plasma beta, the rotation profile, and the magnetic equilibrium shape have the strongest impact on the 2,1 tearing mode stability, in qualitative agreement with neoclassical tearing modes (m and n are the poloidal and toroidal mode numbers, respectively). In addition, 2,1 tearing modes are most likely to destabilize when n &gt; 1 tearing modes are already present in the core plasma. The covariance matrix of tearing-sensitive plasma parameters takes a nearly block-diagonal form, with the blocks incorporating thermodynamic, current and safety factor profile, separatrix shape, and plasma flow parameters, respectively. This suggests a number of paths to improved stability at fixed pressure and edge safety factor, primarily by preserving a minimum of 1 kHz differential rotation, increasing the minimum safety factor above unity, using an upper single null magnetic configuration, and reducing the core impurity radiation. In addition, lower triangularity, lower elongation, and lower pedestal pressure may also help to improve stability. The electron and ion temperature, collisionality, resistivity, internal inductance, and the parallel current gradient appear to only weakly correlate with the 2,1 tearing mode onsets in this database.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158765</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Three dimensional low-mode areal-density non-uniformities in indirect-drive implosions at the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158764</link>
<description>Three dimensional low-mode areal-density non-uniformities in indirect-drive implosions at the National Ignition Facility
Casey, D.T.; Landen, O.L.; Hartouni, E.; Bionta, R.M.; Hahn, K.D.; Volegov, P.L.; Fittinghoff, D.N.; Geppert-Kleinrath, V.; Wilde, C.H.; Milovich, C.H.; Smalyuk, V.A.; Field, J.E.; Hurricane, O.A.; Zylstra, A.B.; Kritcher, A.L.; Clark, D.S.; Young, C.V.; Nora, R.C.; Callahan, D.A.; MacGowan, B.J.; Munro, D.H.; Spears, B.K.; Peterson, J.L.; Gaffney, J.A.; Humbird, K.D.; Kruse, M.K.G.; Moore, A.S.; Schlossberg, D.J.; Gatu Johnson, Maria; Frenje, Johan A.
To achieve hotspot ignition, an inertial confinement fusion (ICF) implosion must achieve high hotspot pressure that is inertially confined by a dense shell of DT fuel. This requires a symmetric implosion having high in-flight shell velocity and high areal density at stagnation. The size of the driver and scale of the capsule required can be minimized by maintaining a high efficiency of energy coupling from the imploding shell to the hotspot. Significant 3D low-mode asymmetries, however, are commonly observed in indirect-drive implosions and reduce the coupling of shell kinetic energy to the hotspot. To better quantify the magnitudes and impacts of shell density asymmetries, we have developed new analysis techniques and analytic models [Hurricane et al., Physics of Plasmas 27 (6), 062704 (2020)]. To build confidence in the underlying data, we have also developed an analytic neutron transport model to cross-compare two independent measurements of asymmetry, which shows excellent agreement across shots for mode-1 (l=1). This work also demonstrates that asymmetry can introduce potential sampling bias into down-scattered ratio measurements, causing the solid-angle-average and uncertainty-weighted-average down-scattered ratios to differ significantly. Diagnosing asymmetries beyond mode-1 (l&gt;1) presents significant challenges. Using new diagnostic instruments and analysis techniques, however, evidence of significant Legendre mode P2 (l=2, m=0) and additional 3D asymmetries (l&gt;1, m≠0) is beginning to emerge from the high-precision activation diagnostic data (RTNADs) and down-scattered neutron imaging data.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158764</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shock Ignition Laser-Plasma Interactions in Ignition-Scale Plasmas</title>
<link>https://hdl.handle.net/1721.1/158763</link>
<description>Shock Ignition Laser-Plasma Interactions in Ignition-Scale Plasmas
Scott, R.H.H.; Glize, K.; Antonelli, L.; Khan, M.; Theobald, W.; Wei, M.; Betti, R.; Stoeckl, C.; Seaton, A.G.; Arber, T.D.; Barlow, D.; Goffrey, T.; Bennett, K.; Garbett, W.; Atzeni, S.; Casner, A.; Batani, D.; Li, Chi-Kang; Woolsey, N.
We use a subignition-scale laser, the 30 kJ Omega, and a novel shallow-cone target to study laser-plasma interactions at the ablation-plasma density scale lengths and laser intensities anticipated for direct-drive shock-ignition implosions at National Ignition Facility scale. Our results show that, under these conditions, the dominant instability is convective stimulated Raman scatter, with experimental evidence of two plasmon decay (TPD) only when the density scale length is reduced. Particle-in-cell simulations indicate this is due to TPD being shifted to lower densities, removing the experimental back-scatter signature and reducing the hot-electron temperature. The experimental laser energy coupling to hot electrons was found to be 1%–2.5%, with electron temperatures between 35 and 45 keV. Radiation-hydrodynamics simulations employing these hot-electron characteristics indicate that they should not preheat the fuel in MJ-scale shock-ignition experiments.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Mon, 01 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158763</guid>
<dc:date>2021-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media</title>
<link>https://hdl.handle.net/1721.1/158762</link>
<description>Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media
Koukoutsis, Efstratios; Hizanidis, Kyriakos; Ram, Abhay K.; Vahala, George
The propagation and scattering of electromagnetic waves in dielectric media is of theoretical and experimental interest in a wide variety of fields. An understanding of observational results generally requires a numerical solution of Maxwell equations - usually implemented on conventional computers using sophisticated numerical algorithms. In recent years, advances in quantum information science and in the development of quantum computers have piqued curiosity about taking advantage of these resources for an alternate numerical approach to Maxwell equations. This requires a reformulation of the classical Maxwell equations into a form suitable for quantum computers which, unlike conventional computers, are limited to unitary operations. In this paper, a unitary framework is developed for the propagation of electromagnetic waves in a spatially inhomogeneous, passive, nondispersive, and anisotropic dielectric medium. For such a medium, generally, the evolution operator in the combined Faraday-Ampere equations is not unitary. There are two steps needed to convert this equation into a unitary evolution equation. In the first step, a weighted Hilbert space is formulated in which the generator of dynamics is a pseudo-Hermitian operator. In the second step, a Dyson map is constructed which maps the weighted-physical-Hilbert space to the original Hilbert space. The resulting evolution equation for the electromagnetic wave fields is unitary. Utilizing the framework developed in these steps, a unitary evolution equation is derived for electromagnetic wave propagation in a uniaxial dielectric medium. The resulting form is suitable for quantum computing.
Submitted for publication in Physical Review A
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158762</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-code estimation of DTT edge transport parameters</title>
<link>https://hdl.handle.net/1721.1/158761</link>
<description>Multi-code estimation of DTT edge transport parameters
Balbinot, L.; Rubino, G.; Casiraghi, I.; Meineri, C.; Frassinetti, L.; Aucone, L.; Mantica, P.; Innocente, P.; Wigram, Mike; JET contributors; Alcator C-Mod Team
The main goal of the Divertor Tokamak Test facility (DTT) is to operate with a high value of the power-exhaust-relevant parameter PSOL/R in plasma scenarios similar to those foreseen for the Demonstration Fusion Power Plant (DEMO) in terms of low collisionality and neutral opacity. Given these unique characteristics, accurate modelling of the principal scenario is necessary for machine design. In edge numerical codes, cross-field transport profiles have a high impact on modelling results. This work aims to provide a coherent set of transport parameters for DTT full-power (FP) single-null (SN) scenario edge modelling. To evaluate such parameters for DTT, a transport analysis on the current machine has been performed using SOLEDGE2D-EIRENE and SOLPS-ITER. The transport parameters to be used in the simulations of the DTT single-null scenario were selected using two complementary methods. The first is the modelling of JET and Alcator C-Mod (C-Mod) with SOLEDGE2D-EIRENE and SOLPS-ITER, validating transport parameters by comparing modelling results to experimental data from pulses which are considered DTT-relevant. JET pulses were selected with the highest auxiliary input power (from 29 to 33 MW), plasma current and toroidal field to better match DTT parameters; nitrogen- and neon-seeded pulses were selected to check possible seeding material dependencies. The considered C-Mod pulse better matches DTT plasma density and neutral opacity. Transport parameters are then scaled to DTT according to scaling laws. The second method derives the transport parameters by tuning their values inside the DTT separatrix to reproduce the pedestal profiles predicted by the EPED model via the Europed code and applied in DTT.
The derived set of DTT transport parameters is consistent with the results obtained by modelling present machines, reproduces the expected heat flux decay length in detached conditions and, inside the separatrix, reproduces the predicted pedestal using transport parameters which are coherent with what is predicted by the quasi-linear turbulent model QuaLiKiz.
Submitted for publication in Nuclear Materials and Energy
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158761</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of Passive and Structural Conductors for Tokamaks Using Thin-Wall Eddy Current Modeling</title>
<link>https://hdl.handle.net/1721.1/158760</link>
<description>Design of Passive and Structural Conductors for Tokamaks Using Thin-Wall Eddy Current Modeling
Battey, A.F.; Hansen, C.; Garnier, D.; Weisberg, D.; Paz-Soldan, C.; Sweeney, Ryan; Tinguely, R. Alex; Creely, A.J.
A new three-dimensional electromagnetic modeling tool (ThinCurr) has been developed using the existing PSI-Tet finite-element code in support of conducting structure design work for both the SPARC and DIII-D tokamaks. Within this framework a 3D conducting structure model was created for both the SPARC and DIII-D tokamaks in the thin-wall limit. This model includes accurate details of the vacuum vessel and other conducting structural elements with realistic material resistivities. This model was leveraged to support the design of a passive runaway electron mitigation coil (REMC), studying the effect of various design parameters, including coil resistivity, current quench duration, and plasma vertical position, on the effectiveness of the coil. The REMC is a non-axisymmetric coil designed to passively drive large non-axisymmetric fields during the plasma disruption thereby destroying flux surfaces and deconfining RE seed populations. These studies indicate that current designs should apply substantial 3D fields at the plasma surface during future plasma current disruptions as well as highlight the importance of having the REMC conductors away from the machine midplane in order to ensure they are robust to off-normal disruption scenarios.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158760</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraints on ion velocity distributions from fusion product spectroscopy</title>
<link>https://hdl.handle.net/1721.1/158759</link>
<description>Constraints on ion velocity distributions from fusion product spectroscopy
Crilly, A.J.; Appelbe, B.D.; Mannion, O.M.; Taitano, W.; Hartouni, E.P.; Moore, A.S.; Gatu Johnson, Maria; Chittenden, J.P.
Recent inertial confinement fusion experiments have shown primary fusion spectral moments which are incompatible with a Maxwellian velocity distribution description. These results show that an ion kinetic description of the reacting ions is necessary. We develop a theoretical classification of non-Maxwellian ion velocity distributions using the spectral moments. At the mesoscopic level, a monoenergetic decomposition of the velocity distribution reveals there are constraints on the space of spectral moments accessible by isotropic distributions. General expressions for the directionally dependent spectral moments of anisotropic distributions are derived. At the macroscopic level, a distribution of fluid element velocities modifies the spectral moments in a constrained manner. Experimental observations can be compared to these constraints to identify the character and isotropy of the underlying reactant ion velocity distribution and determine if the plasma is hydrodynamic or kinetic.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158759</guid>
<dc:date>2022-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>A simple screening current simulation method using equivalent circuit model for REBCO pancake coils</title>
<link>https://hdl.handle.net/1721.1/158758</link>
<description>A simple screening current simulation method using equivalent circuit model for REBCO pancake coils
Noguchi, So; Imai, Teki; Park, Dongkeun; Hahn, Seungyong; Iwasa, Yukikazu
The screening current induced in rare-earth barium copper oxide (REBCO) tape generates an unwanted irregular magnetic field. The screening current-induced field (SCIF) is a challenging issue for MRI, NMR, and accelerator magnets composed of REBCO coils. A few FEM-based simulation methods have been proposed to estimate the SCIF; however, they require a long computation time. Recently, we have proposed a simple SCIF computation method based on the self and mutual inductances of REBCO pancake coils and screening current radial paths on the top and bottom of pancake coils. The accuracy of the proposed method is not excellent; however, the computation time is quite short. In this paper, we report an equivalent circuit model that includes the self and mutual inductances of a REBCO pancake coil and the screening current radial path. Moreover, with this proposed method, we can compute the SCIF of no-insulation (NI) REBCO pancake coils, which is not the case with the previously proposed FEM-based simulation method. The proposed method has been validated by experimentation and is available online.
Submitted for publication in Superconducting Science and Technology
</description>
<pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158758</guid>
<dc:date>2020-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Realization of thousand-second improved confinement plasma with Super I-mode in Tokamak EAST</title>
<link>https://hdl.handle.net/1721.1/158757</link>
<description>Realization of thousand-second improved confinement plasma with Super I-mode in Tokamak EAST
Song, Yuntao; Zou, Xiaolan; Gong, Xianzu; Becoulet, Alain; Buttery, Richard; Bonoli, Paul T.; Hoang, Tuong; Zhong, Xiaoming; Liu, Adi; Li, Erzhong; Zang, Qing; Qian, Jinping; Liu, Haiqing; Wang, Liang; Xu, Liqing; Zhang, Ling; Li, Guoqiang; Garofalo, Andrea; Osborne, Tom; Leonard, Tony; Baek, Seung Gyou; Wallace, Greg M.; Wang, Shouxin; Chu, Yuqi; Zhang, Tao; Duan, Yanmin; Lian, Hui; Zhang, Xuexi; Jin, Yifei; Ding, Rui; Lyu, Bo; Zhang, Bin; Wang, Xiaojie; Ding, B.; Li, Miaohui; Zhang, Xinjun; Qing, Chengming; Xi, Weibin; Zhang, Jian; Huang, Liansheng; Yao, Damao; Hu, Yanlan; Zuo, Guizhong; Yuan, Qinping; Zhou, Zhiwei; Wang, Mao; Xu, Handong; Xie, Yahong; Wang, Zhengchu; Xu, Gupcheng; Hu, Jiansheng; Lu, Kun; Liu, Fukun; Wan, Baonian; Li, Jiangang; EAST Team
Mastering nuclear fusion, an abundant, safe, and environmentally competitive energy source, is a great challenge for humanity. The tokamak represents one of the most promising paths toward controlled fusion. Obtaining a high-performance, steady-state, and long-pulse plasma regime remains a critical issue. Recently, a major breakthrough in steady-state operation was made on the Experimental Advanced Superconducting Tokamak (EAST). A steady-state plasma with a world-record pulse length of 1056 s was obtained, where the density and the divertor peak heat flux were well controlled, with no core impurity accumulation, and a new high-confinement and self-organizing regime (Super I-mode = I-mode + e-ITB) was discovered and demonstrated. These achievements contribute to the integration of fusion plasma technology and physics, which is essential to operate next-step devices.
Submitted for publication in Science Advances
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158757</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proton imaging of high-energy-density laboratory plasmas</title>
<link>https://hdl.handle.net/1721.1/158756</link>
<description>Proton imaging of high-energy-density laboratory plasmas
Schaeffer, D.B.; Bott, A.F.A.; Borghesi, M.; Flippo, K.A.; Fox, W.; Fuchs, J.; Li, Chi-Kang; Séguin, Frederick H.; Park, H.-S.; Tzeferacos, P.; Willingale, L.
Proton imaging has become a key diagnostic for measuring electromagnetic fields in high-energy-density (HED) laboratory plasmas. Compared to other techniques for diagnosing fields, proton imaging is a measurement that can simultaneously offer high spatial and temporal resolution and the ability to distinguish between electric and magnetic fields without the protons perturbing the plasma of interest. Consequently, proton imaging has been used in a wide range of HED experiments, from inertial-confinement fusion to laboratory astrophysics. An overview is provided on the state of the art of proton imaging, including a discussion of experimental considerations like proton sources and detectors, the theory of proton-imaging analysis, and a survey of experimental results demonstrating the breadth of applications. Topics at the frontiers of proton-imaging development are also described, along with an outlook on the future of the field.
Submitted for publication in Reviews of Modern Physics
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158756</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Realization of a Gas Puff Imaging System on the Wendelstein 7-X Stellarator</title>
<link>https://hdl.handle.net/1721.1/158755</link>
<description>The Realization of a Gas Puff Imaging System on the Wendelstein 7-X Stellarator
Terry, James L.; von Stechow, A.; Baek, Seung Gyou; Ballinger, S.B.; Grulke, O.; von Sehren, C.; Laube, R.; Killer, C.; Scharmer, F.; Brunner, K.J.; Knauer, J.; Bois, S.; W7-X Team
A system for studying the spatio-temporal dynamics of fluctuations in the boundary of the W7-X plasma using the "Gas-Puff Imaging" (GPI) technique has been designed, constructed, installed, and operated. This GPI system addresses a number of challenges specific to long-pulse superconducting devices like W7-X, including the long distance between the plasma and the vacuum vessel wall, the long distance between the plasma and diagnostic ports, the range of last closed flux surface (LCFS) locations for different magnetic configurations in W7-X, and management of heat loads on the system's plasma-facing components. The system features a pair of "converging-diverging" nozzles for partially collimating the gas puffed locally approximately 110 mm radially outboard of the plasma boundary, a pop-up turning mirror for viewing the gas puff emission from the side (which also acts as a shutter for the re-entrant vacuum window), and a high-throughput optical system that collects visible emission resulting from the interaction between the puffed gas and the plasma and directs it along a water-cooled re-entrant tube directly onto the 8 x 16 pixel detector array of the fast camera. The DEGAS 2 neutrals code was used to simulate the H-alpha (656 nm) and the HeI (587 nm) line emission expected from well-characterized gas-puffs of H2 and He and excited within typical edge plasma profiles in W7-X, thereby predicting line brightnesses used to reduce the risks associated with system sensitivity and placement of the field of view. Operation of GPI on W7-X shows excellent signal-to-noise ratios (&gt;100 at 2 Mframes/s) over the field of view for minimally perturbing gas puffs. The GPI system provides detailed measurements of the 2-dimensional (radial and poloidal) dynamics of plasma fluctuations in the W7-X edge and scrape-off layer, and in and around the magnetic islands outside the LCFS that make up the island divertor configuration employed on W7-X.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158755</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Determining spectral response of the National Ignition Facility particle time of flight diagnostic to x rays</title>
<link>https://hdl.handle.net/1721.1/158754</link>
<description>Determining spectral response of the National Ignition Facility particle time of flight diagnostic to x rays
Reichelt, Benjamin L.; Kabadi, Neel V.; Pearcy, Jacob A.; Gatu Johnson, Maria; Dannhoff, S.; Lahmann, Brandon; Frenje, Johan A.; Li, Chi-Kang; Sutcliffe, G.; Kunimune, Justin H.; Petrasso, Richard D.; Sio, H.; Moore, A.; Mariscal, E.; Hartouni, E.
The Particle Time of Flight (PTOF) diagnostic is a chemical vapor deposition (CVD) diamond detector used for measuring multiple nuclear bang times at the National Ignition Facility (NIF). Due to the non-trivial, polycrystalline structure of these detectors, individual characterization and measurement are required to interrogate the sensitivity and behavior of charge carriers. In this paper, a process is developed for determining the x-ray sensitivity of PTOF detectors and relating it to intrinsic properties of the detector. We demonstrate that the diamond sample measured has a significant non-homogeneity in its properties, with the sensitivity given by a linear model $ax+b$, where $a=0.60 \pm 0.16 V^{-1}mm^{-1}$ and $b=0.00 \pm 0.04 V^{-1}$. We also use this method to confirm an electron-to-hole mobility ratio of $1.5 \pm 1.0$ and an effective band gap of $1.8 eV$ rather than the theoretical $5.5 eV$, leading to a large sensitivity increase.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Wed, 01 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158754</guid>
<dc:date>2022-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A high-resolution neutron spectroscopic camera for the SPARC tokamak based on the JET European Torus Deuterium-Tritium experience</title>
<link>https://hdl.handle.net/1721.1/158753</link>
<description>A high-resolution neutron spectroscopic camera for the SPARC tokamak based on the JET European Torus Deuterium-Tritium experience
Tardocchi, M.; Rebai, M.; Rigamonti, D.; Tinguely, R. Alex; Caruggi, F.; Croci, G.; Dal Molin, A.; Ghani, Z.; Giacomelli, L.; Girolami, M.; Grosso, G.; Kushoro, M.; Marcer, G.; Mastellone, M.; Muraro, A.; Nocente, M.; Perelli Cippo, E.; Petruzzo, M.; Putignano, O.; Scionti, J.; Serpente, V.; Trucchi, D.M.; Mackie, S.; Saltos, A.A.; De Marchi, E.; Parisi, M.; Trotta, A.; de la Luna, E.; Garcia, J.; Kazakov, Y.; Maslov, M.; Stancar, Z.; Gorini, G.; JET contributors
Dedicated nuclear diagnostics have been designed, developed and built within EUROFUSION enhancement programs in the last ten years for installation at the Joint European Torus (JET), capable of operation in high-power Deuterium-Tritium (DT) plasmas. The recent DT experimental campaign, called DTE2, was successfully carried out in the second half of 2021 and provides a unique opportunity to evaluate the performance of the new nuclear diagnostics and to understand their behavior at the record-high 14 MeV neutron yields (up to 4.7×10^18 n/s) and total number of neutrons (up to 2×10^19 n) achieved on a tokamak. In this work we focus on the 14 MeV high-resolution neutron spectrometers based on artificial diamonds, which for the first time have been used extensively to measure 14 MeV DT neutron spectra with unprecedented energy resolution (FWHM of ~1% at 14 MeV). The work will describe their long-term stability and operation over the DTE2 campaign as well as their performance as neutron spectrometers in terms of achieved energy resolution and high-rate capability. This important experience will be used to outline the concept of a spectroscopic neutron camera for the SPARC tokamak. The proposed neutron camera will be the first one to feature the dual capability to measure i) the 2.5 and 14 MeV neutron emissivity profile, via conventional neutron detectors based on liquid or plastic scintillators, and ii) the 14 MeV neutron spectral emission via the use of high-resolution diamond-based spectrometers. The new opportunities opened by the spectroscopic neutron camera to measure plasma parameters will be discussed.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158753</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Response of CR-39 nuclear track detectors to protons with non-normal incidence</title>
<link>https://hdl.handle.net/1721.1/158752</link>
<description>Response of CR-39 nuclear track detectors to protons with non-normal incidence
Przybocki, Ryan; Gatu Johnson, Maria; Sutcliffe, G.; Lahmann, Brandon; Séguin, Frederick H.; Frenje, Johan A.; Adrian, Patrick J.; Johnson, Timothy M.; Pearcy, Jacob A.; Kabadi, Neel V.; Birkel, Andrew; Petrasso, Richard D.
This paper presents data from experiments with protons at non-normal incidence to CR-39 nuclear track detectors, analyzing the properties of detection efficiency, proton track diameter, track contrast, and track eccentricity. Understanding the CR-39 response to protons incident at an angle is important for designing charged particle detectors for inertial confinement fusion (ICF) applications. This study considers protons with incident energies less than 3 MeV. In this regime, an incident angle of 10° has no effect on CR-39 detection efficiency, and &gt;85% detection efficiency is preserved up through 25° in the range of 1.0 MeV–2.1 MeV. For ICF applications, incident angles above 30° are deemed impractical for detector design due to significant drops in proton detection at all energies. We observe significant reductions in detection efficiency compared to theoretical predictions, particularly at low energies where proton tracks are etched away. The proton track diameter measured by the scan system is observed to decrease with higher incident angles. The track diameters are analyzed with two fitting models, and it is shown that the diameter–energy relation can be fit with the existing models at angles up to 30°. The optical contrast of the tracks tends to increase with the angle, meaning that the tracks are fainter, and a larger increase is observed for higher energies. Eccentricity, a measure of how elongated proton tracks are, increases with the incident angle and drops after the critical angle. The lowest energy tracks remain nearly circular even at higher angles.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158752</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>EuroPED-NN: Uncertainty aware surrogate model</title>
<link>https://hdl.handle.net/1721.1/158751</link>
<description>EuroPED-NN: Uncertainty aware surrogate model
Panera Alvarez, A.; Ho, Aaron; Järvinen, A.; Saarelma, S.; Wiesen, S.; JET contributors; ASDEX Upgrade Team
This work successfully generates an uncertainty-aware surrogate model of the EuroPED plasma pedestal model using the Bayesian neural network with noise contrastive prior (BNN-NCP) technique. This model, EuroPED-NN, is trained using data from the JET-ILW pedestal database and subsequent model evaluations. The BNN-NCP technique has proven to be a suitable method for generating uncertainty-aware surrogate models. It matches the output results of a regular neural network while providing confidence estimates for predictions as uncertainties. Additionally, it highlights out-of-distribution (OOD) regions using surrogate model uncertainties. This provides critical insights into model robustness and reliability. EuroPED-NN has been physically validated, first by analyzing the electron density ne(ψ_pol = 0.94) with respect to increasing plasma current, Ip, and second by validating the Δ−β_p,ped relation associated with the EuroPED model. This affirms the robustness of the underlying physics learned by the surrogate model. In addition, the method was used to develop a EuroPED-like model fed with experimental data, i.e. an uncertainty-aware experimental model, which is functional in the JET database. Both models have also been tested in ~50 AUG shots.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158751</guid>
<dc:date>2024-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of the ion-optics for the MRSt neutron spectrometer at the National Ignition Facility (NIF)</title>
<link>https://hdl.handle.net/1721.1/158750</link>
<description>Design of the ion-optics for the MRSt neutron spectrometer at the National Ignition Facility (NIF)
Berg, G.P.A.; Frenje, Johan A.; Kunimune, Justin H.; Trosseille, C.A.; Couder, M.; Kilkenny, J.D.; Mackinnon, A.J.; Moore, A.S.; Waltz, C.S.; Wiescher, M.C.
A new Magnetic Recoil Spectrometer (MRSt) is designed to provide time-resolved measurements of the energy spectrum of neutrons emanating from an inertial confinement fusion implosion at the National Ignition Facility. At present, time-integrated parameters are measured using the existing magnetic recoil and neutron time-of-flight spectrometers. The high energy resolution of 2 keV and the extension to a time resolution of about 20 ps are expected to improve our understanding of the conditions required for successful fusion experiments. The layout, ion-optics, and specifications of the MRSt will be presented.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158750</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Top-level physics requirements and simulated performance of the MRSt on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158749</link>
<description>Top-level physics requirements and simulated performance of the MRSt on the National Ignition Facility
Kunimune, Justin H.; Frenje, Johan A.; Berg, G.P.A.; Trosseille, C.A.; Nora, R.C.; Waltz, C.S.; Moore, A.S.; Kilkenny, J.D.; Mackinnon, A.J.
The time-resolving Magnetic Recoil Spectrometer (MRSt) for the National Ignition Facility (NIF) has been identified by the US National Diagnostic Working Group as one of the transformational diagnostics that will reshape the way inertial confinement fusion (ICF) implosions are diagnosed. The MRSt will measure the time-resolved neutron spectrum of an implosion, from which the time-resolved ion temperature, areal density, and yield will be inferred. Top-level physics requirements for the MRSt were determined based on simulations of numerous ICF implosions with varying degrees of alpha heating, P2 asymmetry, and mix. Synthetic MRSt data were subsequently generated for different configurations using Monte Carlo methods to determine its performance in relation to the requirements. The system was found to meet most requirements at current neutron yields at the NIF. This work was supported by the DOE and LLNL.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158749</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Wendelstein 7-X phase contrast imaging diagnostic</title>
<link>https://hdl.handle.net/1721.1/158748</link>
<description>The Wendelstein 7-X phase contrast imaging diagnostic
Huang, Zhouji; Edlund, E.; Porkolab, Miklos; von Stechow, A.; Bähner, J-P.; Böttger, L.-G.; v. Sehren, C.; Grulke, O.
A phase contrast imaging (PCI) diagnostic has been developed for the Wendelstein 7-X (W7-X) stellarator. The PCI diagnostic provides line-integrated measurement of turbulent electron density fluctuations, which is essential for understanding and achieving high performance scenarios that can lead to improved confinement at fusion-relevant temperatures and densities. The PCI system is also sensitive to coherent fluctuations, which arise from Alfvén eigenmodes or other MHD activity. This paper provides an overview of the hardware and the optical system and presents an example of PCI measurement from the W7-X OP1.2b experimental campaign.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Wed, 01 Apr 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158748</guid>
<dc:date>2020-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>The conceptual design of 1-ps time resolution neutron detector for fusion reaction history measurement at OMEGA and the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158747</link>
<description>The conceptual design of 1-ps time resolution neutron detector for fusion reaction history measurement at OMEGA and the National Ignition Facility
Arikawa, Yasunobu; Ota, Masato; Nakajima, Makoto; Shimizu, Tomoki; Segawa, Sadashi; Phan, Thanh Nhat Khoa; Sakawa, Youichi; Abe, Yuki; Morace, Alessio; Mirfayzi, Seyed Reza; Yogo, Akifumi; Fujioka, Shinsuke; Nakai, Mitsuo; Shiraga, Hiroyuki; Azechi, Hiroshi; Kodama, Ryosuke; Kan, Koichi; Frenje, Johan A.; Gatu Johnson, Maria; Bose, Arijit; Kabadi, Neel V.; Sutcliffe, Graeme D.; Adrian, Patrick J.; Li, Chi-Kang; Séguin, Fredrick H.; Petrasso, Richard D.
The nuclear burn history provides critical information about the dynamics of the hot-spot formation and high-density fuel-shell assembly of an Inertial Confinement Fusion (ICF) implosion, as well as information on the impact of alpha heating, and a multitude of implosion failure mechanisms. Having this information is critical for assessing the energy-confinement time τE and performance of an implosion. As the confinement time of an ICF implosion is a few tens of picoseconds, less than 10-ps time resolution is required for an accurate measurement of the nuclear burn history. In this study, we propose a novel 1-ps time-resolution detection scheme based on the Pockels effect. In particular, conceptual designs for the experiments at the National Ignition Facility and OMEGA are elaborated upon herein. A small organic Pockels crystal “DAST” is designed to be positioned ∼5 mm from the ICF implosion, which is scanned by a chirped pulse generated by a femtosecond laser transmitted through a polarization-maintaining optical fiber. The originally linearly polarized laser is changed to an elliptically polarized laser by the Pockels crystal when exposed to neutrons, and the modulation of the polarization will be analyzed. Our study using 35-MeV electrons showed that the system impulse response is 0.6 ps. This response time is orders of magnitude shorter than that of current systems. Through measurements of the nuclear burn history with unprecedented time resolution, this system will enable a better understanding of the dynamics of the hot-spot formation, high-density fuel-shell assembly, and the physics of thermonuclear burn wave propagation.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158747</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Novel SOLPS-ITER simulations of X-point target and snowflake divertors</title>
<link>https://hdl.handle.net/1721.1/158746</link>
<description>Novel SOLPS-ITER simulations of X-point target and snowflake divertors
Cowley, C.; Kuang, Adam Q.; Moulton, D.; Lore, J.D.; Canik, J.; Umansky, M.; Wigram, Mike; Ballinger, S.; Lipschultz, B.
The design and understanding of alternative divertor configurations may be crucial for achieving acceptable steady-state heat and particle material loads for magnetic confinement fusion reactors. Multiple X-point alternative divertor geometries such as snowflakes and X-point targets have great potential in reducing power loads, but have not yet been simulated widely in codes with kinetic neutrals. This paper discusses recent changes made to the SOLPS-ITER code to allow for the simulation of X-point target and low-field side snowflake divertor geometries. Snowflake simulations using this method are presented, in addition to the first SOLPS-ITER simulation of the X-point target. Analysis of these results shows reasonable consistency with the simple modelling and theoretical predictions, supporting the validity of the methodology implemented.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158746</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a synthetic phase contrast imaging diagnostic for turbulence studies at Wendelstein 7-X</title>
<link>https://hdl.handle.net/1721.1/158745</link>
<description>Development of a synthetic phase contrast imaging diagnostic for turbulence studies at Wendelstein 7-X
Hansen, Soren K.; Porkolab, Miklos; Bähner, J.-P.; Huang, Z.; von Stechow, A.; Grulke, O.; Edlund, E.M.; Wilms, F.; Bañón Navarro, A.; Jenko, F.; Sánchez, E.
We present a synthetic phase contrast imaging (PCI) diagnostic for studying turbulence at the Wendelstein 7-X (W7-X) stellarator. We first describe the implemented instrument response model, which captures diffraction effects, detector noise, and the long-wavelength cutoff due to the phase plate of the PCI system. To verify the instrument response model, we show that it is capable of reproducing the PCI signal generated by the sound wave speaker used for calibration at W7-X. Next, we discuss the calculation of synthetic PCI signals based on the global, nonlinear gyrokinetic codes GENE-3D and EUTERPE, including results from some of the first stellarator simulations of this type with kinetic electrons (KEs) in GENE-3D. While the simulations used in this work lack a neoclassical radial electric field, which is crucial for reproducing experimental PCI signals, they do indicate that the dominant rotation direction and velocities of the turbulent fluctuations can be inferred from the wave number-frequency spectra of the PCI signals, as expected. The synthetic PCI wave number spectra are further shown to be similar to those of the line-integrated fluctuating electron density, with distinct differences between adiabatic and KE simulations, explainable by previously published turbulence models. For example, the wave number spectra of all adiabatic electron simulations analyzed here follow a power law with an exponent close to −5 for sufficiently large wave numbers. This indicates that universal features of electron density turbulence at W7-X may be studied using the PCI system.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Tue, 01 Mar 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158745</guid>
<dc:date>2022-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of alpha-ion stopping on ignition and ignition criteria in inertial confinement fusion experiments</title>
<link>https://hdl.handle.net/1721.1/158744</link>
<description>Effects of alpha-ion stopping on ignition and ignition criteria in inertial confinement fusion experiments
Reichelt, Benjamin L.; Petrasso, Richard D.; Li, Chi-Kang
With the advent of ignited plasmas at the National Ignition Facility (NIF), alpha physics has become a driving factor in theoretical understanding and experimental behavior. In this communication, we explore aspects of direct alpha-ion heating through comparison of the consequences from the one-fluid and two-fluid models in the hydrodynamic approach. We show that the case with all alpha energy deposited in electrons raises the ignition criteria by ~4 keV or ~0.2 g/cm2 in the hotspot relative to the case with all alpha energy deposited in ions. In the case of the recently ignited NIF implosion, 30% of the 3.5 MeV alpha energy is deposited into the DT fuel ions, for which there is negligible difference between the one-fluid and two-fluid ignition criteria. However, changes in the ion stopping fraction through profile effects and alternate stopping power models could lead to ignition curve shifts of ~1 keV.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158744</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Kinetic theory of parametric decay instabilities near the upper hybrid resonance in plasmas</title>
<link>https://hdl.handle.net/1721.1/158743</link>
<description>Kinetic theory of parametric decay instabilities near the upper hybrid resonance in plasmas
Han, Jiangyue; Gao, Zhe; Hansen, Soren K.
Parametric decay instabilities (PDIs) near the upper hybrid resonance layer are studied with a 1D framework. In a uniform plasma, the kinetic nonlinear dispersion relation of PDI is numerically calculated for parameters corresponding to electron cyclotron heating experiments at the ASDEX-U tokamak, in which O-mode radiation was converted to X-mode radiation by reflection from the high-field sidewall. The forward scattering processes driven by X-mode and linearly converted electron Bernstein waves (EBWs) are investigated and found to lead to a primary PDI where the pump waves decay into lower hybrid waves and sideband EBWs. A frequency shift of 930 MHz is obtained for the sideband EBWs in the primary PDIs. Subsequently, the sideband EBWs can decay into a low-frequency ion Bernstein quasi-mode (IBQM) and a secondary EBW, where the dominant forward scattering channel is the first-order IBQM with a frequency close to twice the ion cyclotron frequency. The decay channels obtained by numerical calculation can explain the characteristics of the signal observed in ASDEX-U experiments. The threshold of the pump electric field strength required to excite the primary PDI in the presence of plasma inhomogeneity is also estimated.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158743</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating Boosted Decision Trees as a Guide for Inertial Confinement Fusion Design</title>
<link>https://hdl.handle.net/1721.1/158742</link>
<description>Investigating Boosted Decision Trees as a Guide for Inertial Confinement Fusion Design
Maris, Andrew D.; Khan, Shahab F.; Pokornik, Michael M.; Peterson, J. Luc; Humbird, Kelli D.; Haan, Steven W.
Inertially confined fusion experiments at the National Ignition Facility have recently entered a new regime approaching ignition. Improved modelling and exploration of the experimental parameter space were essential to deepening our understanding of the mechanisms that degrade and amplify the neutron yield. The growing prevalence of machine learning in fusion studies opens a new avenue for investigation. In this paper, we have applied the Gradient Boosted Decision Tree (GBDT) machine learning architecture to further explore the parameter space and find correlations with the neutron yield, a key performance indicator. We find reasonable agreement between the measured and predicted yield, with a mean absolute percentage error on a randomly assigned test set of 35.5%. This model finds the characteristics of the laser pulse to be the most influential in prediction, as well as the hohlraum opening size and the new capsule fabrication technique. We used the trained model to scan over the design space of experiments from three different campaigns to evaluate the potential of this technique to provide design changes that could improve the resulting neutron yield. While this data-driven model cannot predict ignition without examples of ignited shots in the training set, it can be used to indicate that an unseen shot design will at least be in the upper range of previously observed neutron yields.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158742</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mitigation of mode-one asymmetry in laser-direct-drive inertial confinement fusion implosions</title>
<link>https://hdl.handle.net/1721.1/158741</link>
<description>Mitigation of mode-one asymmetry in laser-direct-drive inertial confinement fusion implosions
Mannion, O.M.; Igumenshchev, I.V.; Anderson, K.S.; Betti, R.; Campbell, E.M.; Cao, D.; Forrest, C.J.; Gatu Johnson, Maria; Glebov, V.Yu.; Goncharov, V.N.; Gopalaswamy, V.; Ivancic, S.T.; Jacobs-Perkins, D.W.; Kalb, A.; Knauer, J.P.; Kwiatkowski, J.; Lees, A.; Marshall, F.J.; Michalko, M.; Mohamed, Z.L.; Patel, D.; Rinderknecht, H.G.; Shah, R.C.; Stoeckl, C.; Theobald, W.; Woo, K.M.; Regan, S.P.
Nonuniformities present in the laser illumination and target in laser-driven inertial confinement fusion experiments lead to an asymmetric compression of the target, resulting in an inefficient conversion of shell kinetic energy to thermal energy of the hot-spot plasma. In this paper, the effects of asymmetric compression of cryogenic deuterium tritium laser-direct-drive implosions are examined using a suite of nuclear and x-ray diagnostics on the OMEGA laser. The neutron-averaged hot-spot velocity (u_hs) and apparent ion temperature (Ti) asymmetry are determined from neutron time-of-flight measurements of the primary deuterium tritium fusion neutron energy spectrum, while the areal density (ρR) of the compressed fuel surrounding the hot spot is inferred from measurements of the scattered neutron energy spectrum. The low-mode perturbations of the hot-spot shape are characterized from x-ray self-emission images recorded along three quasi-orthogonal lines of sight. Implosions with significant mode-1 laser drive asymmetries show large hot-spot velocities (&gt;100 km/s) in a direction consistent with the hot-spot elongation observed in x-ray images, measured Ti asymmetry, and ρR asymmetry. Laser drive corrections have been applied through shifting the initial target location in order to mitigate the observed asymmetry. With the asymmetry corrected, a more-symmetric hot spot is observed with reduced u_hs, Ti asymmetry, ρR asymmetry, and a 30% increase in the fusion yield.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158741</guid>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Validation of IMEP on Alcator C-Mod and JET-ILW ELMy H-mode plasmas</title>
<link>https://hdl.handle.net/1721.1/158740</link>
<description>Validation of IMEP on Alcator C-Mod and JET-ILW ELMy H-mode plasmas
Luda, T.; Angioni, C.; Dunne, M.G.; Fable, E.; Kallenbach, A.; Bonanomi, N.; Schneider, P.A.; Siccinio, M.; Tardini, G.; ASDEX Upgrade Team; EUROfusion MST1 Team; Rodriguez Fernandez, Pablo; Hughes, Jerry W.; Howard, Nathan T.; Alcator C-Mod Team; Frassinetti, L.; Saarelma, S.; JET contributors
The recently developed integrated model based on engineering parameters (IMEP) (Luda et al 2020 Nucl. Fusion 61 126048; Luda et al 2021 Nucl. Fusion 60 036023), so far validated on ASDEX Upgrade, has been tested on a database of 3 Alcator C-Mod and 55 JET-ILW ELMy (type I) H-mode stationary phases. The empirical pedestal transport model included in IMEP, consisting now of imposing a fixed value of R⟨∇Te⟩/Te,top = −82.5, allows an accurate prediction of the pedestal top temperature (when the pedestal top density is fixed to the experimental measurements) across these three machines with different sizes, when the pedestal is peeling–ballooning (PB) limited. Cases far from the ideal PB boundary, corresponding to high edge Spitzer resistivity, are instead strongly overpredicted by IMEP. A comparison between the predictions of Europed and IMEP for a subset of JET-ILW cases shows that IMEP can more accurately reproduce the experimental pedestal width. This allows IMEP to better capture profile effects on the pedestal stability, and therefore to correctly describe the negative effect of fueling on the pedestal pressure for PB limited cases. A strong correlation between the separatrix density and the fueling rate has been identified for a subset of JET-ILW cases, when taking into account different divertor configurations. Overall, these promising results encourage further developments of integrated models to obtain reliable predictions of pedestal and global confinement using only engineering parameters for present and future machines.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158740</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnosing hot-spot symmetry in surrogate ignition experiments via secondary DT-neutron spectroscopy at the NIF</title>
<link>https://hdl.handle.net/1721.1/158739</link>
<description>Diagnosing hot-spot symmetry in surrogate ignition experiments via secondary DT-neutron spectroscopy at the NIF
Adrian, Patrick J.; Bionta, R.; Casey, D.; Gatu Johnson, Maria; Kerr, S.; Lahmann, Brandon; Li, Chi-Kang; Nora, R.; Petrasso, Richard D.; Rigon, G.; Schlossberg, D.; Séguin, Frederick H.; Frenje, Johan A.
The directional energy spectrum of neutrons generated from the in-flight fusion reaction of 1-MeV tritons contains information about the hot-spot symmetry. The National Ignition Facility (NIF) fields Symmetry Capsule (Symcap) implosions, which have historically measured the symmetry of the radiation drive by measuring the hot-spot shape via x-ray self-emission. Symcaps are used to tune the hot-spot symmetry for ignition experiments at the NIF. This work shows the relationship between directional secondary DT-n spectra and x-ray imaging data for a large database of Symcap implosions. A correlation is observed between the relative widths of the DT-n spectra measured with nTOFs and the shape measured with x-ray imaging. A Monte Carlo model, which computes the directional secondary DT-n spectrum, is used to interpret the results. A comparison of the x-ray and secondary DT-n data with the Monte Carlo model indicates that 56% of the variance between the two datasets is explained by a P2 asymmetry. More advanced simulations using HYDRA suggest that the unaccounted variance is due to P1 and P4 asymmetries present in the hot spot. The comparison of secondary DT-n data and x-ray imaging data to the modeling shows the DT-n data contain important information that supplements current P2 measurements and contain new information about the P1 asymmetry.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158739</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Further Rotation Reversal Studies in C-Mod L-mode Plasmas</title>
<link>https://hdl.handle.net/1721.1/158738</link>
<description>Further Rotation Reversal Studies in C-Mod L-mode Plasmas
Rice, John E.; Cao, N.M.; Diamond, P.H.; Greenwald, M.J.; Hubbard, Amanda E.; Marmar, E.S.; Reinke, M.L.; Rodriguez-Fernandez, P.
Studies of core toroidal rotation reversal phenomenology in C-Mod deuterium L-mode plasmas have been expanded to include details of the dependences on plasma current and toroidal magnetic field. Rotation reversal occurs at a critical density, and a universal scaling indicates that the product n_crit q_95 R ~ B_T/2, with n_crit in 10^20/m^3, R in m and B_T in T. Measurements in H and He plasmas exhibit similar behavior, including a connection with the LOC/SOC transition and the cut-off for non-diffusive heat transport. Electron density and ICRF power modulation experiments suggest that the collisionality nu_* is a unifying parameter. Strong impurity puffing causes the critical density to increase, indicating that the situation is more complicated than only collisionality, perhaps involving the details of the effects of dilution on ITG mode stability.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158738</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of improved stability to achieve higher fuel compression in ICF</title>
<link>https://hdl.handle.net/1721.1/158737</link>
<description>Measurements of improved stability to achieve higher fuel compression in ICF
Do, A.; Casey, D.T.; Clark, D.S.; Bachman, B.; Baker, K.L.; Braun, T.; Briggs, T.M.; Chapman, T.D.; Celliers, P.M.; Chen, H.; Choate, C.; Dewald, E.L.; Divol, L.; Fathi, G.; Fittinghoff, D.N.; Hall, G.N.; Hartouni, E.; Holunga, D.M.; Khan, S.F.; Kritcher, A.L.; Landen, O.L.; MacPhee, A.G.; Millot, M.; Marley, E.V.; Milovich, J.L.; Nikroo, A.; Pak, A.E.; Schlossberg, D.J.; Smalyuk, V.A.; Stadermann, M.; Strozzi, D.J.; Tommasini, R.; Weber, C.R.; Woodworth, B.N.; Yanagisawa, D.K.; Birge, N.W.; Danly, C.R.; Durocher, M.; Freeman, M.S.; Geppert-Kleinrath, H.; Geppert-Kleinrath, V.; Kim, Y.; Meaney, K.D.; Wilde, C.H.; Gatu Johnson, Maria; Allen, A.; Ratledge, M.; Kong, C.; Fehrenbach, T.; Wild, C.
While nuclear fusion ignition has been achieved at the National Ignition Facility (NIF) in inertial confinement fusion (ICF) experiments, obtaining higher gain and more efficient burn is still desired. In that regard, increasing the compression of the fuel is an important factor. In recent indirect-drive capsule implosions, the SQ-n campaign is testing the hypothesis that reducing the hydrodynamic growth of perturbations is key to achieving higher compression of high-density carbon (HDC)-based ablators for ICF. SQ-n uses a design at lower adiabat with a ramped foot laser pulse shape to minimize early-time hydrodynamic instability growth, predicted to be reduced by a factor of 10, and an optimized ablator dopant distribution. Subsets of experiments were conducted within the SQ-n campaign to study the implosion symmetry, laser backscatter, stability, and compression. Only the latter two will be reviewed here. Shock timing experiments using the VISAR diagnostic enabled the development of a gently accelerating shock velocity. The ice-ablator interface acceleration, important for managing the Richtmyer-Meshkov phase growth, was observed with refraction enhanced radiography (RER) and the ablation front growth was measured using radiography of pre-imposed modulations. Finally, layered THD and DT implosions demonstrate that compression improvements of between 15%±3% and 30%±6% have been achieved.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158737</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental Study of the Edge Radial Electric Field in Different Drift Configurations and its Role in the Access to H-mode at ASDEX Upgrade</title>
<link>https://hdl.handle.net/1721.1/158736</link>
<description>Experimental Study of the Edge Radial Electric Field in Different Drift Configurations and its Role in the Access to H-mode at ASDEX Upgrade
Plank, U.; Brida, D.; Conway, G.D.; Happel, T.; Hubbard, Amanda E.; Pütterich, T.; Angioni, C.; Cavedon, M.; Dux, R.; Eich, T.; Fischer, R.; Hennequin, P.; ASDEX Upgrade Team
The formation of the equilibrium radial electric field (Er) has been studied experimentally at ASDEX Upgrade (AUG) in L-modes of ’favourable’ (ion ∇B-drift towards primary X-point) and ’unfavourable’ (ion ∇B-drift away from primary X-point) drift configuration, in view of its impact on H-mode access, which changes with drift configuration. Edge electron and ion kinetic profiles, impurity velocity and mean-field Er profiles across the separatrix are investigated, employing new and improved measurement techniques. The experimental results are compared to local neoclassical theory as well as to a simple 1D scrape-off layer (SOL) model. It is found that in L-modes of matched heating power and plasma density the upstream SOL Er and the main ion pressure gradient in the plasma edge are the same for either drift configuration, whereas the Er well in the confined plasma is shallower in unfavourable compared to favourable drift configuration. The contributions of toroidal and poloidal main ion flows to Er, which are inferred from local neoclassical theory and the experiment, cannot account for these observed differences. Furthermore, it is found that in L-mode the intrinsic toroidal edge rotation decreases with increasing collisionality and it is co-current in the banana-plateau regime for all different drift configurations at AUG. This gives rise to a possible interaction of parallel Pfirsch-Schlüter flows in the SOL with the confined plasma. Thus, the different H-mode power threshold for the two drift configurations cannot be explained in the same way at AUG as suggested by LaBombard et al. for Alcator C-Mod [1]. Finally, comparisons of Er profiles in favourable and unfavourable drift configuration at the respective confinement transitions show that also there the Er gradients are all different, which indirectly indicates a different type or strength of the characteristic edge turbulence in the two drift configurations.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158736</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inferences of hot electron preheat and its spatial distribution in OMEGA direct drive implosions</title>
<link>https://hdl.handle.net/1721.1/158735</link>
<description>Inferences of hot electron preheat and its spatial distribution in OMEGA direct drive implosions
Christopherson, A.R.; Betti, R.; Forrest, C.J.; Howard, J.; Theobald, W.; Campbell, E.M.; Delettrez, J.; Rosenberg, M.J.; Solodov, A.; Stoeckl, C.; Patel, D.; Gopalaswamy, V.; Cao, D.; Peebles, J.; Edgell, D.; Seka, W.; Epstein, R.; Scullin, W.; Radha, P.B.; Wei, M.S.; Regan, S.P.; Gatu Johnson, Maria; Simpson, R.
Hot electrons generated from laser plasma instabilities degrade the performance of direct-drive implosions by preheating the deuterium and tritium (DT) fuel, resulting in early decompression and lower areal densities at stagnation. A technique to quantify the hot electron preheat of the dense DT fuel and connect it to the degradation in areal density is described in detail. Hot electrons are measured primarily from the hard x-rays they emit as they slow down in the target. The DT preheat is inferred from a comparison of the hard x-ray signals between a DT-layered implosion and its mass equivalent ablator-only implosion. The preheat energy spatial distribution within the imploding shell is inferred from experiments using high Z payloads of varying thicknesses. It is found that the electrons deposit their energy uniformly throughout the shell material. For typical direct-drive OMEGA implosions driven with an overlapped intensity of ∼9·10^14 W/cm2, approximately 0.02%–0.03% of the laser energy is converted into preheat of the stagnated fuel, which corresponds to areal density degradations of 10%–20%. The degradations in areal density explain some of the observed discrepancies between the simulated and measured areal densities.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158735</guid>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling hydrodynamics, magnetic fields and synthetic radiographs for high-energy-density plasma flows in shock-shear targets</title>
<link>https://hdl.handle.net/1721.1/158734</link>
<description>Modeling hydrodynamics, magnetic fields and synthetic radiographs for high-energy-density plasma flows in shock-shear targets
Lu, Yingchao; Li, Shengtai; Li, Hui; Flippo, Kirk A.; Barnak, Dan; Birkel, Andrew; Lahmann, Brandon; Li, Chi-Kang; Rasmus, Alexander M.; Kelso, Kwyntero; Zylstra, Alex; Liang, Edison; Tzeferacos, Petros; Lamb, Don
Three-dimensional FLASH radiation-magnetohydrodynamics (radiation-MHD) modeling is carried out to study the hydrodynamics and magnetic fields in the shock-shear derived platform. Simulations indicate that fields of tens of Tesla can be generated via Biermann battery effect due to vortices and mix in the counter-propagating shock-induced shear layer. Synthetic proton radiography simulations using MPRAD and synthetic X-ray image simulations using SPECT3D are carried out to predict the observable features in the diagnostics. Quantifying the effects of magnetic fields in inertial confinement fusion (ICF) and high-energy-density (HED) plasmas represents frontier research that has far-reaching implications in basic and applied sciences.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Tue, 01 Oct 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158734</guid>
<dc:date>2019-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Edge turbulence measurements in L-mode and I-mode at ASDEX Upgrade</title>
<link>https://hdl.handle.net/1721.1/158733</link>
<description>Edge turbulence measurements in L-mode and I-mode at ASDEX Upgrade
Bielajew, R.; Conway, G.D.; Griener, M.; Happel, T.; Höfler, K.; Howard, Nathan T.; Hubbard, Amanda E.; McCarthy, William; Molina Cabrera, Pedro A.; Nishizawa, T.; Rodriguez-Fernandez, P.; Silvagni, D.; Vanovac, B.; Wendler, D.; Yoo, C.; White, Anne E.; The ASDEX Upgrade Team
The I-mode confinement regime is promising for future reactor operation due to high energy confinement without high particle confinement. However, the role of edge turbulence in creating I-mode's beneficial transport properties is still unknown. New measurements of edge turbulence in L-modes and I-modes at low and high densities at ASDEX Upgrade are presented in this paper. A high radial resolution correlation electron cyclotron emission radiometer measures the broadband turbulence throughout the L-mode and I-mode edge and pedestal. The weakly coherent mode (WCM) is measured in both L-mode and I-mode near the last closed flux surface with Te fluctuation levels of 2.3%–4.2%, with a frequency shift between the two phases related to a deeper Er well in I-mode. An nT phase diagnostic captures a change of the WCM nT phase between L-mode and I-mode. The thermal He beam diagnostic measures a WCM wavenumber range of −0.5 to −1.0 cm^-1. A low-frequency edge oscillation (LFEO) appears in the I-mode phase of these discharges and displays coupling to the WCM, but the LFEO does not appear in the L-mode phase. Linear gyrokinetic simulations of the outer core and pedestal top turbulence indicate that while the dominant turbulent modes in the outer core are ion directed and electrostatic, the turbulence becomes increasingly electron directed and electromagnetic with increasing radius. Collisionality is not found to impact characteristics of the L-mode and I-mode edge turbulence with respect to the presence of the WCM; however, the quality of global confinement decreases with collisionality.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158733</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling of laser-driven electron and proton acceleration as a function of laser pulse duration, energy, and intensity in the multi-picosecond regime</title>
<link>https://hdl.handle.net/1721.1/158732</link>
<description>Scaling of laser-driven electron and proton acceleration as a function of laser pulse duration, energy, and intensity in the multi-picosecond regime
Simpson, R.A.; Scott, G.G.; Mariscal, D.; Rusby, D.; King, P.M.; Grace, E.; Aghedo, A.; Pagano, I.; Sinclair, M.; Armstrong, C.; Manuel, M. J.-E.; Haid, A.; Flippo, K.; Winslow, L.; Gatu Johnson, Maria; Frenje, Johan A.; Neely, D.; Kerr, S.; Williams, G.J.; Andrews, S.; Cauble, R.; Charron, K.; Costa, R.; Fischer, B.; Maricle, S.; Stuart, B.; Albert, F.; Lemos, N.; Mackinnon, A.; MacPhee, A.; Pak, A.; Ma, T.
A scaling study of short-pulse laser-driven proton and electron acceleration was conducted as a function of pulse duration, laser energy, and laser intensity in the multi-picosecond (ps) regime (∼0.8 ps–20 ps). Maximum proton energies significantly greater than established scaling laws were observed, consistent with observations at other multi-ps laser facilities. In addition, maximum proton energies and electron temperatures in this regime were found to be strongly dependent on the laser pulse duration and preplasma conditions. A modified proton scaling model is presented that is able to better represent the accelerated proton characteristics in this multi-ps regime.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158732</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of F3+ ion implantation on the properties of W and W0.5(TaTiVCr)0.5 for depth marker-based plasma erosion analysis</title>
<link>https://hdl.handle.net/1721.1/158731</link>
<description>Effects of F3+ ion implantation on the properties of W and W0.5(TaTiVCr)0.5 for depth marker-based plasma erosion analysis
Waseem, Owais Ahmed; Woller, Kevin Benjamin; Sweidan, Faris Bassam; JinRyu, Ho
The irradiation resistance of tungsten (W) and a high-entropy alloy-based material W0.5(TaTiVCr)0.5 was analysed using depth marker implantation (F3+ ion irradiation). Mirror-polished W and W0.5(TaTiVCr)0.5 samples were exposed to 5.0 MeV and 4.2 MeV F3+ ions, respectively, up to a maximum fluence of 3.2x10^12 ions/cm^2. Scanning electron and atomic force microscopy of implanted W showed nanostructure and pinholes, respectively, whereas the surface of implanted W0.5(TaTiVCr)0.5 remained fairly smooth. The nanoindentation hardness of W and W0.5(TaTiVCr)0.5 increased from 6.6 GPa to 8.5 GPa and from 13.9 GPa to 16.3 GPa, respectively, due to implantation. The ion implantation induced lattice defects and compressive stress; as a result, the BCC peaks of W and W0.5(TaTiVCr)0.5 moved to higher Bragg angles. The irradiation-induced strain in W0.5(TaTiVCr)0.5 (4.4x10^-4) remained lower than that in pure W (8.5x10^-4). The comparison of W and W0.5(TaTiVCr)0.5 suggested the higher resistance of W0.5(TaTiVCr)0.5 to high-energy ion implantation.
Submitted for publication in Nuclear Materials and Energy
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158731</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summary of the IAEA technical meeting on plasma disruptions and their mitigation</title>
<link>https://hdl.handle.net/1721.1/158730</link>
<description>Summary of the IAEA technical meeting on plasma disruptions and their mitigation
Bandyopadhyay, Indranil; Barbarino, Matteo; Bhattacharjee, Amitava; Eidietis, Nicholas; Huber, Alexander; Isayama, Akihiko; Kim, Jayhyun; Konovalov, Sergey; Lehnen, Michael; Nardon, Eric; Pautasso, Gabriella; Rea, Cristina; Sozzi, Carlo; Villone, Fabio; Zeng, Long
This report summarizes the contributions presented at the IAEA technical meeting on plasma disruptions and their mitigation, held virtually, 20–23 July 2020. The meeting brought together more than 120 experts from nuclear fusion research sites worldwide to discuss experimental, theoretical and modelling work in the field of plasma disruptions with special emphasis on developing a solid basis for possible disruption mitigation strategies in ITER and next generation fusion devices. The main topics of the meeting were: (i) disruption consequences, including electromagnetic loads, heat loads, and runaway electrons; (ii) disruption prediction and avoidance, including machine learning and physics-based approaches, and control aspects; and (iii) disruption mitigation, including shattered pellet injection, alternative techniques and general aspects of disruption mitigation.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158730</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scoping study of lower hybrid current drive for CFETR</title>
<link>https://hdl.handle.net/1721.1/158729</link>
<description>Scoping study of lower hybrid current drive for CFETR
Wallace, Greg M.; Ding, B.J.; Li, M.H.; Chen, J.; Baek, Seung Gyou; Bonoli, Paul T.; Shiraiwa, S.; Liu, L.; Wu, C.B.
The paper assesses the applicability of lower hybrid current drive (LHCD) for two potential operating scenarios for the China Fusion Engineering Test Reactor (CFETR): the “hybrid” scenario, in which some of the plasma current is sustained by the Ohmic transformer, and the fully non-inductive “steady state” scenario. The πScope workflow engine was used to set up a large number of ray tracing/Fokker-Planck simulations (&gt; 10^4) with parametric scans in the antenna poloidal position and launched parallel refractive index (n||) for both the hybrid and steady state scenarios. Modeling predicts efficient off-axis current drive (1.3 MA for 20 MW launched power) with a peak near ρ of 0.6-0.65 for waves launched from the high field side (HFS). Waves launched from the low field side (LFS) damp at larger radius (ρ &gt; 0.73) with similar efficiency to HFS launch. Stability analysis of the CFETR scenarios favors current drive profiles peaked near the mid-radius, suggesting that HFS launch is preferable due to the current drive location. The effect of wave scattering from density blobs in the edge/scrape-off-layer region was assessed through rotation of the perpendicular wavenumber at the ray origin. Simulations show that this effect can be quite large in both efficiency and damping location; however, by adjusting the launched n||, much of the unperturbed performance can be recovered.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158729</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid deep learning architecture for general disruption prediction across tokamaks</title>
<link>https://hdl.handle.net/1721.1/158728</link>
<description>Hybrid deep learning architecture for general disruption prediction across tokamaks
Zhu, Jinxiang; Rea, Cristina; Montes, Kevin J.; Granetz, R.S.; Sweeney, Ryan; Tinguely, R. Alex
In this paper, we present a new deep learning disruption prediction algorithm based on important findings from explorative data analysis which effectively allows knowledge transfer from existing devices to new ones, thereby predicting disruptions using very limited disruptive data from the new devices. The explorative data analysis conducted via unsupervised clustering techniques confirms that time-sequence data are much better separators of disruptive and non-disruptive behavior than the instantaneous plasma state data, with further advantageous implications for a sequence-based predictor. Based on such important findings, we have designed a new algorithm for multi-machine disruption prediction that achieves high predictive accuracy on the C-Mod (AUC=0.801), DIII-D (AUC=0.947) and EAST (AUC=0.973) tokamaks with limited hyperparameter tuning. Through numerical experiments, we show that boosted accuracy (AUC=0.959) is achieved on EAST predictions by including in the training only 20 disruptive discharges, thousands of non-disruptive discharges from EAST, and combining this with more than a thousand discharges from DIII-D and C-Mod. The improvement of predictive ability obtained by combining disruptive data from other devices is found to be true for all permutations of the three devices. Furthermore, by comparing the predictive performance of each individual numerical experiment, we find that non-disruptive data are machine-specific while disruptive data from multiple devices contain device-independent knowledge that can be used to inform predictions for disruptions occurring on a new device.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158728</guid>
<dc:date>2020-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the very high energy confinement observed in super H-mode DIII-D experiments</title>
<link>https://hdl.handle.net/1721.1/158727</link>
<description>On the very high energy confinement observed in super H-mode DIII-D experiments
Ding, S.; Garofalo, A.M.; Knolker, M.; Marinoni, Alessandro; McClenaghan, J.; Grierson, B.A.
Analysis of recent super H-mode experiments on DIII-D shows that high rotation, not high pedestal, plays the essential role in achieving very high confinement H98y2 &gt; 1.5. Very high confinement is reached early on in the H-mode phase of these discharges, when the pedestal is still very low, but after the toroidal rotation has already built up to very high levels in the core. As the discharge evolves, the rotation drops, and so does the energy confinement, despite a sustained very high pressure pedestal. During this evolution, the confinement quality is linearly correlated with the core toroidal rotation, which varies according to different levels of injected neutral beam torque per particle. Core transport modeling shows that the contribution from rotation in the E×B shear is responsible for confinement quality significantly in excess of standard H-mode (H98y2 ∼ 1).
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158727</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Neoclassical transport in strong gradient regions of large aspect ratio tokamaks</title>
<link>https://hdl.handle.net/1721.1/158726</link>
<description>Neoclassical transport in strong gradient regions of large aspect ratio tokamaks
Trinczek, Silvia; Parra, Felix I.; Catto, Peter J.; Calvo, Iván; Landreman, Matt
We present a new neoclassical transport model for large aspect ratio tokamaks where the gradient scale lengths are of the size of the poloidal gyroradius. Previous work on neoclassical transport across transport barriers assumed large density and potential gradients but a small temperature gradient, or neglected the gradient of the mean parallel flow. Using large aspect ratio and low collisionality expansions, we relax these restrictive assumptions. We define a new set of variables based on conserved quantities, which simplifies the drift kinetic equation whilst keeping strong gradients, and derive equations describing the transport of particles, parallel momentum and energy by ions in the banana regime. The poloidally varying parts of density and electric potential are included. Studying contributions from both passing and trapped particles, we show that the resulting transport is dominated by trapped particles. We find that a non-zero neoclassical particle flux requires parallel momentum input which could be provided through interaction with turbulence or impurities. We derive upper and lower bounds for the energy flux across a transport barrier in both temperature and density and present example profiles and fluxes.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158726</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reimagining full wave rf quasilinear theory in a tokamak</title>
<link>https://hdl.handle.net/1721.1/158725</link>
<description>Reimagining full wave rf quasilinear theory in a tokamak
Catto, Peter J.; Tolman, Elizabeth A.
The velocity dependent resonant interaction of particles with applied radio frequency (rf) waves during heating and current drive in the presence of pitch angle scattering collisions gives rise to narrow collisional velocity space boundary layers that dramatically enhance the role of collisions as recently shown by Catto (J. Plasma Phys., vol. 86, 815860302, 2020). The behavior is a generalization of the narrow collisional boundary layer that forms during Landau damping as found by Johnston (Phys. Fluids, vol. 14, 1971, pp. 2719-2726) and Auerbach (Phys. Fluids, vol. 20, 1977, pp. 1836-1844). For a wave of parallel wave number k|| interacting with weakly collisional plasma species of collision frequency ν and thermal speed v_th, the effective collision frequency becomes of order ν(k_||v_th/ν)^(2/3) &gt;&gt; ν. The narrow boundary layers that arise because of the diffusive nature of the collisions allow a physically meaningful wave-particle interaction time to be defined that is the inverse of this effective collision frequency. The collisionality implied by the narrow boundary layer results in changes in the standard quasilinear treatment of applied rf fields in tokamaks while remaining consistent with causality. These changes occur because successive poloidal interactions with the rf are correlated in tokamak geometry and because the resonant velocity space dependent interactions are controlled by the spatial and temporal behavior of the perturbed full wave fields rather than just the spatially local Landau and Doppler shifted cyclotron wave-particle resonance condition associated with unperturbed motion of the particles. The correlation of successive poloidal circuits of the tokamak leads to the appearance in the quasilinear operator of transit averaged resonance conditions localized in velocity space boundary layers that maintain negative definite entropy production.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158725</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Contamination of Argon X-ray Spectra by Tungsten and Other Elements Commonly Found in Tokamaks</title>
<link>https://hdl.handle.net/1721.1/158724</link>
<description>Contamination of Argon X-ray Spectra by Tungsten and Other Elements Commonly Found in Tokamaks
Rice, John E.; Gu, M.; Cao, N.M.; Hughes, Jerry W.; Reinke, M.L.; Sertoli, M.; Vezinet, D.
Emission lines which appear in the spectral ranges of ground state transitions from n = 2 levels in He- and H-like argon ions are discussed. X-ray transitions from elements commonly found in tokamaks (tungsten, molybdenum, iron and sulphur) which radiate in the wavelength range from 3700 to 4000 mÅ are identified by comparison with atomic structure calculations. Individual lines from tungsten charge states in the vicinity of Zn-like W^44+ are documented, along with B-like Mo^37+. The behavior of line ratios as a function of electron temperature is examined, in support of the identifications.
Submitted for publication in Journal of Physics B: Atomic, Molecular and Optical Physics
</description>
<pubDate>Sun, 01 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158724</guid>
<dc:date>2020-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qubit Lattice Algorithms Based on the Schrodinger-Dirac Representation of Maxwell Equations and Their Extensions</title>
<link>https://hdl.handle.net/1721.1/158723</link>
<description>Qubit Lattice Algorithms Based on the Schrodinger-Dirac Representation of Maxwell Equations and Their Extensions
Vahala, George; Soe, Min; Kououtsis, Efstratios; Hizanidis, Kyriakos; Vahala, Linda; Ram, Abhay K.
It is well known that Maxwell equations can be expressed in a unitary Schrodinger-Dirac representation for homogeneous media. However, difficulties arise when considering inhomogeneous media. A Dyson map points to a unitary field qubit basis, but the standard qubit lattice algorithm of interleaved unitary collision-stream operators must be augmented by some sparse non-unitary potential operators that recover the derivatives on the refractive indices. The effect of the steepness of these derivatives on two-dimensional scattering is examined with simulations showing quite complex wavefronts emitted due to transmissions/reflections within the dielectric objects. Maxwell equations are extended to handle dissipation using Kraus operators. Then, our theoretical algorithms are extended to these open quantum systems. A quantum circuit diagram is presented as well as estimates on the required number of quantum gates for implementation on a quantum computer.
Submitted for publication in IntechOpen
</description>
<pubDate>Sat, 01 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158723</guid>
<dc:date>2023-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>First-Cut Design of a Benchtop Cryogen-Free 23.5-T/25-mm Magnet for 1-GHz Microcoil NMR</title>
<link>https://hdl.handle.net/1721.1/158722</link>
<description>First-Cut Design of a Benchtop Cryogen-Free 23.5-T/25-mm Magnet for 1-GHz Microcoil NMR
Park, Dongkeun; Dong, Fangliang; Lee, Wooseung; Bascuñán, Juan; Iwasa, Yukikazu
As a preliminary work, we have completed a 12.5-mm-cold-bore high-temperature superconducting (HTS) REBCO magnet prototype and successfully operated it up to 25 T at 10 K cooled by a cryocooler only, without liquid helium. In this paper we present the first-cut design of a cryogen-free all-REBCO 23.5-T/25-mm-warm-bore magnet having a high homogeneity of &lt;0.1 ppm over a 1-cm diameter of spherical volume for a benchtop 1-GHz microcoil NMR spectroscopy. We also investigate a shielding design to reduce a 5-gauss fringe field radius to ≤1.5 m. This benchtop magnet will incorporate all the innovative design and operation concepts validated by the prototype magnet: 1) all-HTS composition and operation at above 4.2 K; 2) no-insulation winding technique with an extra shunting that makes this high-field REBCO magnet compact, mechanically robust, and self-protecting; 3) a single coil formation that leads, compared with the traditional multi-nested high-field NMR magnet, to simpler and more affordable manufacturing processes; 4) operational temperature-controlled screening-current reduction method which reduces peak stresses within the REBCO coil and field errors; and 5) cryogenic design for conduction-cooling operation.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158722</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>An MgB2 Superconducting Joint with its own Heat-Treatment Schedule</title>
<link>https://hdl.handle.net/1721.1/158721</link>
<description>An MgB2 Superconducting Joint with its own Heat-Treatment Schedule
Tanaka, Hiromi; Li, Yi; Choi, Yoonhyuck; Park, Dongkeun; Lee, Wooseung; Tanaka, Hideki; Bascuñán, Juan; Iwasa, Yukikazu
We suggested an MgB2 joint process with its own heat-treatment schedule for application in our 1.5-T MgB2 “finger” MRI magnet. In fabricating the MgB2 magnet, the optimal heat-treatment schedule to attain a reproducible and high critical current differs between the joint and the coil. To solve this problem, we introduced an additional heating system, composed of a cartridge heater and a thermocouple connected to a copper block, into a box-type furnace. We then carried out heat-treatments while exclusively raising the joint-part temperature above the Mg melting point of 645 °C; the joint was actually heated up to 700 °C. We evaluated the critical current and crystal structure of the resulting MgB2 joint. From the experimental results, we found that the joint heated with its own heat-treatment schedule, 700 °C for 1 h + 600 °C for 11 h, showed a good Ic of over 450 A at 15 K under self-field. The joint resistance was estimated from coil operation over 18 days and is expected to be less than 10^-12 Ω.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158721</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overview of the Neutron Diagnostic Systems for the SPARC Tokamak</title>
<link>https://hdl.handle.net/1721.1/158720</link>
<description>Overview of the Neutron Diagnostic Systems for the SPARC Tokamak
Raj, P.; Ball, J.L.; Carmichael, J.; Frenje, Johan A.; Gocht, R.; Gorini, G.; Holmes, I.; Gatu Johnson, Maria; Kennedy, R.; Mackie, S.; Noncente, M.; Panontin, E.; Petruzzo, M.; Rebai, M.; Reinke, M.; Rice, John E.; Rigamonti, D.; Dalla Rosa, M.; Saltos, A.A.; Tardocchi, M.; Tinguely, R. Alex; Wang, X.
Neutron measurement is the primary tool in the SPARC tokamak for fusion power (Pfus) monitoring, research on the physics of burning plasmas, validation of the neutronics simulation workflows, and providing feedback for machine protection. A demanding target uncertainty (10% for Pfus) and coverage of a wide dynamic range (&gt;8 orders of magnitude going up to 5x10^19 n/s), coupled with a fast-track timeline for design and deployment, make the development of the SPARC neutron diagnostics challenging. Four subsystems are under design, which exploit the high flux of direct DT and DD plasma neutrons emanating from a shielded opening in a midplane diagnostic port. The systems comprise: a set of ~15 flux monitors, mainly ionization chambers and proportional counters, for measurement of the neutron yield rate; two independent foil activation systems for measurement of the neutron fluence; a spectrometric radial neutron camera for poloidal profiling of the plasma emissivity; and a high-resolution magnetic proton recoil spectrometer for measurement of the core neutron spectrum. Together, the four systems ensure redundancy of sensors and methods, and aim to provide high resolutions of time (10 ms), space (~7 cm), and energy (&lt;2% at 14 MeV). This paper presents the broader objectives behind the preliminary design of the SPARC neutron diagnostics, and discusses the ongoing studies on neutronics, detector comparisons, prototyping, and integration with the unique infrastructure of SPARC. Engineering details of the four subsystems and the concepts for in-situ neutron calibration are also highlighted.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158720</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantification and visualization of uncertainties in reconstructed penumbral images of implosions at Omega</title>
<link>https://hdl.handle.net/1721.1/158719</link>
<description>Quantification and visualization of uncertainties in reconstructed penumbral images of implosions at Omega
Kunimune, Justin H.; Heuer, P.V.; Reichelt, Benjamin L.; Johnson, Timothy M.; Frenje, Johan A.
Penumbral imaging is a technique used in plasma diagnostics in which a radiation source shines through one or more large apertures onto a detector. To interpret a penumbral image, one must reconstruct it to recover the original source. The inferred source always has some error due to noise in the image and uncertainty in the instrument geometry. Interpreting the inferred source thus requires quantification of that inference’s uncertainty. Markov chain Monte Carlo algorithms have been used to quantify uncertainty for similar problems but have never been used for the inference of the shape of an image. Because of this, there are no commonly accepted ways of visualizing uncertainty in two-dimensional data. This paper demonstrates the application of the Hamiltonian Monte Carlo algorithm to the reconstruction of penumbral images of fusion implosions and presents ways to visualize the uncertainty in the reconstructed source. This methodology enables more rigorous analysis of penumbral images than has been done in the past.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158719</guid>
<dc:date>2024-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>The next-generation Magnetic Recoil Spectrometer (MRSnext) on OMEGA and NIF for diagnosing ion temperature, yield, areal density, and alpha heating</title>
<link>https://hdl.handle.net/1721.1/158718</link>
<description>The next-generation Magnetic Recoil Spectrometer (MRSnext) on OMEGA and NIF for diagnosing ion temperature, yield, areal density, and alpha heating
Wink, Christopher W.; Gatu Johnson, Maria; Mackie, S.; Kunimune, Justin H.; Dannhoff, S.G.; Lawrence, Y.; Berg, G.P.A.; Casey, D.T.; Schlossberg, D.J.; Gopalaswamy, V.; Katz, J.; Regan, S.P.; Stoeckl, C.; Burgett, T.; Ivancic, S.; McClow, H.; Scott, M.; Frelier, J.; Frenje, Johan A.
The next-generation magnetic recoil spectrometer (MRSnext) is being designed to replace the current MRS at the National Ignition Facility and OMEGA for measurements of the neutron spectrum from an inertial confinement fusion implosion. The MRSnext will provide far superior performance and faster data turnaround than the current MRS systems, i.e., a 2× and 6× improvement in energy resolution at the NIF and OMEGA, respectively, and a 20× improvement in data turnaround time. The substantially improved performance of the MRSnext is enabled by using electromagnets that provide a short focal plane (12–16 cm) and unprecedented flexibility for a wide range of applications. In addition to being able to measure neutron yield, apparent ion temperature, areal density, and plasma-flow velocity over a wide range of yields, the NIF MRSnext will be able to directly and uniquely assess the alpha heating of the fuel ions through measurements of the alpha knock-on tail in the neutron spectrum. The goal is to implement a radiation-hard electronic detection system capable of providing rapid data acquisition and analysis. The development of the MRSnext will also set the foundation for the more advanced, time-resolving MRSt and serve as a testbed for its implementation on the NIF.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158718</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Aligning the Thomson scattering and charge exchange recombination diagnostics using neutral beam emission at DIII-D</title>
<link>https://hdl.handle.net/1721.1/158717</link>
<description>Aligning the Thomson scattering and charge exchange recombination diagnostics using neutral beam emission at DIII-D
Feyrer, Abigail; Haskey, S.R.; Chrystal, C.; Aidala, C.A.
This work addresses discrepancies in the alignment of the H-mode pedestal profiles of the electron and ion properties in the DIII-D tokamak as measured by Thomson Scattering (TS) and Charge Exchange Recombination Spectroscopy (CER) diagnostics. While the alignment of these profiles is key for accurate studies of tokamak physics and plasma confinement, misalignments can occur due to inaccuracies, such as in magnetic equilibrium reconstructions required to map measurements in different poloidal and toroidal locations. Both FIDASIM, an established simulation package, and a simplified collisional radiative model are used to simulate neutral beam state densities and neutral beam emission. Simulated neutral beam emissions are calculated based on shifted TS profiles and compared to beam emission measurements from the Main Ion CER system to determine the best shift for aligning TS with CER. This analysis is performed on various DIII-D discharges.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158717</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Identification of Multiple Input Single Output System Response for Efficient Pickup Noise Removal from Tokamak Diagnostics</title>
<link>https://hdl.handle.net/1721.1/158716</link>
<description>Robust Identification of Multiple Input Single Output System Response for Efficient Pickup Noise Removal from Tokamak Diagnostics
Odstrcil, T.; Laggner, F.; Rosenthal, Aaron M.; Bortolon, A.; Hughes, Jerry W.; Spendlove, J.C.; Wilks, Theresa M.
Electromagnetic pickup noise in the tokamak environment poses a pressing challenge for measuring weak diagnostic photocurrents in the nA range. The diagnostic signal can be contaminated by an unknown mixture of crosstalk signals from coils powered by currents in the kA range. To address this issue, an algorithm for robust identification of linear multi-input single-output (MISO) systems has been developed. The MISO model describes the dynamic relationship between measured signals from power sources and observed signals in the diagnostics and allows for precise subtraction of the noise component. The proposed method was tested on experimental diagnostic data from the DIII-D tokamak, and it has reduced noise by up to 20 dB in the 1–20 kHz range.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158716</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-yield magnetic recoil neutron spectrometer on the National Ignition Facility for operation up to 60 MJ</title>
<link>https://hdl.handle.net/1721.1/158715</link>
<description>High-yield magnetic recoil neutron spectrometer on the National Ignition Facility for operation up to 60 MJ
Gatu Johnson, Maria; Johnson, Timothy M.; Lahmann, Brandon; Séguin, Frederick H.; Sperry, B.; Bhandarkar, N.; Bionta, R.M.; Casco, E.; Casey, D.T.; Mackinnon, A.J.; Masters, N.; Moore, A.; Nikroo, A.; Hoppe, M.; Mohammed, R.; Sweet, W.; Freeman, C.; Picciotto, V.; Roumell, J.; Frenhe, Johan A.
Recent progress at the National Ignition Facility (NIF), with neutron yields of order 1 × 10^17, places new constraints on diagnostics used to characterize implosion performance. The Magnetic Recoil neutron Spectrometer (MRS), which is routinely used to measure yield, ion temperature (Tion), and down-scatter ratio (dsr), has been adapted to allow measurements of dsr up to 5 × 10^17, and yield and Tion up to 2 × 10^18 in the near term with new data processing techniques and conversion foil solutions. This paper presents a solution for extending MRS operation up to a yield of 2 × 10^19 (60 MJ) by moving the spectrometer outside of the NIF shield wall. This will not only enhance the upper yield limit by 10× but also improve signal-to-background by 5×.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158715</guid>
<dc:date>2022-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Absolute calibration of the Lyman-α measurement apparatus at DIII-D</title>
<link>https://hdl.handle.net/1721.1/158714</link>
<description>Absolute calibration of the Lyman-α measurement apparatus at DIII-D
Laggner, F.M.; Bortolon, A.; Rosenthal, Aaron M.; Wilks, Theresa M.; Hughes, Jerry W.; Freeman, C.; Golfinopoulos, T.; Nagy, A.; Mauzey, D.; Shafer, M.W.; DIII-D Team
The LLAMA (Lyman-Alpha Measurement Apparatus) diagnostic was recently installed on the DIII-D tokamak [Rosenthal et al., Rev. Sci. Instrum. (submitted) (2020)]. LLAMA is a pinhole camera system with a narrow-band Bragg mirror, a bandpass interference filter, and an absolute extreme ultraviolet photodiode detector array, which measures the Ly-α brightness in the toroidal direction on the inboard, high-field side (HFS) and outboard, low-field side (LFS). This contribution presents a setup and procedure for an absolute calibration near the Ly-α line at 121.6 nm. The LLAMA in-vacuum components are designed as a compact, transferable setup that can be mounted in an ex situ vacuum enclosure equipped with an absolutely calibrated Ly-α source. The spectral purity and stability of the Ly-α source are characterized using a vacuum ultraviolet spectrometer, while the Ly-α source brightness is measured by a NIST-calibrated photodiode. The non-uniform nature of the Ly-α source emission was overcome by a calibration procedure that scans the Ly-α source position and employs numerical optimization to determine the emission pattern. Nominal and measured calibration factors are determined and compared, showing agreement within their uncertainties. A first conversion of the measured signal obtained from DIII-D indicates that the Ly-α brightness on the HFS and LFS is on the order of 10^20 Ph sr^{-1} m^{-2} s^{-1}. The established calibration setup and procedure will be used regularly to re-calibrate the LLAMA during DIII-D vents to monitor possible degradation of optical components and detectors.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158714</guid>
<dc:date>2021-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnostic development for parallel wave-number measurement of lower hybrid waves in EAST</title>
<link>https://hdl.handle.net/1721.1/158713</link>
<description>Diagnostic development for parallel wave-number measurement of lower hybrid waves in EAST
Wang, Y.F.; Ding, B.J.; Li, M.H.; Baek, Seung Gyou; Wallace, Greg M.; Liu, L.; Zhao, L.M.; Wang, M.; Wu, Z.G.; Liu, F.K.; Shan, J.F.; Zhang, X.J.; Li, Y.C.; Wu, C.B.
An eight-channel magnetic probe diagnostic system has been designed and installed adjacent to the 4.6 GHz lower hybrid (LH) grill antenna on the low-field side of the EAST tokamak in order to study the n|| evolution of lower hybrid waves in the first pass from the launcher to the core plasma. The magnetic probes are separated by 6.6 mm, which allows measurement of the dominant parallel refractive index up to n|| = 5 for 4.6 GHz LH waves. The probes are made sensitive to the magnetic field component perpendicular to the background magnetic field by a slit on the casing that encloses each probe. The intermediate frequency (IF) stage, which consists of two mixing stages, down-converts the measured wave signals from 4.6 GHz to 20 MHz. A bench test demonstrates the phase stability of the magnetic probe diagnostic system. By evaluating the phase variation of the measured signals along the background magnetic field, the dominant n|| of the LH wave in the scrape-off layer (SOL) was deduced during the 2019 experimental campaign. In low-density plasma, the measured dominant n|| of the lower hybrid waves is about 2.1, corresponding to the main peak at 2.04 of the launched n|| spectrum. The n|| deduced by the least-squares linear fit method remains near this value in low-density plasma, with a high spatial correlation magnitude of 0.9. With the eight-channel probe system, a wave-number spectrum has also been deduced, which peaks near the measured dominant n||.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158713</guid>
<dc:date>2020-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Influence of Proton Irradiation on Corrosion in Liquid Lead</title>
<link>https://hdl.handle.net/1721.1/158712</link>
<description>Influence of Proton Irradiation on Corrosion in Liquid Lead
Zhou, Weiyue; Cairang, Wande; Amadeo, Paola; Woller, Kevin B.; Short, Michael P.
The next-generation Gen IV nuclear reactors are designed to operate under increasingly challenging environments, aiming for higher thermal efficiency while adhering to strict physical and safety constraints. These harsh conditions, characterized by elevated temperatures and accelerated corrosion rates, coupled with the presence of high radiation damage rates, necessitate a thorough understanding of the complex interaction between radiation and corrosion. However, experiments that incorporate radiation into the corrosion evaluation of structural materials, particularly in liquid metal environments, are scarce and challenging to conduct. To address this research gap, we have developed a unique experimental apparatus that enables simultaneous irradiation and corrosion testing using proton beams as the radiation source. In this setup, a foil sample is exposed to liquid lead on one side, while protons are directed from the opposite side, resulting in a central region within the foil that experiences both irradiation and liquid lead corrosion. By comparing the behavior of this central region with the surrounding areas, we can observe the specific effects introduced by the additional proton beam on the corrosion process. This facility provides valuable insights into the rates and mechanisms of radiation-altered corrosion in lead and lead-bismuth eutectic (LBE) environments, ultimately contributing to improved material selection, design optimization, and enhanced corrosion resistance in next-generation reactor systems.
Submitted for publication in Transactions of the American Nuclear Society
</description>
<pubDate>Wed, 01 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158712</guid>
<dc:date>2023-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Persistent-mode operation and magnetization behavior of a solid-nitrogen-cooled MgB2 small-scale test coil towards a tabletop 1.5-T osteoporosis MRI</title>
<link>https://hdl.handle.net/1721.1/158711</link>
<description>Persistent-mode operation and magnetization behavior of a solid-nitrogen-cooled MgB2 small-scale test coil towards a tabletop 1.5-T osteoporosis MRI
Choi, Yoonhyuck; Park, Dongkeun; Li, Yi; Tanaka, Hiromi; Lee, Wooseung; Bascuñan, Juan; Iwasa, Yukikazu
We present results—cooldown, energization, and persistent-mode operation—of a solid nitrogen (SN2)-cooled, magnesium diboride (MgB2) small-scale test coil. The test coil, immersed in a volume of solid nitrogen at 6 K, successfully operated in persistent mode at 108 A for a period of 5 days. Although designated a “persistent-mode” coil, its center field was measured to decay at a rate of &lt; 0.6 ppm·h^-1, which is still low enough to meet the temporal stability requirement for most magnetic resonance imaging magnets. This decay rate translates to a calculated circuit resistance of &lt; 1.79 × 10^-12 Ω, which arises mainly from one MgB2-MgB2 joint in the circuit. However, when the coil temperature increased from 6 to 16 K, the field dropped by 0.33%: we believe this was caused by a change of magnetization in the MgB2 superconductor, which in turn decreased the screening-current field (SCF) at the magnet center. We performed a finite element analysis with a simplified numerical model based on the H formulation to verify whether magnetization-induced SCF is responsible for this 0.33% drop. Indeed, the model shows that the change of magnetization, i.e., screening-current reduction and current-density redistribution, occurs during the temperature-cycle-induced Jc(T) variation and thus affects the center magnetic field. However, the Jc(T) variation in the 2nd cycle had little effect on the MgB2 magnetization and thus caused a negligible magnetic field change.
Submitted for publication in Superconducting Science and Technology
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158711</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prototype REBCO Z1 and Z2 shim coils for ultra high-field high-temperature superconducting NMR magnets</title>
<link>https://hdl.handle.net/1721.1/158710</link>
<description>Prototype REBCO Z1 and Z2 shim coils for ultra high-field high-temperature superconducting NMR magnets
Park, Dongkeun; Lee, Jiho; Bascuñán, Juan; Li, Zhuyong; Iwasa, Yukikazu
We present promising results of novel high-temperature superconducting (HTS) shim coil prototypes that circumvent the size and strength limitations of our earlier HTS shim concept based on 46-mm wide REBCO tape. The HTS shim coil is placed inside the HTS magnet, mainly for ultrahigh-field (&gt; 1 GHz or 23.5 T) NMR magnets, and is thus unaffected by the windings’ diamagnetic wall effects. One full-scale version will be applied to clean up Z1 and Z2 harmonic errors in the MIT 1.3-GHz high-resolution NMR magnet composed of an 835-MHz HTS insert, while another version will serve an MIT 1-GHz microcoil NMR magnet, a small-scale model of which we are currently building. The prototype sets were wound with a 2-pile, 1.03-mm wide, 0.30-mm thick REBCO conductor. Operated at 77 K, the Z1 shim set generated a 1st harmonic field strength of 179 kHz/cm at 70 A, while the Z2 shim set, composed of two pairs, Z21 and Z22, generated a 2nd harmonic field of 141 kHz/cm^2 at 50 A. Together with a discussion of the technical challenges for this REBCO shim coil concept, we demonstrate its feasibility for the next generation of ultra-high-field (UHF) HTS NMR magnets.
Submitted for publication in Scientific Reports
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158710</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D reconstruction of an inertial-confinement fusion implosion with neural networks using multiple heterogeneous data sources</title>
<link>https://hdl.handle.net/1721.1/158709</link>
<description>3D reconstruction of an inertial-confinement fusion implosion with neural networks using multiple heterogeneous data sources
Kunimune, Justin H.; Casey, D.T.; Kustowski, B.; Geppert-Kleinrath, V.; Divol, L.; Fittinghoff, D.N.; Volegov, P.L.; Kruse, M.K.G.; Gaffney, J.A.; Nora, R.C.; Frenje, Johan A.
3D asymmetries are major degradation mechanisms in inertial-confinement fusion implosions at the National Ignition Facility (NIF). These asymmetries can be diagnosed and reconstructed with the neutron imaging system (NIS) on three lines of sight around the NIF target chamber. Conventional tomographic reconstructions are used to reconstruct the 3D morphology of the implosion using NIS [Volegov et al., J. Appl. Phys. 127, 083301 (2020)], but the problem is ill-posed with only three imaging lines of sight. Asymmetries can also be diagnosed with the real-time neutron activation diagnostics (RTNAD) and the neutron time-of-flight (nToF) suite. Since the NIS, RTNAD, and nToF each sample a different part of the implosion using different physical principles, we propose that it is possible to overcome the limitations of too few imaging lines of sight by performing 3D reconstructions that combine information from all three heterogeneous data sources. This work presents a new machine learning-based reconstruction technique to do just this. By using a simple physics model and group of neural networks to map 3D morphologies to data, this technique can easily account for data of multiple different types. A simple proof-of-principle is presented, demonstrating that this technique can accurately reconstruct a hot-spot shape using synthetic primary neutron images and a hot-spot velocity vector. In particular, the hot-spot’s asymmetry, quantified as spherical harmonic coefficients, is reconstructed to within ±4% of the radius in 90% of test cases. In the future, this technique will be applied to actual NIS, RTNAD, and nToF data to better understand 3D asymmetries at the NIF.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158709</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Particle transport constraints via Bayesian spectral fitting of multiple atomic lines</title>
<link>https://hdl.handle.net/1721.1/158708</link>
<description>Particle transport constraints via Bayesian spectral fitting of multiple atomic lines
Sciortino, Francesco; Cao, N.M.; Howard, Nathan T.; Marmar, E.S.; Rice, John E.
Optimized operation of fusion devices demands detailed understanding of plasma transport, a problem that must be addressed with advances in both measurement and data analysis techniques. In this work, we adopt Bayesian inference methods to determine experimental particle transport, leveraging opportunities from high-resolution He-like ion spectra in a tokamak plasma. The Bayesian spectral fitting code is used to analyze resonance (w), forbidden (z), intercombination (x, y), and satellite (k, j) lines of He-like Ca following laser blow-off injections on Alcator C-Mod. This offers powerful transport constraints since these lines depend differently on electron temperature and density, but also differ in their relation to Li-like, He-like, and H-like ion densities, often the dominant Ca charge states over most of the C-Mod plasma radius. Using synthetic diagnostics based on the AURORA package, we demonstrate improved effectiveness of impurity transport inferences when spectroscopic data from a progressively larger number of lines are included.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Thu, 01 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158708</guid>
<dc:date>2021-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reconstructing 3-D Asymmetries in Laser-Direct-Drive Implosions on OMEGA</title>
<link>https://hdl.handle.net/1721.1/158707</link>
<description>Reconstructing 3-D Asymmetries in Laser-Direct-Drive Implosions on OMEGA
Mannion, O.M.; Woo, K.M.; Crilly, A.J.; Forrest, C.J.; Frenje, Johan A.; Glebov, V.Yu.; Gatu Johnson, Maria; Knauer, J. P.; Mohamed, Z.L.; Romanofsky, M.H.; Stoeckl, C.; Theobald, W.; Regan, S.P.
Three-dimensional reconstruction algorithms have been developed which determine the hot-spot velocity, hot-spot apparent ion temperature distribution, and fuel areal-density distribution present in laser-direct-drive inertial confinement fusion implosions on the OMEGA laser. These reconstructions rely on multiple independent measurements of the neutron energy spectrum emitted from the fusing plasma. Measurements of the neutron energy spectrum on OMEGA are made using a suite of quasi-orthogonal neutron time-of-flight (nTOF) detectors and a magnetic recoil spectrometer. These spectrometers are positioned strategically around the OMEGA target chamber to provide unique 3-D measurements of the conditions of the fusing hot spot and compressed fuel near peak compression. The uncertainties involved in these 3-D reconstructions are discussed and are used to identify a new nTOF diagnostic line of sight which, when built, will reduce the uncertainty in the hot-spot apparent ion temperature distribution from 700 to &lt;400 eV.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158707</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Saturn-ring proton backlighters for the NIF</title>
<link>https://hdl.handle.net/1721.1/158706</link>
<description>Saturn-ring proton backlighters for the NIF
Zylstra, A.B.; Craxton, R.S.; Rygg, J.R.; Li, Chi-Kang; Carlson, L.; Manuel, M. J.-E.; Youngblood, K.; Garcia, E.M.; Browning, L.T.; Le Pape, S.; Candeias Lemos, N.; Lahmann, Brandon; Gatu Johnson, Maria
Proton radiography is a well-established technique for measuring electromagnetic fields in high-energy-density plasmas. Fusion reactions producing monoenergetic particles, such as D3He, are commonly used as a source, produced by a capsule implosion. Using smaller capsules for radiography applications is advantageous as the source size decreases, but on the NIF this is complicated by the risk introduced from increasing blow-by light, since the phase plate focal spot size is much larger than the capsules. We report a demonstration of backlighter targets where a ‘Saturn’ ring is placed around the capsule to block this light. The nuclear performance of the backlighters is unperturbed by the addition of a ring. We also test a ring with an equatorial cutout, which severely affects the proton emission and is not viable for radiography applications. These results demonstrate the general viability of Saturn-ring backlighter targets for use on the NIF.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Wed, 01 Apr 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158706</guid>
<dc:date>2020-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disruption prediction on EAST tokamak using a deep learning algorithm</title>
<link>https://hdl.handle.net/1721.1/158705</link>
<description>Disruption prediction on EAST tokamak using a deep learning algorithm
Guo, B.H.; Shen, B.; Chen, D.L.; Rea, Cristina; Granetz, R.S.; Huang, Y.; Zeng, L.; Zhang, H.; Qian, J.P.; Sun, Y.W.; Xiao, B.J.
In this study, a long short-term memory (LSTM) model is trained on a large disruption warning database to predict disruptions on the EAST tokamak. To compare the performance of the proposed model with the previously reported fully convolutional neural network (CNN) (Guo et al 2020 Plasma Phys. Control. Fusion 63 025008), the same data set and diagnostic signals are used. Based on the test set, the area under the receiver operating characteristic curve (AUC) of the LSTM model is 0.87, the true positive rate (TPR) is ~87.5%, and the false positive rate (FPR) is ~15.1%. Since the LSTM model is more sensitive to radiation fluctuations than the CNN, its prediction performance is inferior to that of the CNN model (for the CNN, AUC ~0.92, TPR ~87.5%, FPR ~6.1%). However, the advance warning time of the LSTM model is 14 ms earlier than that of the CNN. To reduce the FPR and improve the performance of the model, more fast bolometer channels are added as input signals to the LSTM model, including the radiation from the upper and lower edges and the plasma core. Consequently, for the same test set, the AUC value increases to 0.89 and the FPR decreases to ~9.4%, but the TPR also decreases to ~83.9%. In addition, the sensitivity of the model to radiation fluctuations caused by impurity behavior decreases significantly, and the warning time becomes 8.7 ms earlier compared to that of the original model. Overall, these results show that deep learning algorithms exhibit immense application potential for disruption prediction in long-pulse fusion devices.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158705</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Results from the Alfven Eigenmode Active Diagnostic during the 2019-2020 JET deuterium campaign</title>
<link>https://hdl.handle.net/1721.1/158704</link>
<description>Results from the Alfven Eigenmode Active Diagnostic during the 2019-2020 JET deuterium campaign
Tinguely, R. Alex; Puglia, P.G.; Fil, N.; Dowson, S.; Porkolab, Miklos; Fasoli, A.; Testa, D.; JET Contributors
This paper presents results of extensive analysis of mode excitation observed during the operation of the Alfven Eigenmode Active Diagnostic (AEAD) in the JET tokamak during the 2019-2020 deuterium campaign. Six of eight toroidally spaced antennas, each with independent power and phasing, successfully excited stable MHD modes in 479 plasmas. In total, 4768 magnetic resonances were detected with up to fourteen fast magnetic probes. In this work, we present calculations of resonant frequencies f0, damping rates \gamma &lt; 0, and toroidal mode numbers n, spanning the parameter range f0 = 30 - 250 kHz, -\gamma = 0 - 13 kHz, and |n| &lt; 30. In general, good agreement is seen between the resonant frequencies and the calculated toroidal Alfven Eigenmode frequencies, and between the toroidal mode numbers applied by the AEAD and those estimated for the excited resonances. We note several trends in the database: the probability of resonance detection decreases with plasma current and external heating power, and the normalized damping rate increases with edge safety factor but decreases with external heating. These results provide key information to prepare future experimental campaigns and to better understand the physics of excitation and damping of Alfven Eigenmodes in the presence of alpha particles during the upcoming DT campaign, thereby allowing confident extrapolation to future tokamaks.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Wed, 01 Jul 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158704</guid>
<dc:date>2020-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>First large capsule implosions in a frustum-shaped hohlraum</title>
<link>https://hdl.handle.net/1721.1/158703</link>
<description>First large capsule implosions in a frustum-shaped hohlraum
Baker, K.L.; Amendt, P.A.; Ross, J.S.; Smalyuk, V.A.; Landen, O.L.; Ho, D.D.; Khan, S.; Haan, S.W.; Lindl, J.D.; Mariscal, D.; Milovich, J.L.; MacLaren, S.; Ping, Y.; Strozzi, D.J.; Bionta, R.M.; Casey, D.T.; Celliers, P.M.; Fittinghoff, D.N.; Geppert-Kleinrath, H.; Geppert-Kleinrath, V.; Hahn, K.D.; Gatu Johnson, Maria; Kim, Y.; Meaney, K.; Millot, M.; Nora, R.; Volegov, P.L.; Wilde, C.H.
We report on the first indirect-drive implosions driven by a dual conical frustum-shaped hohlraum, denoted “frustraum,” and the experimental tuning campaigns leading up to two layered implosions. The campaign utilized 1.2 mm and 1.4 mm inner-radius HDC capsules, the largest HDC capsules imploded on the National Ignition Facility via indirect drive. Several techniques were successfully implemented to control the mode-2 symmetry of the implosions, including changing the wall angle of the frustraum, which is not possible with cylindrical hohlraums. A mode-4 feature was observed, and its implications for hot-spot mix are discussed. Two layered implosions were conducted with 1.2 mm inner-radius capsules, the latter of which achieved the highest layered capsule absorbed energy on the NIF using only 1.74 MJ of laser energy. The layered implosion results suggest that increasing capsule absorbed energy by itself is insufficient, and that further reducing coast time (the time between the end of the laser pulse and bang time) to the 1 ns level is warranted to improve areal density, hot-spot temperature, alpha heating, and yield amplification.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158703</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>First Implementation of Gyrokinetic Exact Linearized Landau Collision Operator and Comparison with Models</title>
<link>https://hdl.handle.net/1721.1/158702</link>
<description>First Implementation of Gyrokinetic Exact Linearized Landau Collision Operator and Comparison with Models
Pan, Qingjiang; Ernst, Darin R.; Crandall, Paul
Gyrokinetic simulations are fundamental to understanding and predicting turbulent transport in magnetically confined fusion plasmas. Previous simulations have used model collision operators with approximate field-particle terms of unknown accuracy and/or have neglected collisional finite Larmor radius (FLR) effects. We have implemented the linearized Fokker–Planck collision operator with exact field-particle terms and full FLR effects in a gyrokinetic code (GENE). The new operator, referred to as “exact” in this paper, allows the accuracy of model collision operators to be assessed. The conservative Landau form is implemented because its symmetry underlies the conservation laws and the H-theorem, and enables numerical methods to preserve this conservation, independent of resolution. The implementation utilizes the finite-volume method recently employed to discretize the Sugama collision model in GENE, allowing direct comparison between the two operators. Results show the Sugama model appears accurate for the growth rates of trapped electron modes (TEMs) driven only by density gradients, but appreciably underestimates the growth rates as the collisionality and electron temperature gradient increase. The TEM turbulent fluxes near the nonlinear threshold using the exact operator are similar to the Sugama model for the eta_e=0 case, but substantially larger than the Sugama model for the eta_e=1 case. The FLR effects reduce the growth rates increasingly with wavenumber, deepening a “valley” at the intermediate binormal wavenumber as the unstable mode extends from the TEM regime to the electron temperature gradient (ETG) instability regime. Application to the Hinton–Rosenbluth problem shows zonal flows decay faster as the radial wavenumber increases and the exact operator yields weaker decay rates.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158702</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of stalk on directly-driven inertial confinement fusion implosions</title>
<link>https://hdl.handle.net/1721.1/158701</link>
<description>Impact of stalk on directly-driven inertial confinement fusion implosions
Gatu Johnson, Maria; Adrian, Patrick J.; Anderson, K.S.; Appelbe, B.D.; Chittenden, J.P.; Crilly, A.J.; Edgell, D.; Forrest, C.J.; Frenje, Johan A.; Glebov, V.Yu.; Haines, B.M.; Igumenshchev, I.; Jacobs-Perkins, D.; Janezic, R.; Kabadi, Neel V.; Knauer, J.P.; Lahmann, Brandon; Mannion, O.M.; Marshall, F.J.; Michel, T.; Séguin, Frederick H.; Shah, R.; Stoeckl, C.; Walsh, C.A.; Petrasso, Richard D.
Low-mode asymmetries have emerged as one of the primary challenges to achieving high-performing inertial confinement fusion (ICF) implosions. In direct-drive ICF, an important potential seed of such asymmetries is the capsule stalk mount, the impact of which has remained a contentious question. In this paper, we describe results from an experiment on the OMEGA laser with intentional offsets at varying angles to the capsule stalk mount, which clearly demonstrate the impact of the stalk mount on implosion dynamics. The angle between stalk and offset is found to significantly impact observables. Specifically, a larger directional flow is observed in neutron spectrum measurements when the offset is towards rather than away from the stalk, while an offset at 42° to the stalk gives minimal directional flow but still generates a large flow field in the implosion. No significant directional flow is seen with the stalk alone. Time-integrated x-ray images support these flow observations. A trend is also seen in implosion yield, with lower yield obtained for offsets at smaller angles to the stalk than at larger angles. Radiation hydrodynamics simulations using 2D DRACO and 2D/3D Chimera, not including the stalk mount, and 2D xRAGE, including the stalk mount, are brought to bear on the data. The yield trend, the minimal directional flow with the stalk only, and the larger flow enhancement observed with the offset towards the stalk are all reproduced in the xRAGE simulations. The results strongly indicate that the stalk impact must be considered and mitigated to achieve high-performing implosions.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158701</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>2H(p,γ)3He cross section measurement using high-energy-density plasmas</title>
<link>https://hdl.handle.net/1721.1/158700</link>
<description>2H(p,γ)3He cross section measurement using high-energy-density plasmas
Zylstra, A.B.; Herrmann, H.W.; Kim, Y.H.; McEvoy, A.; Frenje, Johan A.; Gatu Johnson, Maria; Petrasso, Richard D.; Glebov, V.Yu.; Forrest, C.; Delettrez, J.; Gales, S.; Rubery, M.
An absolute cross section for the radiative capture reaction 2H(p,γ)3He has been measured at the OMEGA laser facility using inertially confined plasmas. These high-temperature plasmas are created by imploding a fuel-containing capsule using laser ablation, and are advantageous in that they better mimic astrophysical systems. We measure an S factor for this reaction of 0.429 ± 0.026 (stat) ± 0.072 (sys) eV b at Ec.m. = 16.35 ± 0.40 keV, which is higher than the adopted evaluations. This reaction is important as a source of nuclear energy in protostars and brown dwarfs. It is also a critical reaction during big-bang nucleosynthesis, and an accurate cross section can be used as a constraint on cosmology.
Submitted for publication in Physical Review. C, Nuclear physics
</description>
<pubDate>Wed, 01 Apr 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158700</guid>
<dc:date>2020-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of Strongly Magnetized Electrons and Ions on Heat Flow and Symmetry of Inertial Fusion Implosions</title>
<link>https://hdl.handle.net/1721.1/158699</link>
<description>Effect of Strongly Magnetized Electrons and Ions on Heat Flow and Symmetry of Inertial Fusion Implosions
Bose, A.; Peebles, J.; Walsh, C.A.; Frenje, Johan A.; Kabadi, Neel V.; Adrian, Patrick J.; Sutcliffe, G.D.; Gatu Johnson, Maria; Frank, C.A.; Davies, J.R.; Betti, R.; Glebov, V.Yu.; Marshall, F.J.; Regan, S.P.; Stoeckl, C.; Campbell, E.M.; Sio, H.; Moody, J.; Crilly, A.; Appelbe, B.D.; Chittenden, J.P.; Atzeni, S.; Barbato, F.; Forte, A.; Li, Chi-Kang; Séguin, Frederick H.; Petrasso, Richard D.
This Letter presents the first observation on how a strong, 500 kG, externally applied B field increases the mode-two asymmetry in shock-heated inertial fusion implosions. Using a direct-drive implosion with polar illumination and imposed field, we observed that magnetization produces a significant increase in the implosion oblateness (a 2.5× larger P2 amplitude in x-ray self-emission images) compared with reference experiments with identical drive but with no field applied. The implosions produce strongly magnetized electrons (ωeτe ≫ 1) and ions (ωiτi &gt; 1) that, as shown using simulations, restrict the cross field heat flow necessary for lateral distribution of the laser and shock heating from the implosion pole to the waist, causing the enhanced mode-two shape.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158699</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of Hydrodynamic Flows in Imploding Fusion Plasmas on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158698</link>
<description>Observation of Hydrodynamic Flows in Imploding Fusion Plasmas on the National Ignition Facility
Schlossberg, D.J.; Grim, G.P.; Casey, D.T.; Moore, A.S.; Nora, R.; Bachmann, B.; Benedetti, L.R.; Bionta, R.M.; Eckart, M.J.; Field, J.E.; Fittinghoff, D.N.; Gatu Johnson, Maria; Geppert-Kleinrath, V.; Hartouni, E.P.; Hatarik, R.; Hsing, W.W.; Jarrott, L.C.; Khan, S.F.; Kilkenny, J.D.; Landen, O.L.; MacGowan, B.J.; Mackinnon, A.J.; Meaney, K.D.; Munro, D.H.; Nagel, S.R.; Pak, A.; Patel, P.K.; Spears, B.K.; Volegov, P.L.; Young, C.V.
Inertial confinement fusion implosions designed to have minimal fluid motion at peak compression often show significant linear flows in the laboratory, attributable per simulations to percent-level imbalances in the laser drive illumination symmetry. We present experimental results which intentionally varied the Mode 1 drive imbalance by up to 4% to test hydrodynamic predictions of flows and the resultant imploded core asymmetries and performance, as measured by a combination of DT neutron spectroscopy and high-resolution x-ray core imaging. Neutron yields decrease by up to 50% and anisotropic neutron Doppler broadening increases by 20%, in agreement with simulations. Furthermore, a tracer jet from the capsule fill tube perturbation that is entrained by the hot spot flow confirms the average flow speeds deduced from neutron spectroscopy.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158698</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Screening-Current-Induced Strain Gradient on REBCO Conductor: An Experimental and Analytical Study With Small Coils Wound With Monofilament and Striated Multifilament REBCO Tapes</title>
<link>https://hdl.handle.net/1721.1/158697</link>
<description>Screening-Current-Induced Strain Gradient on REBCO Conductor: An Experimental and Analytical Study With Small Coils Wound With Monofilament and Striated Multifilament REBCO Tapes
Li, Yi; Park, Dongkeun; Lee, Wooseung; Choi, Yoonhyuck; Tanaka, Hiromi; Bascuñán, Juan; Iwasa, Yukikazu
Screening currents in REBCO conductors, induced by time-varying magnetic fields, not only affect the field quality of HTS coils but also cause strain gradients along the REBCO tape width that may overstress REBCO conductors used in NMR and other high-field magnets. In this paper, we present results of an experimental and analytical study on screening-current-induced strain gradients, performed with small REBCO pancake coils. Because we believe that the screening current effect is reduced in multifilament conductor, we have studied two φ150-mm test coils, one wound with monofilament and the other with 3-striate/4-filament REBCO tapes. A 5-T/300-mm room-temperature bore magnet was used not only to excite a strong screening current but also to apply a nonuniform Lorentz force to each coil at 4.2 K. Our experiment and analysis have quantitatively demonstrated that we can effectively suppress the screening-current effect on strain gradient, not surprisingly, by using striated multifilament REBCO conductor.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158697</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quench Analysis of an LTS Quadrupole Triplet Magnet System for the IBS RAON In-Flight Fragment Separator</title>
<link>https://hdl.handle.net/1721.1/158696</link>
<description>Quench Analysis of an LTS Quadrupole Triplet Magnet System for the IBS RAON In-Flight Fragment Separator
Lee, Wooseung; Park, Dongkeun; Iwasa, Yukikazu; Kim, Junseong; Lee, Jiho; Kim, Do Gyun
In this paper we present quench analysis results of a Low-Temperature Superconducting (LTS) quadrupole triplet magnet system, a part of the In-flight Fragment (IF) separator of a heavy ion linear accelerator complex, named RAON, currently being constructed by the Institute for Basic Science (IBS). This magnet system is composed of three quadrupole magnets: a triplet, surrounded by iron yokes and embedding hexapole/octupole LTS coils for field correction. The magnet will be operated at 4.2 K in liquid helium. For reliable and safe operation of this complex superconducting system, quench and protection analysis with possible failure scenarios must be performed. In this paper, we first discuss probable quench scenarios and then present results of the quench propagation analysis on: 1) coil currents and voltages by multi-coil model circuit analysis; and 2) simulated temperature distribution inside each coil. Our quench analysis results show that the maximum voltage and temperature in each coil are below safety limits, 2000 V and 150 K, respectively, and confirm that this quadrupole triplet magnet system is self-protecting.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158696</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of ablators for the polar direct drive exploding pusher platform</title>
<link>https://hdl.handle.net/1721.1/158695</link>
<description>Comparison of ablators for the polar direct drive exploding pusher platform
Whitley, Heather D.; Kemp, G. Elijah; Yeamans, Charles B.; Walters, Zachary B.; Blue, Brent E.; Garbett, Warren J.; Schneider, Marilyn B.; Craxton, R. Stephen; Garcia, Emma M.; McKenty, Patrick W.; Gatu Johnson, Maria; Caspersen, Kyle; Castor, John I.; Däne, Markus; Ellison, C. Leland; Gaffney, Jim A.; Graziani, Frank R.; Klepeis, John E.; Kostinski, Natalie B.; Kritcher, Andrea L.; Lahmann, Brandon; Lazicki, Amy E.; Le, Hai P.; London, Richard A.; Maddox, Brian; Marshall, Michelle C.; Martin, Madison E.; Militzer, Burkhard; Nikroo, Abbas; Nilsen, Joseph; Ogitsu, Tadashi; Pask, John E.; Pino, Jesse E.; Rubery, Michael S.; Shepherd, Ronnie; Sterne, Philip A.; Swift, Damian C.; Yang, Lin; Zhang, Shuai
We examine the performance of pure boron, boron carbide, high density carbon, and boron nitride ablators in the polar direct drive exploding pusher (PDXP) platform. The platform uses the polar direct drive configuration at the National Ignition Facility to drive high ion temperatures in a room temperature capsule and has potential applications for plasma physics studies and as a neutron source. The higher tensile strength of these materials compared to plastic enables a thinner ablator to support higher gas pressures, which could help optimize its performance for plasma physics experiments, while ablators containing boron enable the possibility of collecting additional data to constrain models of the platform. Applying recently developed and experimentally validated equation of state models for the boron materials, we examine the performance of these materials as ablators in 2D simulations, with particular focus on changes to the ablator and gas areal density, as well as the predicted symmetry of the inherently 2D implosion.
Submitted for publication in High Energy Density Physics
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158695</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>The impact of disruptions on the economics of a tokamak power plant</title>
<link>https://hdl.handle.net/1721.1/158694</link>
<description>The impact of disruptions on the economics of a tokamak power plant
Maris, Andrew D.; Wang, Allen; Rea, Cristina; Granetz, Robert; Marmar, Earl
Tokamaks are often considered a leading candidate for near-term, cost-effective fusion energy, but are susceptible to sudden loss of confinement events called "disruptions." The threat of disruptions has garnered serious attention in research and development for the next generation of burning plasma experiments, such as ITER, but has received no thorough treatment in studies of magnetic fusion energy economics. In this paper, we provide a set of possible post-disruption recovery times based on technological and organizational limitations, a list of various ways disruptions can add to the expense of a tokamak power plant (TPP), and a model for the cost of fusion electricity as a function of disruption-related parameters. We show how these tools can be used to more accurately compute the levelized cost of electricity (LCOE) of a TPP and quantify upper limits on disruption rate for TPPs such as DEMO-like and ARC-like concepts. We utilize these findings to highlight where future research can have a strong impact in neutralizing the "showstopping" potential of the disruption problem.
Submitted for publication in Fusion Science and Technology
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158694</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiphysics simulations of a steady-state lower hybrid current drive antenna for the FNSF</title>
<link>https://hdl.handle.net/1721.1/158693</link>
<description>Multiphysics simulations of a steady-state lower hybrid current drive antenna for the FNSF
Wallace, Greg M.; Bohm, T.; Kessel, C.E.
The Fusion Nuclear Science Facility (FNSF) is a proposed tokamak reactor with the mission to investigate operation of a fusion reactor in a nuclear environment. The high neutron fluence component of the FNSF mission requires steady-state operation for extremely long pulses (t_{pulse} ∼ months) at full power. Plasma sustainment and current drive will be critical components of a successful FNSF. COMSOL Multiphysics software is used for combined radiofrequency (RF) and thermal simulations of the lower hybrid current drive (LHCD) antenna system. These simulations consider the resistive RF losses in the antenna including realistic surface roughness and a range of potential materials. The thermal analysis adds volumetric nuclear heating, plasma heat flux on leading edges, and electromagnetic radiation from the plasma to the RF heating calculated by COMSOL. Additional neutronics calculations have been performed to determine the impact of these antenna designs on activated waste disposal for the materials considered. The simulations show that it is technically feasible to implement a fully-active multi-junction (FAM) rather than a passive-active multi-junction (PAM) style of antenna if the septum between adjacent waveguides is sufficiently wide and the thermal conductivity of the structural material is sufficiently high. The FAM has the benefit of higher achievable power density with respect to the PAM, which results in a more compact antenna with potentially lower impact on neutron shielding and tritium breeding. These considerations point to tungsten rather than steel as the preferred structural material in constructing the antenna.
Submitted for publication in Fusion Science and Technology
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158693</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimization of tritium breeding ratio in ARC reactor</title>
<link>https://hdl.handle.net/1721.1/158692</link>
<description>Optimization of tritium breeding ratio in ARC reactor
Segantin, Stefano; Testoni, Raffaella; Hartwig, Zachary S.; Whyte, Dennis; Zucchetti, Massimo
The Affordable, Robust, Compact (ARC) reactor is a conceptual design for a tokamak conceived by Massachusetts Institute of Technology (MIT) researchers. The design of this tokamak is under development and update. One of the key parameters for fusion power plants is the tritium breeding ratio (TBR), which has to guarantee tritium self-sufficiency. The tritium inventory circulating in a fusion power plant must be minimized. At the same time, to enhance the plant's economics, the amount of tritium generated and stored should be maximized, since it would be used to start up new reactors. Both of the aforementioned trends favor a TBR as high as possible. In this work, the ARC tritium breeding ratio is studied and optimized. Taking advantage of Monte Carlo neutron transport codes, several configurations of ARC's blanket and vacuum vessel have been analyzed in order to find the most effective one for a high TBR. The study takes into account different materials for the structure, such as Inconel 718, V-15Cr-5Ti and Eurofer97. Moreover, it scans different widths of the coolant channels and evaluates the effect of lithium-6 enrichment in the blanket, looking for the best configuration in terms of TBR.
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158692</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multi-species collisions for delta-f gyrokinetic simulations: Implementation and verification with GENE</title>
<link>https://hdl.handle.net/1721.1/158691</link>
<description>Multi-species collisions for delta-f gyrokinetic simulations: Implementation and verification with GENE
Crandall, P.; Jarema, D.; Doerk, H.; Pan, Qingjiang; Merlo, G.; Görler, T.; Bañón Navarro, A.; Told, D.; Maurer, M.; Jenko, F.
A multi-species linearized collision operator based on the model developed by Sugama et al. has been implemented in the nonlinear gyrokinetic code, GENE. Such a model conserves particles, momentum, and energy to machine precision, and is shown to have negative definite free energy dissipation characteristics, satisfying Boltzmann's H-theorem, including for realistic mass ratios. Finite Larmor Radius (FLR) effects have also been implemented into the local version of the code. For the global version of the code, the collision operator has been developed to allow for block-structured velocity space grids, making collisional global simulations computationally tractable. The validity of the collision operator has been demonstrated by relaxation and conservation tests, as well as appropriate benchmarks. The newly implemented operator shall be used in future simulations to study magnetically confined fusion plasma turbulence and transport in more extreme regions with higher collisionality.
Submitted for publication in Computer Physics Communications
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158691</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>On fault-mode phenomenon in no-insulation superconducting magnets: A preventive approach</title>
<link>https://hdl.handle.net/1721.1/158690</link>
<description>On fault-mode phenomenon in no-insulation superconducting magnets: A preventive approach
Dong, Fangliang; Park, Dongkeun; Lee, Wooseung; Hao, Luning; Huang, Zhen; Bascuñán, Juan; Jin, Zhijian; Iwasa, Yukikazu
Here, we present experimental and analytical results of a preventive approach applied to a fault-mode phenomenon caused by electrodes or power-source failure in a no-insulation (NI) high-temperature superconducting REBa2Cu3O7−x (REBCO, RE = rare earth) magnet. It is generally agreed that the NI magnets, at least those of laboratory scale, are self-protected from overheating and, therefore, from quenching, chiefly because of turn-to-turn current bypassing unique to NI. However, these NI magnets do experience unexpected quenches, e.g., when the current through the magnet suddenly drops due to the aforementioned fault-mode phenomenon. Here, we report this phenomenon of a sudden-discharging-triggered quench of an NI REBCO coil, conduction-cooled, and operated at 4.2 K. We also present our preventive approach for this phenomenon that relies on an appropriately designed resistor shunted across the coil terminals. With this shunt resistor, a quench was prevented by suppressing the quench initiating turn-to-turn heat and induced overcurrent within the NI winding, and the coil current decayed safely.
Submitted for publication in Applied Physics Letters
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158690</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comment on 'Evolution Equations of Nonlinearly Permissible, Coherent Hole Structures Propagating Persistently in Collisionless Plasmas'</title>
<link>https://hdl.handle.net/1721.1/158689</link>
<description>Comment on 'Evolution Equations of Nonlinearly Permissible, Coherent Hole Structures Propagating Persistently in Collisionless Plasmas'
Hutchinson, Ian H.
Recent critical remarks, published in "Annalen der Physik", about the present author's analysis of electron and ion holes and their stability are addressed and shown to be misunderstandings and misrepresentations.
Submitted for publication in Annalen der Physik
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158689</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reflection and transmission of electromagnetic pulses at a planar dielectric interface -- theory and quantum lattice simulations</title>
<link>https://hdl.handle.net/1721.1/158688</link>
<description>Reflection and transmission of electromagnetic pulses at a planar dielectric interface -- theory and quantum lattice simulations
Ram, Abhay K.; Vahala, George; Vahala, Linda; Soe, Min
There is considerable interest in the application of quantum information science to advance computations in plasma physics. A particular point of curiosity is whether it is possible to take advantage of quantum computers to speed up numerical simulations relative to conventional computers. Many of the topics in fusion plasma physics are classical in nature. In order to implement them on quantum computers, a classical problem must be couched in the language of quantum mechanics. Electromagnetic waves are routinely used in fusion experiments to heat a plasma or to generate currents in the plasma. The propagation of electromagnetic waves is described by Maxwell equations with an appropriate description of the plasma as a dielectric medium. Before advancing to the tensor dielectric of a magnetized plasma, this paper considers electromagnetic wave propagation in a one-dimensional inhomogeneous scalar dielectric. The classic theory of scattering of plane electromagnetic waves at a planar interface, separating two different dielectric media, leads to Fresnel equations for reflection and transmission coefficients. In contrast to plane waves, this paper is on the reflection and transmission of a spatially confined electromagnetic pulse. Following an analytical formulation for the scattering of a Gaussian pulse, it is deduced that the maximum transmission coefficient for a pulse is $\sqrt{n_2/n_1}$ times that for a plane wave, with the incident and transmitted pulses propagating in dielectric media with refractive indices $n_1$ and $n_2$, respectively. The analytical theory is complemented by numerical simulations using a quantum lattice algorithm for Maxwell equations.
The algorithm, based on the Riemann-Silberstein-Weber representation of the electromagnetic fields and expressed in terms of qubits, is an interleaved sequence of entangling operators at each lattice site and unitary streaming operators which transmit information from one site to an adjacent lattice site. Besides substantiating results from the theory for Gaussian pulses, numerical simulations show their validity for non-Gaussian pulses. Apart from their time-asymptotic forms, the simulations display an interplay between the incident, reflected, and transmitted pulses in the vicinity of the transition region between two dielectric media.
Submitted for publication in AIP Advances
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158688</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intrinsic fluence non-uniformity in D3He backlit proton radiography</title>
<link>https://hdl.handle.net/1721.1/158687</link>
<description>Intrinsic fluence non-uniformity in D3He backlit proton radiography
Johnson, Timothy M.; Shan, J.; Kishimori, R.; Cufari, M.J.; Adrian, Patrick J.; Buschmann, B.; Chang, C.W.; Dannhoff, S.G.; DeVault, A.; Evans, Tucker E.; Foo, B.; Kunimune, Justin H.; Lawrence, Y.; Pearcy, Jacob A.; Reichelt, Benjamin L.; Russell, L.; Sutcliffe, G.D.; Vanderloo, N.L.; Vargas, J.; Wink, Christopher W.; Gatu Johnson, Maria; Séguin, Frederick H.; Petrasso, Richard D.; Frenje, Johan A.; Li, Chi-Kang
Proton radiography is an essential diagnostic for studying magnetic fields in high energy density physics experiments. Protons are born in a fusion implosion, traverse the plasma, and are detected on CR-39 solid state nuclear track detectors. Here, it is shown that there is an intrinsic non-uniformity in ∼ 15 MeV D3He proton radiography data. The increasing angle between the proton trajectory and the center of the detector results in the proton traveling through more detector stack material. As the protons travel through more material and lose energy, the proton energy spectrum gets wider. Protons at the lower end of the spectrum can therefore be lost. The nominal filtering results in protons being ranged out at large angle, causing the intrinsic non-uniformity. This angular effect is confirmed with both OMEGA experiments and Geant4 simulations. It is found that reducing the filtering between the pieces of CR-39 in the detector stack mitigates this effect. Results from accelerator experiments show that this reduced filtering does not impact the detection efficiency of the CR-39. Accounting for this intrinsic fluence non-uniformity is essential for magnetic field reconstruction techniques using proton radiographs.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158687</guid>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measurements of low-mode asymmetries in the areal density of laser-direct-drive deuterium–tritium cryogenic implosions on OMEGA using neutron spectroscopy</title>
<link>https://hdl.handle.net/1721.1/158686</link>
<description>Measurements of low-mode asymmetries in the areal density of laser-direct-drive deuterium–tritium cryogenic implosions on OMEGA using neutron spectroscopy
Forrest, C.J.; Crilly, A.; Schwemmlein, A.; Appelbe, B.; Gatu Johnson, Maria; Betti, R.; Knauer, J.P.; Glebov, V. Yu.; Gopalaswamy, V.; Mannion, O.M.; Mohamed, Z.L.; Radha, P.B.; Regan, S.P.; Stoeckl, C.; Theobald, W.
Areal density is one of the key parameters that determines the confinement time in inertial confinement fusion experiments, and low-mode asymmetries in the compressed fuel are detrimental to the implosion performance. The energy spectra from the scattering of the primary deuterium–tritium (DT) neutrons off the compressed cold fuel assembly are used to investigate low-mode nonuniformities in direct-drive cryogenic DT implosions at the Omega Laser Facility. For spherically symmetric implosions, the shape of the energy spectrum is primarily determined by the elastic and inelastic scattering cross sections for both neutron-deuterium and neutron-tritium kinematic interactions. Two highly collimated lines of sight, which are positioned at nearly orthogonal locations around the OMEGA target chamber, record the neutron time-of-flight signal in the current mode. An evolutionary algorithm is being used to extract a model-independent energy spectrum of the scattered neutrons from the experimental neutron time-of-flight data and is used to infer the modal spatial variations (l = 1) in the areal density. Experimental observations of the low-mode variations of the cold-fuel assembly (ρL0 + ρL1) show good agreement with a recently developed model, indicating a departure from the spherical symmetry of the compressed DT fuel assembly. Another key signature that has been observed in the presence of a low-mode variation is the broadening of the kinematic end-point due to the anisotropy of the dense fuel conditions.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158686</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>One and Two Dimensional Quantum Lattice Algorithms for Maxwell Equations in Inhomogeneous Scalar Dielectric Media I: Theory</title>
<link>https://hdl.handle.net/1721.1/158685</link>
<description>One and Two Dimensional Quantum Lattice Algorithms for Maxwell Equations in Inhomogeneous Scalar Dielectric Media I: Theory
Vahala, George; Vahala, Linda; Soe, Min; Ram, Abhay K.
A quantum lattice algorithm (QLA) is developed for Maxwell equations in scalar dielectric media using the Riemann-Silberstein representation on a Cartesian grid. For x-dependent and y-dependent dielectric inhomogeneities, the corresponding QLA requires a minimum of 8 qubits/spatial lattice site. This is because the corresponding Pauli spin matrices have off-diagonal components which permit the local collisional entanglement of these qubits. However, z-dependent inhomogeneities require a QLA with a minimum of 16 qubits/lattice site since the Pauli spin matrix σz is diagonal. For 2 dimensional inhomogeneities, one can readily couple the 8-8 qubit schemes for x-y variations. z-x and y-z variations can be treated by either a 16-8 qubit scheme or a 16-16 qubit representation.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158685</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental achievement and signatures of ignition at the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158684</link>
<description>Experimental achievement and signatures of ignition at the National Ignition Facility
Zylstra, A.B.; Kritcher, A.L.; Hurricane, O.A.; Callahan, D.A.; Ralph, J.E.; Casey, D.T.; Pak, A.; Landen, O.L.; Bachmann, B.; Baker, K.L.; Berzak Hopkins, L.; Bhandarkar, S.D.; Biener, J.; Bionta, R.M.; Birge, N.W.; Braun, T.; Briggs, T.M.; Celliers, P.M.; Chen, H.; Choate, C.; Clark, D.S.; Divol, L.; Döppner, T.; Fittinghoff, D.; Edwards, M.J.; Gatu Johnson, Maria; Gharibyan, N.; Haan, S.; Hahn, K.D.; Hartouni, E.; Hinkel, D.E.; Ho, D.D.; Hohenberger, M.; Holder, J.P.; Huang, H.; Izumi, N.; Jeet, J.; Jones, O.; Kerr, S.M.; Khan, S.F.; Geppert Kleinrath, H.; Geppert Kleinrath, V.; Kong, C.; Lamb, K.M.; Le Pape, S.; Lemos, N.C.; Lindl, J.D.; MacGowan, B.J.; Mackinnon, A.J.; MacPhee, A.G.; Marley, E.V.; Meaney, K.; Millot, M.; Moore, A.S.; Newman, K.; Di Nicola, J.-M. G.; Nikroo, A.; Nora, R.; Patel, P.K.; Rice, N.G.; Rubery, M.S.; Sater, J.; Schlossberg, D.J.; Sepke, S.M.; Sequoia, K.; Shin, S.J.; Stadermann, M.; Stoupin, S.; Strozzi, D.J.; Thomas, C.A.; Tommasini, R.; Trosseille, C.; Tubman, E.R.; Volegov, P.L.; Weber, C.R.; Wild, C.; Woods, D.T.; Yang, S.T.; Young, C.V.
An inertial fusion implosion on the National Ignition Facility, conducted on August 8, 2021 (N210808), recently produced more than a megajoule of fusion yield and passed Lawson’s criterion for ignition [Phys. Rev. Lett. 129, 075001 (2022)]. We describe the experimental improvements that enabled N210808 and present the first experimental measurements from an igniting plasma in the laboratory. Ignition metrics like the product of hot-spot energy and pressure squared, in the absence of self-heating, increased by ∼35%, leading to record values and an enhancement from previous experiments in the hot-spot energy (∼3×), pressure (∼2×), and mass (∼2×). These results are consistent with self-heating dominating other power balance terms. The burn rate increases by an order of magnitude after peak compression, and the hot-spot conditions show clear evidence for burn propagation into the dense fuel surrounding the hot spot. These novel dynamics and thermodynamic properties have never been observed on prior inertial fusion experiments.
Submitted for publication in Physical Review E
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158684</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Asymmetric one-dimensional slow electron holes</title>
<link>https://hdl.handle.net/1721.1/158683</link>
<description>Asymmetric one-dimensional slow electron holes
Hutchinson, Ian H.
Slow solitary positive-potential peaks sustained by a trapped electron deficit in a plasma with asymmetric ion velocity distributions are in principle asymmetric, involving a potential change across the hole. It is shown theoretically how to construct such asymmetric electron holes, thus providing fully consistent solutions of the one-dimensional Vlasov-Poisson equation for a wide variety of prescribed background ion velocity distributions. Because of ion reflection forces experienced by the hole, there is generally only one discrete slow hole velocity that is in equilibrium. Moreover, the equilibrium is unstable unless there is a local minimum in the ion velocity distribution, in which the hole velocity then resides. For stable equilibria with Maxwellian electrons, the potential drop across the hole is shown to be $\Delta\phi = (2/9)\, f''' \,(T_e/e)\,(e\psi/m_i)^2$, where $\psi$ is the hole peak potential, $f'''$ is the third derivative of the background ion velocity distribution function at the hole velocity, and $T_e$ the electron temperature. Potential asymmetry is small for holes of the amplitudes usually observed, ≲ 0.5 $T_e/e$.
Submitted for publication in Physical Review E
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158683</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>How can slow plasma electron holes exist?</title>
<link>https://hdl.handle.net/1721.1/158682</link>
<description>How can slow plasma electron holes exist?
Hutchinson, Ian H.
One-dimensional analysis is presented of solitary positive potential plasma structures whose velocity lies within the range of ion distribution velocities that are strongly populated: "slow" electron holes. It is shown that, to avoid self-acceleration of the hole away from the ion velocities, the hole velocity must lie within a local minimum in the ion velocity distribution. Quantitative criteria for the existence of stable equilibria are obtained. The background ion distributions required are generally stable to ion-ion modes unless the electron temperature is much higher than the ion temperature. Since slow positive potential solitons are shown not to be possible without a significant contribution from trapped electrons, it seems highly likely that such observed slow potential structures are indeed electron holes.
Submitted for publication in Physical Review E
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158682</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Isotope effects on intrinsic rotation in hydrogen, deuterium and tritium plasmas</title>
<link>https://hdl.handle.net/1721.1/158681</link>
<description>Isotope effects on intrinsic rotation in hydrogen, deuterium and tritium plasmas
Nave, M.F.F.; Delabie, E.; Ferreira, J.; Garcia, J.; King, D.; Lennholm, M.; Lomanowski, B.; Parra, F.; Rodriguez Fernandez, Pablo; Bernardo, J.; Baruzzo, M.; Barnes, M.; Casson, F.; Hillesheim, J.C.; Hubber, A.; Joffrin, E.; Kappatou, A.; Maggi, C.F.; Mauriya, A.; Meneses, L.; Romanelli, M.; Salzedas, F.; JET contributors
The isotope effect on intrinsic rotation was studied at the JET tokamak. Exploiting JET's unique capability to operate with tritium, experiments in hydrogen, deuterium and tritium ohmic plasmas were compared for the first time. Two rotation reversals per isotope type are observed in plasma density scans spanning the linear and the saturated Ohmic confinement regimes. A clear isotope mass dependence is observed at the higher densities: the magnitude of the core rotation depends on isotope mass, with stronger co-current rotation observed in hydrogen. Changes in intrinsic rotation characteristics coexist with stronger thermal energy confinement in tritium.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158681</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Developing high performance RF heating scenarios on the WEST tokamak</title>
<link>https://hdl.handle.net/1721.1/158680</link>
<description>Developing high performance RF heating scenarios on the WEST tokamak
Goniche, M.; Ostuni, V.; Bourdelle, C.; Maget, P.; Artaud, J.F.; Bernard, J.M.; Bobkov, V.; Bucalossi, J.; Clairet, F.; Colas, L.; Desgranges, C.; Delpech, L.; Devynck, P.; Dumont, R.; Ekedahl, A.; Fedorczak, N.; Garcia, J.; Gaspar, J.; Gil, C.; Guillemaut, C.; Gunn, J.; Hillairet, J.; Klepper, C.; Lau, C.; Lerche, E.; Lombard, G.; Manas, P.; Martin, E.H.; Mazon, D.; Meyer, O.; Morales, J.; Moreau, Ph.; Nardon, E.; Nouailletas, R.; Pegourié, B.; Peret, M.; Peysson, Y.; Regal-Mezin, X.; Sabot, R.; Shiraiwa, S.; Urbanzyck, G.; Vermare, L.; Vezinet, D.; Wallace, Greg M.; WEST Team
High power experiments, up to 9.2 MW with LHCD and ICRH, have been carried out in the full-tungsten tokamak WEST. Quasi non-inductive discharges have been achieved, extending the plasma duration to 53 s with stationary conditions, in particular with respect to tungsten contamination. H-mode transitions lasting up to 4 s are obtained, with a weak energy increment since the power crossing the separatrix is close to the threshold. Hot L-mode plasmas (Te(0)&gt;3keV) with a confinement time following the ITER L96 scaling are routinely obtained. The weak aspect ratio dependence of this scaling law is confirmed. Tungsten accumulation is generally not an operational issue on WEST. However, the difficulty of burning through tungsten can prevent access to a hot core plasma in the ramp-up phase, or can lead to a rapid collapse of the central temperature when radiation is enhanced by a slight decrease of the temperature. Apart from a few pulses post-boronization, the plasma radiation is rather high (Prad/Ptot~50%) and is dominated by tungsten. This fraction does not vary as the RF power is ramped up and is quite similar in ICRH and/or LHCD heated plasmas. An estimate of the contribution of the RF antennas to the tungsten contamination of the plasma is given.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158680</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Suppression of first-wall interaction in negative triangularity plasmas on TCV</title>
<link>https://hdl.handle.net/1721.1/158679</link>
<description>Suppression of first-wall interaction in negative triangularity plasmas on TCV
Han, Woonghee; Offeddu, N.; Golfinopoulos, T.; Theiler, C.; Tsui, C.K.; Boedo, J.A.; Marmar, E.S.; TCV Team
Magnetically confined fusion plasmas with negative triangularity (d) exhibit greater L-mode confinement than with positive d. Recent experiments in the TCV and DIII-D tokamaks have correlated the confinement improvement to a reduction of fluctuations within the plasma core. We report on fluctuation measurements in the scrape-off layer (SOL) for −0.61 &lt; d &lt; +0.64 in limited and diverted ohmic L-mode plasmas; these reveal a strong reduction in SOL fluctuation amplitudes at d &lt; −0.25, and, surprisingly, an almost full suppression of plasma interaction with the main-chamber first-wall, which could have important implications for the prospects of using negative d plasmas as a reactor solution. An exploration of several physical mechanisms suggests that a reduced connection length—intrinsic to negative d plasmas—plays a critical role in the origin of this phenomenon.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158679</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A model investigation of the impact of lower hybrid wave scattering angle on current drive profile in EAST and Alcator C-Mod</title>
<link>https://hdl.handle.net/1721.1/158678</link>
<description>A model investigation of the impact of lower hybrid wave scattering angle on current drive profile in EAST and Alcator C-Mod
Baek, Seung Gyou; Biswas, B.; Wallace, Greg M.; Bonoli, Paul T.; Ding, B.J.; Li, M.H.; Li, Y.C.; Wang, Y.F.; Wang, M.; Wu, C.B.; Yan, G.H.; Chen, J.; Zhai, X.; Garofalo, A.M.; Choi, W.; Poli, F.; Shiraiwa, S.
Lower hybrid current drive (LHCD) is beneficial for developing a steady-state operation scenario in a tokamak. This paper conducts a modelling investigation to identify an optimum rotation angle of the initial lower hybrid perpendicular (to the background magnetic field) wavevector for best matching the experimental RF current profile. It is hypothesized that the central RF power deposition widely observed in present-day LHCD experiments arises from wave scattering by turbulence. In a standard model without such interactions, the predicted power deposition profile is generally broad with off-axis peaking, in disagreement with experimental observations. A heuristic approach is adopted that introduces a spectral broadening mechanism by modifying the initial orientation of the perpendicular wavevector. The ray-tracing/Fokker-Planck solver GENRAY/CQL3D is utilized within the python-based pi-scope framework. A focus is given to identifying the perpendicular wavenumber orientation angle with respect to the magnetic surface normal vector at the initial ray location. Our modelling study shows that rotating the perpendicular wavevector in such a way as to increase the initial poloidal component is effective in reproducing the centrally peaked current profile observed in normal shear plasmas on both EAST and C-Mod. These waves can readily be absorbed in the central plasma, which reduces the sensitivity of the power deposition profile to a slight change in the plasma conditions. The same approach is also found to help broaden the off-axis power deposition profile in a reverse-shear EAST plasma, leading to better agreement with the experiment. The results presented here suggest that spectral modification arising from edge density fluctuations in a tokamak may need to be considered in understanding wave propagation and absorption. A further experimental and theoretical/modelling study is vital, since a reverse approach is adopted in this study.
Our work suggests that mitigation or control measures are critical for parasitic effects occurring on the first pass in a reactor regime.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158678</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observations of wall conditioning by means of boron powder injection in DIII-D H-mode plasmas</title>
<link>https://hdl.handle.net/1721.1/158677</link>
<description>Observations of wall conditioning by means of boron powder injection in DIII-D H-mode plasmas
Bortolon, A.; Maingi, R.; Nagy, A.; Ren, J.; Duran, J.D.; Maan, A.; Donovan, D.C.; Boedo, J.A.; Rudakov, D.L.; Hyatt, A.W.; Wilks, Theresa M.; Shafer, M.W.; Samuell, C.M.; Fenstermacher, M.E.; Gilson, E.P.; Lunsford, R.; Mansfield, D.K.; Abrams, T.; Nazikian, R.; DIII-D team
We report observations from the DIII-D tokamak indicating that boron (B) powder injection in tokamak plasmas improves wall conditions similarly to glow discharge boronization (GDB). Isotopically enriched B powder (B11 &gt; 95%) was introduced gravitationally in a sequence of H-mode plasma discharges at rates up to ∼160 mg s−1 for durations up to 3 s. Boron injection to cumulative amounts ≤0.1 g appeared to improve wall conditions similarly to boronization, with indications of reduced wall fueling, reduced recycling at the outer strike point and reduced impurity content at breakdown. Post-mortem analysis of graphite samples exposed to far scrape-off layer plasma fluxes during boron injection confirms the formation of a B-C layer, with average surface composition B:C ∼ 1. The results suggest that injecting boron-rich powders in tokamak plasmas can effectively replenish boron films on carbon plasma facing components to improve wall conditions and extend the duration of the beneficial effects of GDB.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158677</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental studies of plasma-antenna coupling with the JET Alfven Eigenmode Active Diagnostic</title>
<link>https://hdl.handle.net/1721.1/158676</link>
<description>Experimental studies of plasma-antenna coupling with the JET Alfven Eigenmode Active Diagnostic
Tinguely, R. Alex; Puglia, P.G.; Fil, N.; Dowson, S.; Porkolab, Miklos; Dvornova, A.; Fasoli, A.; Fitzgerald, M.; Guillemot, V.; Huysmans, G.T.A.; Maslov, M.; Sharapov, S.; Testa, D.; JET contributors
This paper presents a dedicated study of plasma-antenna (PA) coupling with the Alfven Eigenmode Active Diagnostic (AEAD) in JET. Stable AEs and their resonant frequencies f, damping rates gamma &lt; 0, and toroidal mode numbers n are measured for various PA separations and limiter versus X-point magnetic configurations. Two stable AEs are observed to be resonantly excited at distinct low and high frequencies in limiter plasmas. The values of f and n do not vary with PA separation. However, |gamma| increases with PA separation for the low-f, but not high-f, mode, yet this may be due to slightly different edge conditions. The high-f AE is detected throughout the transition from limiter to X-point configuration, though its damping rate increases; the low-f mode, on the other hand, becomes unidentifiable. The linear resistive MHD code CASTOR is used to simulate the frequency scan of an AEAD-like external antenna. For the limiter pulses, the high-f mode is determined to be an n = 0 GAE, while the low-f mode is likely an n = 2 TAE. During the transition from limiter to X-point configuration, CASTOR indicates that n = 1 and 2 EAEs are excited in the edge gap. These results extend previous experimental studies in JET and Alcator C-Mod; validate the computational work performed by Dvornova et al 2020 Phys. Plasmas 27 012507; and provide guidance for the optimization of PA coupling in upcoming JET energetic particle experiments, for which the AEAD will aim to identify the contribution of alpha particles to AE drive during the DT campaign.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158676</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ion and electron acoustic bursts during anti-parallel magnetic reconnection driven by lasers</title>
<link>https://hdl.handle.net/1721.1/158675</link>
<description>Ion and electron acoustic bursts during anti-parallel magnetic reconnection driven by lasers
Zhang, Shu; Chien, Abraham; Gao, Lan; Ji, Hantao; Blackman, Eric G.; Follett, Russ; Froula, Dustin H.; Katz, Joseph; Daughton, William; Li, Chi-Kang; Birkel, Andrew; Petrasso, Richard D.; Moody, John; Chen, Hui
Magnetic reconnection converts magnetic energy into thermal and kinetic energy in plasma. Among the numerous candidate mechanisms, ion acoustic instabilities driven by the relative drift between ions and electrons (or equivalently, electric current) have been suggested to play a critical role in dissipating magnetic energy in collisionless plasmas. However, their existence and effectiveness during reconnection have not been well understood due to ion Landau damping and difficulties in resolving the Debye length scale in the laboratory. Here we report a sudden onset of ion acoustic bursts measured by collective Thomson scattering in the exhaust of anti-parallel magnetically driven reconnection using high-power lasers. The ion acoustic bursts are followed by electron acoustic bursts with electron heating and bulk acceleration. We reproduce these observations with one- and two-dimensional particle-in-cell simulations in which an electron outflow jet drives ion acoustic instabilities, forming double layers. These layers induce electron two-stream instabilities that generate electron acoustic bursts and energize electrons. Our results demonstrate the importance of ion and electron acoustic dynamics during reconnection when ion Landau damping is ineffective, a condition applicable to a range of astrophysical plasmas including near-Earth space, stellar flares and black hole accretion engines.
Submitted for publication in Nature Physics
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158675</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Yield degradation due to laser drive asymmetry in D3He backlit proton radiography experiments at OMEGA</title>
<link>https://hdl.handle.net/1721.1/158674</link>
<description>Yield degradation due to laser drive asymmetry in D3He backlit proton radiography experiments at OMEGA
Johnson, Timothy M.; Birkel, Andrew; Ramirez, H.E.; Sutcliffe, G.D.; Adrian, Patrick J.; Glebov, V.Yu.; Sio, H.; Gatu Johnson, Maria; Frenje, Johan A.; Petrasso, Richard D.; Li, Chi-Kang
Mono-energetic proton radiography is a vital diagnostic for numerous high-energy-density-physics, inertial-confinement-fusion, and laboratory-astrophysics experiments at OMEGA. With a large number of campaigns executing hundreds of shots, general trends in D3He backlighter performance are statistically observed. Each experimental configuration uses a different number of beams and drive symmetry, causing the backlighter to perform differently. Here, we analyze the impact of these variables on the overall performance of the D3He backlighter for proton-radiography studies. This study finds that increasing laser drive asymmetry can degrade the performance of the D3He backlighter. The results of this study can be used to help guide the design of experiments that use proton radiography.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Thu, 01 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158674</guid>
<dc:date>2021-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A 1D Lyman-alpha Profile Camera for Plasma Edge Neutral Studies  on the DIII-D Tokamak</title>
<link>https://hdl.handle.net/1721.1/158673</link>
<description>A 1D Lyman-alpha Profile Camera for Plasma Edge Neutral Studies  on the DIII-D Tokamak
Rosenthal, Aaron M.; Hughes, Jerry W.; Bortolon, A.; Laggner, F.M.; Wilks, Theresa M.; Vieira, R.; Leccacorvi, R.; Marmar, E.; Nagy, A.; Freeman, C.; Mauzey, D.
A one dimensional, absolutely calibrated pinhole camera system was installed on the DIII-D tokamak to measure edge Lyman-alpha (Ly-a) emission from hydrogenic isotopes, which can be used to infer neutral density and ionization rate profiles. The system is composed of two cameras, each providing a toroidal fan of twenty lines of sight, viewing the plasma edge on the inboard and outboard side of DIII-D. The cameras' views lie in a horizontal plane 77 cm below the midplane. At its tangency radius, each channel provides a radial resolution of approximately 2 cm full width at half maximum (FWHM) with a total coverage of 22 cm. Each camera consists of a rectangular pinhole, Ly-a reflective mirror, narrow-band Ly-a transmission filter, and a 20 channel AXUV photodetector. The combined mirror and transmission filter have a FWHM of 5 nm, centered near the Ly-a wavelength of 121.6 nm, and are capable of rejecting significant, parasitic carbon-III (C-III) emission from intrinsic plasma impurities. To provide a high spatial resolution measurement in a compact footprint, the camera utilizes advanced engineering and manufacturing techniques including 3D printing, high stability mirror mounts, and a novel alignment procedure. Absolutely calibrated, spatially resolved Ly-a brightness measurements utilize a bright, isolated line with low parasitic surface reflections and enable quantitative comparison to modeling to study divertor neutral leakage, main chamber fueling and radial particle transport.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158673</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep modelling of plasma and neutral fluctuations from gas puff turbulence imaging</title>
<link>https://hdl.handle.net/1721.1/158672</link>
<description>Deep modelling of plasma and neutral fluctuations from gas puff turbulence imaging
Mathews, Abhilash; Terry, James L.; Baek, Seung Gyou; Hughes, Jerry W.; Kuang, Adam Q.; LaBombard, Brian; Miller, M.A.; Zweben, S.J.; Stotler, D.; Reiter, D.; Zholobenko, W.; Goto, M.
The role of turbulence in setting boundary plasma conditions is presently a key uncertainty in projecting to fusion energy reactors. To robustly diagnose edge turbulence, we develop and demonstrate a technique to translate brightness measurements of HeI line radiation into local plasma fluctuations via a novel integrated deep learning framework that combines neutral transport physics and collisional radiative theory for the $3^3 D - 2^3 P$ transition in atomic helium. The tenets for experimental validity are reviewed, illustrating that this turbulence analysis for ionized gases is transferable to both magnetized and unmagnetized environments with arbitrary geometries. Based upon fast camera data on the Alcator C-Mod tokamak, we present the first 2-dimensional time-dependent experimental measurements of the turbulent electron density, electron temperature, and neutral density revealing shadowing effects in a fusion plasma using a single spectral line.
Submitted for publication in Review of Scientific Instruments
</description>
<pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158672</guid>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Some comments on unitary qubit lattice algorithms for classical problems</title>
<link>https://hdl.handle.net/1721.1/158671</link>
<description>Some comments on unitary qubit lattice algorithms for classical problems
Anderson, Paul; Finegold-Sachs, Lillian; Vahala, George; Vahala, Linda; Ram, Abhay K.; Soe, Min; Koukoutsis, Efstratios; Hizandis, Kyriakos
A qubit lattice algorithm (QLA), which consists of a set of interleaved unitary collision-streaming operators, is developed for electromagnetic wave propagation in tensor dielectric media. External potential operators are required to handle gradients in the refractive indices, and these operators are typically non-unitary but sparse. A similar problem arises in the QLA for the Korteweg-de Vries equation, as the potential operator that models the KdV nonlinear term is also non-unitary. Several QLAs are presented here that avoid the need for this non-unitary potential operator by perturbing the collision operator. These QLAs are fully unitary.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158671</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>One and Two Dimensional Quantum Lattice Algorithms for Maxwell Equations in Inhomogeneous Scalar Dielectric Media. II: Simulations</title>
<link>https://hdl.handle.net/1721.1/158670</link>
<description>One and Two Dimensional Quantum Lattice Algorithms for Maxwell Equations in Inhomogeneous Scalar Dielectric Media. II: Simulations
Vahala, George; Soe, Min; Vahala, Linda; Ram, Abhay K.
Long time quantum lattice algorithm (QLA) simulations are performed for the multiple reflection-transmission of an initial electromagnetic pulse propagating normally to a boundary layer region joining two media of different refractive index. For these one dimensional (1D) simulations, there is excellent agreement between x-, y- and z- representations, as well as very good agreement with nearly all the standard plane wave boundary condition results for reflection and transmission off a dielectric discontinuity. In the QLA simulation, no boundary conditions are imposed at the continuous, but sharply increasing, dielectric boundary layers. Two dimensional (2D) QLA scattering simulations in the x-z plane are performed for an electromagnetic pulse interacting with a conical dielectric obstacle for the 8-16 qubit model.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158670</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scattering of Radio Frequency Waves by Density Filaments</title>
<link>https://hdl.handle.net/1721.1/158669</link>
<description>Scattering of Radio Frequency Waves by Density Filaments
Ram, Abhay K.; Hizanidis, K.; Bairaktaris, F.; Papadopoulos, A.; Valvis, S.-I.
The edge region and the scrape-off layer of magnetically confined fusion devices, like tokamaks and stellarators, are replete with turbulent plasma that is a mixture of coherent, blob or filament like, structures [1] and incoherent fluctuations [2]. The variation in the density due to turbulence can be comparable to or greater than the ambient density [2]. As part of an overall effort to optimize the efficiency of operation, radio frequency (RF) waves are commonly used for heating fusion plasmas and, in tokamaks, for generating the plasma current needed for confinement and for controlling instabilities. The RF waves are excited by antenna structures that are placed near the wall of a fusion device. In order to deliver energy and momentum to charged particles in the core of fusion plasmas, RF waves have to propagate through the turbulent plasma. In present fusion devices, the scrape-off layer and the edge plasma region are of the order of a few centimeters. In reactor type devices, like ITER, this region is expected to be of the order of tens of centimeters. Since the efficiency of operation of a fusion reactor is of prime importance, it is imperative that we understand the effect of turbulence on RF waves. The fluctuations in density lead to changes in the plasma permittivity. As in conventional electrodynamics, the propagation of RF waves through different dielectric media is subject to reflection, refraction, and diffraction. In this paper, we summarize our theoretical and computational studies on the propagation of RF waves through filamentary structures present in the scrape-off layer.
Submitted for publication in Radiation Effects and Defects in Solids
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158669</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Time-resolved turbulent dynamo in a laser plasma</title>
<link>https://hdl.handle.net/1721.1/158668</link>
<description>Time-resolved turbulent dynamo in a laser plasma
Bott, Archie F.A.; Tzeferacos, Petros; Chen, Laura; Palmer, Charlotte A.J.; Rigby, Alexandra; Bell, Anthony R.; Bingham, Robert; Birkel, Andrew; Graziani, Carlo; Froula, Dustin H.; Katz, Joseph; Koenig, Michel; Kunz, Matthew W.; Li, Chi-Kang; Meinecke, Jena; Miniati, Francesco; Petrasso, Richard D.; Park, Hye-Sook; Remington, Bruce A.; Reville, Brian; Ross, J. Steven; Ryu, Dongsu; Ryutov, Dmitri; Séguin, Fredrick H.; White, Thomas G.; Schekochihin, Alexander A.; Lamb, Donald Q.; Gregori, Gianluca
Understanding magnetic-field generation and amplification in turbulent plasma is essential to account for observations of magnetic fields in the universe. A theoretical framework attributing the origin and sustainment of these fields to the so-called fluctuation dynamo was recently validated by experiments on laser facilities in low-magnetic-Prandtl-number plasmas (Pm&lt;1). However, the same framework proposes that the fluctuation dynamo should operate differently when Pm≳1, the regime relevant to many astrophysical environments such as the intracluster medium of galaxy clusters. This paper reports an experiment that creates a laboratory Pm≳1 plasma dynamo. We provide a time-resolved characterization of the plasma’s evolution, measuring temperatures, densities, flow velocities, and magnetic fields, which allows us to explore various stages of the fluctuation dynamo’s operation on seed magnetic fields generated by the action of the Biermann-battery mechanism during the initial drive-laser target interaction. The magnetic energy in structures with characteristic scales close to the driving scale of the stochastic motions is found to increase by almost three orders of magnitude and saturate dynamically. It is shown that the initial growth of these fields occurs at a much greater rate than the turnover rate of the driving-scale stochastic motions. Our results point to the possibility that plasma turbulence produced by strong shear can generate fields more efficiently at the driving scale than anticipated by idealized magnetohydrodynamics (MHD) simulations of the nonhelical fluctuation dynamo; this finding could help explain the large-scale fields inferred from observations of astrophysical systems.
Submitted for publication in Proceedings of the National Academy of Science
</description>
<pubDate>Mon, 01 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158668</guid>
<dc:date>2021-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Physics of runaway electrons with Shattered Pellet Injection at JET</title>
<link>https://hdl.handle.net/1721.1/158667</link>
<description>Physics of runaway electrons with Shattered Pellet Injection at JET
Reux, C.; Paz-Soldan, C.; Eidietis, N.; Lehnen, M.; Aleynikov, P.; Silburn, S.; Bandaru, V.; Ficker, O.; Hoelzl, M.; Hollmann, E.M.; Jachmich, S.; Joffrin, E.; Lomas, P.J.; Rimini, F.; Baylor, L.; Bleasdale, A.; Calacci, L.; Causa, F.; Carnevale, D.; Coffey, I.; Craven, D.; Dal Molin, A.; de la Luna, E.; De Tommasi, G.; Garcia, J.; Gebhart, T.; Giacomelli, L.; Huber, A.; Khilkevich, E.; Lowry, C.; Macusova, E.; Manzanares, A.; Nocente, M.; Panontin, E.; Papp, G.; Pautasso, G.; Peacock, A.; Plyusnin, V.; Shevelev, A.; Shiraki, D.; Commariva, C.; Sozzi, C.; Sridhar, S.; Sweeney, Ryan; Szepesi, G.; Tinguely, R. Alex; Wilson, J.; JET contributors
Runaway electrons created during tokamak disruptions pose a threat to the reliable operation of future, larger machines. Experiments using Shattered Pellet Injection (SPI) have been carried out at the JET tokamak to investigate ways to prevent their generation or suppress them if avoidance is not sufficient. Avoidance is possible if the SPI contains a sufficiently low fraction of high-Z material, or if it is fired early in advance of a disruption prone to runaway generation. These results are consistent with previous similar findings obtained with Massive Gas Injection. Suppression of an already accelerated beam is not efficient using high-Z material, but deuterium leads to harmless terminations without heat loads. This effect is the combination of a large MHD instability scattering runaway electrons over a large area and the absence of runaway regeneration during the subsequent current collapse, thanks to the flushing of high-Z impurities from the runaway companion plasma. This effect also works in situations where the runaway beam moves upwards and undergoes scraping-off on the wall.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158667</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of Neoclassical Tearing Modes and Toroidal Field Ripple on Lost Alpha Power in the SPARC Tokamak</title>
<link>https://hdl.handle.net/1721.1/158666</link>
<description>Effects of Neoclassical Tearing Modes and Toroidal Field Ripple on Lost Alpha Power in the SPARC Tokamak
Braun, A.E.; Kramer, G.J.; Tinguely, R. Alex; Scott, S.D.; Sweeney, Ryan
Using the SPIRAL Monte Carlo, full particle-orbit simulation code [Kramer PPCF 2013], we investigate the effects of neoclassical tearing modes (NTMs) and toroidal field (TF) ripple on alpha power losses during steady-state operation of the SPARC primary reference discharge [Creely JPP 2020, Rodriguez-Fernandez JPP 2020]. Model perturbations for TF ripple and the m/n = 2/1 and 3/2 NTMs, with exaggerated widths selected based on an H-mode plasma approaching thermal quench, are added to a simulated SPARC magnetic equilibrium through which marker particles are tracked. The 3/2 and 2/1 NTMs are located at ρpol ∼ 0.76 and ρpol ∼ 0.86 respectively, well positioned to increase alpha particle transport into and within an outer lossy region of the plasma beyond ρpol ∼ 0.8 where over 95% of lost alpha particles are born [Scott JPP 2020]. Total alpha power losses are shown to increase modestly from 1.73% lost at a minimum to 2.34% lost at a maximum, and alpha particle surface power densities form localized hotspots on the first wall near the low-field side midplane due to NTMs and TF ripple. We establish a conservative upper limit for first-wall alpha surface power densities on a toroidally symmetric wall for typical, flattop operation and motivate the consideration of NTMs in the design of three dimensional limiter surfaces for SPARC.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158666</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantum Computing Perspective for Electromagnetic Wave Propagation in Cold Magnetized Plasma</title>
<link>https://hdl.handle.net/1721.1/158665</link>
<description>Quantum Computing Perspective for Electromagnetic Wave Propagation in Cold Magnetized Plasma
Koukoutsis, Efstratios; Hizanidis, Kyriakos; Vahala, George; Soe, Min; Vahala, Linda; Ram, Abhay K.
Electromagnetic waves are an inherent part of all plasmas - laboratory fusion plasmas or astrophysical plasmas. The conventional methods for studying properties of electromagnetic waves rely on discretization of Maxwell equations suitable for implementation on classical, present-day computers. The traditional methodology is not efficient for quantum computing implementation - a future computational resource offering a tantalizing possibility of enormous speed-up and a significant reduction in computational cost. This paper addresses two topics relevant to implementing Maxwell equations on a quantum computer. The first is formulating a quantum Schrödinger representation of Maxwell equations for wave propagation in a cold, inhomogeneous, and magnetized plasma. This representation admits unitary, energy-preserving evolution and conveniently lends itself to appropriate discretization for a quantum computer. Riding on the coattails of these results, the second topic is developing a sequence of unitary operators which form the basis for a qubit lattice algorithm (QLA). The QLA, suitable for quantum computers, can be implemented and tested on existing classical computers for accuracy as well as scaling of computational time with the number of available processors. To illustrate the QLA for Maxwell equations, results are presented from a time-evolving, full-wave simulation of propagation and scattering of an electromagnetic wave packet by a non-dispersive dielectric medium localized in space.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158665</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stability and Transport of Gyrokinetic Critical Pedestals</title>
<link>https://hdl.handle.net/1721.1/158664</link>
<description>Stability and Transport of Gyrokinetic Critical Pedestals
Parisi, Jason; Oakleigh Nelson, Andrew; Guttenfelder, Walter; Gaur, Rahul; Berkery, John W.; Kaye, Stanley M.; Barada, Kshitish Kumar; Clauser, Cesar F.; Diallo, Ahmed; Hatch, David R.; Kleiner, Andreas; Lampert, Mate; Macwan, Tanmay; Mendard, Jonathan E.
A gyrokinetic threshold model for pedestal width-height scaling prediction is applied to multiple devices. A shaping and aspect-ratio scan is performed on NSTX equilibria, finding $\Delta_{\mathrm{ped}} = 0.92 A^{1.04} \kappa^{-1.24} 0.38^{\delta} \beta_{\theta,\mathrm{ped}}^{1.05}$ for the wide-pedestal branch with pedestal width $\Delta_{\mathrm{ped}}$, aspect ratio $A$, elongation $\kappa$, triangularity $\delta$, and normalized pedestal height $\beta_{\theta,\mathrm{ped}}$. A width-transport scaling is found to vary significantly depending on whether pedestal height is varied at fixed density or at fixed temperature, showing how fueling and heating sources affect the pedestal density and temperature profiles for kinetic-ballooning-mode (KBM) limited profiles. For an NSTX equilibrium, at fixed density the wide branch is $\Delta_{\mathrm{ped} } = 0.028 \left(q_e/\Gamma_e - 1.7 \right)^{1.5} \sim \eta_e ^{1.5}$ and at fixed temperature $\Delta_{\mathrm{ped} } = 0.31 \left(q_e/\Gamma_e - 4.7 \right)^{0.85} \sim \eta_e ^{0.85}$, where $q_e$ and $\Gamma_e$ are turbulent electron heat and particle fluxes and $\eta_e = \nabla \ln T_e / \nabla \ln n_e$ for electron temperature $T_e$ and density $n_e$. Pedestals close to the KBM limit are shown to have modified turbulent transport coefficients compared to strongly driven KBMs. The role of flow shear is studied as a width-height scaling constraint and pedestal saturation mechanism for a standard and a lithiated wide-pedestal discharge. Finally, the stability, transport, and flow-shear constraints are combined and examined for an NSTX experiment.
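As a purely numerical illustration (not from the paper), the quoted wide-pedestal scaling can be evaluated directly; the input values below are arbitrary placeholders, not NSTX data:

```python
# Hypothetical sketch: evaluating the wide-pedestal width-height scaling
# quoted in the abstract. Exponents come from the fitted scaling above;
# all input values are illustrative placeholders only.

def pedestal_width(A, kappa, delta, beta_theta_ped):
    """Delta_ped = 0.92 * A^1.04 * kappa^-1.24 * 0.38^delta * beta^1.05."""
    return 0.92 * A**1.04 * kappa**-1.24 * 0.38**delta * beta_theta_ped**1.05

# Doubling the normalized pedestal height at fixed shaping increases the
# width by a factor of 2^1.05, i.e. nearly linearly.
w1 = pedestal_width(A=1.5, kappa=2.4, delta=0.5, beta_theta_ped=0.15)
w2 = pedestal_width(A=1.5, kappa=2.4, delta=0.5, beta_theta_ped=0.30)
print(w1, w2)
```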
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158664</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Summary report of the 4th IAEA Technical Meeting on Fusion Data Processing, Validation and Analysis (FDPVA)</title>
<link>https://hdl.handle.net/1721.1/158663</link>
<description>Summary report of the 4th IAEA Technical Meeting on Fusion Data Processing, Validation and Analysis (FDPVA)
Gonzalez de Vincente, S.M.; Mazon, D.; Xu, M.; Pinches, S.; Churchill, M.; Dinklage, A.; Fischer, R.; Murari, A.; Rodriguez Fernandez, Pablo; Stillerman, J.; Vega, J.; Verdoolaege, G.
The objective of the fourth Technical Meeting on Fusion Data Processing, Validation and Analysis was to provide a platform during which a set of topics relevant to fusion data processing, validation and analysis could be discussed with a view to extrapolating needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. This paper presents the recent progress and achievements in the domain of plasma diagnostics and synthetic diagnostics data analysis (including image processing, regression analysis, inverse problems, deep learning, machine learning, big data and physics-based models for control) reported at the meeting. The progress in these areas highlights trends observed in current major fusion confinement devices. A special focus is dedicated to data analysis requirements for ITER and DEMO, with particular attention paid to Artificial Intelligence for automation and for improving the reliability of control processes.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158663</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Stability analysis of alpha driven toroidal Alfvén eigenmodes observed in JET deuterium-tritium internal transport barrier plasmas</title>
<link>https://hdl.handle.net/1721.1/158662</link>
<description>Stability analysis of alpha driven toroidal Alfvén eigenmodes observed in JET deuterium-tritium internal transport barrier plasmas
Fitzgerald, M.; Dumont, R.; Keeling, D.; Mailloux, J.; Sharapov, S.; Dreval, M.; Figueiredo, A.; Coelho, R.; Ferreira, J.; Rodrigues, P.; Nabais, F.; Borba, D.; Stancar, Z.; Szepesi, G.; Tinguely, R. Alex; Puglia, P.G.; Oliver, H.J.C.; Kiptily, V.; Baruzzo, M.; Lennholm, M.; Siren, P.; Garcia, J.; Maggi, C.F.; JET contributors
A Toroidal Alfvén eigenmode (TAE) has been observed to be driven by alpha particles in a JET deuterium-tritium internal transport barrier plasma. The observation occurred 50 ms after the removal of neutral beam injection (NBI) heating. The mode is observed on magnetics, soft X-ray, interferometry and reflectometry measurements. We present detailed stability calculations using a similar tool set validated during deuterium-only discharges. These calculations strongly support the conclusion that the observed mode is a TAE, and that this mode was destabilized by alpha particles. Non-ideal effects from the bulk plasma are interpreted as responsible for suppressing the majority of TAEs that were also driven by alpha particles, but the mode that matches the observations is predicted to be exceptional in the weakness of these non-ideal effects. This mode, located far from the core on the outboard midplane, is found to be driven by both trapped and passing particles despite alpha particles originating in the core.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158662</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Toroidal Alfven eigenmodes observed in low power JET deuterium-tritium plasmas</title>
<link>https://hdl.handle.net/1721.1/158661</link>
<description>Toroidal Alfven eigenmodes observed in low power JET deuterium-tritium plasmas
Oliver, H.J.C.; Sharapov, S.E.; Stancar, Z.; Fitzgerald, M.; Tholerus, E.; Breizman, B.; Dreval, M.; Ferreira, J.; Figueiredo, A.; Garcia, J.; Hawkes, N.; Keeling, D.L.; Puglia, P.G.; Rodrigues, P.; Tinguely, R. Alex; JET contributors
The Joint European Torus (JET) recently carried out an experimental campaign using a plasma consisting of both deuterium (D) and tritium (T). We observed a high-frequency mode using a reflectometer and an interferometer in a D-T plasma heated with low power neutral beam injection, PNBI = 11.6 MW. This mode was observed at a frequency f = 156 kHz and was located deep in the plasma. The observed mode was identified as a toroidal Alfven eigenmode (TAE) using the linear MHD code, MISHKA. The stability of 21 modes that match experimental measurements was investigated. Beam ions and fusion-born alpha particles were modelled using the full orbit particle tracking code LOCUST, which produces smooth distribution functions suitable for stability calculations without analytical fits or the use of moments. We calculated the stability of the 21 candidate modes using the HALO code, which models the wave-particle interaction. These calculations revealed that beam ions can drive TAEs with toroidal mode numbers n ≥ 8 with linear growth rates γd/ω ∼ 1%, while TAEs with n &lt; 8 are damped by the beam ion population. This finding was supported by a simple analytical model. Alpha particles drive modes with significantly smaller linear growth rates, γα/ω ≲ 0.1% due to the low alpha power generated almost exclusively by beam-thermal fusion reactions. Non-ideal effects were calculated using complex resistivity in the CASTOR code, leading to an assessment of radiative, collisional, and continuum damping for all 21 candidate modes. Ion Landau damping was modelled using Maxwellian distribution functions for bulk D and T ions in HALO. Radiative damping, the dominant damping mechanism, suppresses modes with high toroidal mode numbers. Comparing the drive from energetic particles with damping from thermal particles, we find all but one of the candidate modes are damped. 
The single net-driven n = 9 TAE, with a net growth rate γ/ω = 0.02%, matches experimental observations with a lab frequency f = 163 kHz and location R = 3.31 m. The TAE was driven by co-passing particles through the v∥ = vA/5 resonance, with additional sideband resonances contributing significant drive.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158661</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Comparison of Core Ar^17+ and Mo^32+ Toroidal Rotation in C-Mod Plasmas</title>
<link>https://hdl.handle.net/1721.1/158660</link>
<description>Comparison of Core Ar^17+ and Mo^32+ Toroidal Rotation in C-Mod Plasmas
Rice, John E.; Angioni, C.; Cao, N.M.; Reinke, M.L.
Core (r/a &lt; 0.5) toroidal rotation from argon (Ar^17+, 40 AMU) and molybdenum (Mo^32+, 96 AMU) ions has been compared in C-Mod tokamak plasmas over a wide range of operating conditions and confinement schemes, including Ohmic L-mode in the linear and saturated regimes, ICRF heated I-mode and H-mode, as well as in discharges with induced locked modes and with external current and rotation drive. In all cases the velocities of the two impurities are identical to within about 5%, over a range from -60 to +80 km/s. This is in general agreement with the predictions of neoclassical theory.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158660</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Runaway electron deconfinement in SPARC and DIII-D by a passive 3D coil</title>
<link>https://hdl.handle.net/1721.1/158659</link>
<description>Runaway electron deconfinement in SPARC and DIII-D by a passive 3D coil
Izzo, V.A.; Pusztai, I.; Särkimäki, K.; Sundström, A.; Garnier, D.; Weisberg, D.; Tinguely, R. Alex; Paz-Soldan, C.; Granetz, R.S.; Sweeney, Ryan
The operation of a 3D coil (passively driven by the current quench loop voltage) for the deconfinement of runaway electrons is modeled for disruption scenarios in the SPARC and DIII-D tokamaks. Nonlinear MHD modeling is carried out with the NIMROD code including time-dependent magnetic field boundary conditions to simulate the effect of the coil. Further modeling in some cases uses the ASCOT5 code to calculate advection and diffusion coefficients for runaway electrons based on the NIMROD-calculated fields, and the DREAM code to compute the runaway evolution in the presence of these transport coefficients. Compared with similar modeling in Tinguely et al [2021 Nucl. Fusion 61 124003], considerably more conservative assumptions are made with the ASCOT5 results, zeroing low levels of transport, particularly in regions in which closed flux surfaces have reformed. Of three coil geometries considered in SPARC, only the n = 1 coil is found to have sufficient resonant components to suppress the runaway current growth. Without the new conservative transport assumptions, full suppression of the RE current is maintained when the TQ MHD is included in the simulation or when the RE current is limited to 250 kA, but when transport in closed flux regions is fully suppressed, these scenarios allow RE beams on the order of 1-2 MA to appear. Additional modeling is performed to consider the effects of the close ideal wall. In DIII-D, the current quench is modeled for both limited and diverted equilibrium shapes. In the limited shape, the onset of stochasticity is found to be insensitive to the coil current amplitude and governed largely by the evolution of the safety-factor profile. In both devices, prediction of the q-profile evolution is seen to be critical to predicting the later-time effects of the coil.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Fri, 01 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158659</guid>
<dc:date>2022-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simultaneous measurements of unstable and stable Alfven Eigenmodes in JET</title>
<link>https://hdl.handle.net/1721.1/158658</link>
<description>Simultaneous measurements of unstable and stable Alfven Eigenmodes in JET
Tinguely, R. Alex; Gonzalez-Martin, J.; Puglia, P.G.; Fil, N.; Dowson, S.; Porkolab, Miklos; Kumar, I.; Podestà, M.; Baruzzo, M.; Fasoli, A.; Kazakov, Ye.O.; Nave, M.F.F.; Nocente, M.; Ongena, J.; Stancar, Z.; JET Contributors
In this paper, we report the novel experimental observation of both unstable and stable Toroidicity-induced Alfven Eigenmodes (TAEs) measured simultaneously in a JET tokamak plasma. The three-ion-heating scheme (D-DNBI-3He) is employed to accelerate deuterons to MeV energies, thereby destabilizing TAEs with toroidal mode numbers n = 3-5, each decreasing in mode amplitude. At the same time, the Alfven Eigenmode Active Diagnostic resonantly excites a stable n = 6 TAE with total normalized damping rate ~1-4%. The hybrid kinetic-MHD codes NOVA-K and MEGA both find eigenmodes with similar frequencies, mode structures, and radial locations as in experiment. NOVA-K demonstrates good agreement with the n = 3, 4, and 6 TAEs, matching the damping rate of the n = 6 mode within uncertainties and identifying radiative damping as the dominant contribution. Improved agreement is found with MEGA for all modes: the unstable n = 3-5 and stable n = 2, 6 modes, with the latter two stabilized by higher intrinsic damping and lower fast ion drive, respectively. While some discrepancies remain to be resolved, this unique validation effort gives us confidence in TAE stability predictions for future fusion devices.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158658</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diverted negative triangularity plasmas on DIII-D: The benefit of high confinement without the liability of an edge pedestal</title>
<link>https://hdl.handle.net/1721.1/158657</link>
<description>Diverted negative triangularity plasmas on DIII-D: The benefit of high confinement without the liability of an edge pedestal
Marinoni, Alessandro; Austin, M.E.; Hyatt, A.W.; Saarelma, S.; Scotti, F.; Yan, Z.; Chrystal, C.; Coda, S.; Glass, F.; Hanson, J.M.; McLean, A.G.; Pace, D.C.; Paz-Soldan, C.; Petty, C.C.; Porkolab, Miklos; Schmitz, L.; Sciortino, Francesco; Smith, S.P.; Thome, K.E.; Turco, F.; DIII-D Team
Diverted discharges at negative triangularity on the DIII-D tokamak sustain normalized confinement and pressure levels typical of standard H-mode scenarios (H98y2~1, βN~3) without developing an edge pressure pedestal, despite the auxiliary power far exceeding the L → H power threshold expected from conventional scaling laws. The power degradation of confinement is substantially weaker than the ITER-89P scaling, resulting in a confinement factor that improves with increasing auxiliary power. The absence of the edge pedestal is beneficial in several respects, such as eliminating the need for active mitigation or suppression of edge localized modes, lowering impurity retention, and yielding a reconstructed scrape-off layer heat flux width at the mid-plane that exceeds the ITPA multi-machine scaling law by up to 50%. Together with the technological advantages granted by placing the divertor at larger radii, plasmas at negative triangularity without an edge pedestal feature both core confinement and power handling characteristics that are potentially suitable for operation in future fusion reactors.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158657</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>WEST actively cooled load resilient ion cyclotron resonance heating system results</title>
<link>https://hdl.handle.net/1721.1/158656</link>
<description>WEST actively cooled load resilient ion cyclotron resonance heating system results
Hillairet, J.; Mollard, P.; Colas, L.; Helou, W.; Urbanczyk, G.; Bernard, J.-M.; Delaplanche, J.-M.; Durand, F.; Faure, N.; Garibaldi, P.; Lombard, G.; Bourdelle, C.; Desgranges, C.; Delmas, E.; Dumont, R.; Ekedahl, A.; Ferlay, F.; Goniche, M.; Guillemaut, C.; Hoang, G.T.; Maget, P.; Volpe, R.; Song, Y.; Yang, Q.; Chen, Z.; Wang, Y.; Xu, H.; Yuan, S.; Zhao, Y.; Durodie, F.; Lerche, E.; Ragona, R.; Bertelli, N.; Ono, M.; Shiraiwa, S.; Bobkov, V.; Klepper, C.; Lau, C.; Martin, E.; Lu, B.; Maggiora, R.; Milanesio, D.; Vulliez, K.; Wallace, Greg W.; WEST Team
Three identical new WEST ion cyclotron resonance heating (ICRH) antennas have been designed, assembled then commissioned on plasma from 2013 to 2019. The WEST ICRH system is both load-resilient and compatible with long-pulse operations. The three antennas have been successfully operated together on plasma in 2019 and 2020, with up to 5.8 MW of coupled power. The load resilience capability has been demonstrated and the antenna feedback controls for phase and matching have been developed. The breakdown detection systems have been validated and successfully protected the antennas. The use of ICRH in combination with lower hybrid has triggered the first high confinement mode transitions identified on WEST.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158656</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>A novel measurement of marginal Alfven Eigenmode stability during high power auxiliary heating in JET</title>
<link>https://hdl.handle.net/1721.1/158655</link>
<description>A novel measurement of marginal Alfven Eigenmode stability during high power auxiliary heating in JET
Tinguely, R. Alex; Fil, N.; Puglia, P.G.; Dowson, S.; Porkolab, Miklos; Guillemot, V.; Podestà, M.; Baruzzo, M.; Dumont, R.; Fasoli, A.; Fitgerald, M.; Kazakov, Ye.O.; Nave, M.F.F.; Nocente, M.; Ongena, J.; Sharapov, S.E.; Stancar, Z.; JET Contributors
The interaction of Alfven Eigenmodes (AEs) and energetic particles is one of many important factors determining the success of future tokamaks. In JET, eight in-vessel antennas were installed to actively probe stable AEs with frequencies ranging from 25 to 250 kHz and toroidal mode numbers |n| &lt; 20. During the 2019-2020 deuterium campaign, almost 7500 resonances and their frequencies f0, net damping rates \gamma &lt; 0, and toroidal mode numbers were measured in almost 800 plasma discharges. From a statistical analysis of this database, continuum and radiative damping are inferred to increase with edge safety factor, edge magnetic shear, and when including non-ideal effects. Both stable AE observations and their associated damping rates are found to decrease with |n|. Active antenna excitation is also found to be ineffective in H-mode as opposed to L-mode; this is likely due to the increased edge density gradient's effect on accessibility and ELM-related noise's impact on mode identification. A novel measurement is reported of a marginally stable, edge-localized Ellipticity-induced AE probed by the antennas during high-power auxiliary heating (ICRH and NBI) up to 25 MW. NOVA-K kinetic-MHD simulations show good agreement with experimental measurements of f0, \gamma, and n, indicating the dominance of continuum and electron Landau damping in this case. Similar experimental and computational studies are planned for the recent hydrogen and ongoing tritium campaigns, in preparation for the upcoming DT campaign.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158655</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Measuring stopping power in warm dense matter plasmas at OMEGA</title>
<link>https://hdl.handle.net/1721.1/158654</link>
<description>Measuring stopping power in warm dense matter plasmas at OMEGA
Lahmann, Brandon; Saunders, A.M.; Döppner, T.; Frenje, Johan A.; Glenzer, S.H.; Gatu Johnson, Maria; Sutcliffe, G.; Zylstra, A.B.; Petrasso, Richard D.
A platform has been developed for accurately measuring the stopping power of high-energy protons through warm dense matter (WDM) plasmas characterized by x-ray Thomson scattering. In this work, stopping power measurements were successfully made through both WDM beryllium and boron plasmas. In the boron experiments, an increase in stopping was observed over the cold-target counterparts. This increase in stopping was shown to agree well with models that account for the partial ionization of the plasma.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158654</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Microwave diagnostics damage by parametric decay instabilities during electron cyclotron resonance heating in ASDEX Upgrade</title>
<link>https://hdl.handle.net/1721.1/158653</link>
<description>Microwave diagnostics damage by parametric decay instabilities during electron cyclotron resonance heating in ASDEX Upgrade
Hansen, Soren K.; Jacobsen, A.S.; Willensdorfer, M.; Nielsen, S.K.; Stober, J.; Hofler, K.; Maraschek, M.; Fischer, R.; Dunne, M.
We present observations of microwave diagnostics damage in three discharges employing third-harmonic X-mode electron cyclotron resonance heating (ECRH) at the ASDEX Upgrade tokamak. In all cases, the diagnostics damage is explainable in terms of a parametric decay instability (PDI), where an X-mode ECRH wave decays to two trapped upper hybrid (UH) waves near half the ECRH frequency, followed by secondary instabilities, which generate strong microwave signals near multiples of half the ECRH frequency that cause the damage. Trapping of the UH waves near half the ECRH frequency is necessary to reduce the ECRH power required for exciting the PDIs to a level attainable at ASDEX Upgrade, and may occur when the second-harmonic UH resonance of the ECRH waves is present in a region of non-monotonic electron density, e.g. near the O-point of a magnetohydrodynamic mode or the plasma center. The diagnostics damage in the three discharges may be attributed to PDIs occurring near the O-point of a rotating mode, near the plasma center, and near the O-point of a locked mode, respectively. In the rotating mode case, the strong signals are shown to be quasi-periodic, with spikes occurring when the O-point of the mode passes through an ECRH beam, as expected. In the locked mode case, Thomson scattering profiles demonstrate the possibility of the primary PDI occurring based on experimental data for the first time under fusion-relevant conditions. Applying the framework used for ASDEX Upgrade to the X-mode ECRH scenarios planned for the early operation phase of ITER, the PDIs are found to be likely in connection with 170 GHz ECRH of half-field scenarios and 104 GHz (or 110 GHz) ECRH of one-third-field scenarios. Finally, several strategies for mitigating diagnostics damage are proposed.
Submitted for publication in Plasma Physics and Controlled Fusion
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158653</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>The crucial role of diagnostics in achieving ignition on the National Ignition Facility (NIF)</title>
<link>https://hdl.handle.net/1721.1/158652</link>
<description>The crucial role of diagnostics in achieving ignition on the National Ignition Facility (NIF)
Kilkenny, J.D.; Batha, S.H.; Pak, A.; Landen, O.L.; Bradley, D.K.; Moore, A.S.; Gatu Johnson, Maria; Meezan, N.B.; Mackinnon, A.J.; Haan, S.W.; Regan, S.P.; Hsing, W.W.; Smalyul, V.A.
Well over 100 diagnostics can operate on the National Ignition Facility (NIF) as a result of several decades of development on NIF, and before that on Nova, OMEGA, and earlier LLNL lasers. A subset of these have guided the approach to achieving ignition on the NIF in 2022 [H. Abu-Shawareb et al. (Indirect Drive ICF Collaboration), Phys. Rev. Lett. 129(7), 075001 (2022)]. Achieving ignition on NIF has required many types of experiments with this core set of diagnostics, some constraining known unknowns and some revealing surprises—arguably unknown unknowns. Early design work realized that the extreme precision required for ignition on NIF would require fine-tuning by experiment, that is, measuring and adjusting known unknowns. Many examples are given where the use of the core set of ignition diagnostics in experimental arrangements called platforms demonstrated control of the key theoretical parameters defined as shape, adiabat, velocity, and mix. The direction of the adjustments to input conditions is found either by trend analysis or, in many cases, by observing from the diagnostic data the direction to make an adjustment. In addition, diagnostics have revealed some unexpected or neglected known issues, which degrade performance, or unexpected issues, unknown unknowns. Some of these factors had been previously considered, but underestimated or difficult to calculate at the time. The overall methodology can be described as a variant of Popper’s falsifiability philosophy [K. Popper, The Logic of Scientific Discovery (Hutchinson, 1974)]. This paper summarizes the role of ignition diagnostics in terms of falsification or validation of theory or experimental setup as well as uncovering unexpected issues. The journey to ignition started in the seventies with a 1-mm wavelength laser producing disastrous results. Diagnostics have guided us to the recent multi-decadal goal of demonstrating ignition and burn in the laboratory.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158652</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Isotope effects on energy transport in the core of ASDEX-Upgrade tokamak plasmas: Turbulence measurements and model validation</title>
<link>https://hdl.handle.net/1721.1/158651</link>
<description>Isotope effects on energy transport in the core of ASDEX-Upgrade tokamak plasmas: Turbulence measurements and model validation
Molina Cabrera, Pedro A.; Rodriguez Fernandez, Pablo; Görler, T.; Bergmann, M.; Höfler, K.; Denk, S.S.; Bielajew, R.; Conway, G.D.; Yoo, C.; White, Anne E.; ASDEX Upgrade Team
Design and operation of future tokamak fusion reactors using a deuterium-tritium 50:50 mix requires a solid understanding of how energy confinement properties change with ion mass. This study looks at how turbulence and energy transport change in L-mode plasmas in the ASDEX Upgrade tokamak when changing ion species between hydrogen and deuterium. For this purpose, both experimental turbulence measurements and modeling are employed. Local measurements of ion-scale (with wavevector of fluctuations perpendicular to the B-field k⊥ &lt; 2 cm−1, k⊥ρs &lt; 0.2, where ρs is the ion sound Larmor radius using the deuterium ion mass) electron temperature fluctuations have been performed in the outer core (normalized toroidal flux ρTor = 0.65 − 0.8) using a multi-channel correlation electron cyclotron emission diagnostic (CECE). Lower root-mean-square perpendicular fluctuation amplitudes and radial correlation lengths have been measured in hydrogen versus deuterium. Measurements of the cross-phase angle between a normal-incidence reflectometer and an ECE signal were made to infer the cross-phase angle between density and temperature fluctuations. The magnitude of the cross-phase angle was found to be larger (more out-of-phase) in hydrogen than in deuterium. TRANSP power balance simulations show a larger ion heat flux in hydrogen, where the electron-ion heat exchange term is found to play an important role. These experimental observations were used as the basis of a validation study of both the quasilinear gyrofluid TGLF-SAT2 and nonlinear gyrokinetic GENE codes. Linear solvers indicate that, at long wavelengths (k⊥ρs &lt; 1), energy transport in the deuterium discharge is dominated by mixed ion-temperature-gradient (ITG) and trapped-electron-mode (TEM) turbulence, while in hydrogen transport is exclusively and more strongly driven by ITG turbulence.
The Ricci validation metric has been used to quantify the agreement between experiments and simulations, taking into account both experimental and simulation uncertainties as well as up to five different observables across different levels of the primacy hierarchy.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158651</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydroscaling indirect-drive implosions on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158650</link>
<description>Hydroscaling indirect-drive implosions on the National Ignition Facility
Baker, K.L.; Jones, O.; Weber, C.; Clark, D.; Patel, P.K.; Thomas, C.A.; Landen, O.L.; Nora, R.; Anderson, G.J.; Gaffney, J.; MacLaren, S.; Casey, D.T.; Döppner, T.; Dewald, E.; Tommasini, R.; Spears, B.K.; Salmonson, J.; Hohenberger, M.; Khan, S.; Zylstra, A.; Kritcher, A.; Amendt, P.; Smalyuk, V.; Lindl, J.; Young, C.; Ross, S.; Ho, D.; Hurricane, O.A.; Callahan, D.A.; Woods, T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Bachmann, B.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.; Hatarik, R.; Gatu Johnson, Maria; Meaney, K.; Millot, M.; Volegov, P.L.; Wilde, C.
A goal of the laser-based National Ignition Facility (NIF) is to increase the liberated fusion energy “yield” in inertial confinement fusion experiments well past the ignition threshold and the input laser energy. One method of increasing the yield, hydrodynamic scaling of current experiments, does not rely on improving compression or implosion velocity, but rather increases the scale of the implosion to increase hotspot areal density and confinement time. Indirect-drive (Hohlraum-driven) implosions carried out at two target sizes, 12.5% apart, have validated hydroscaling expectations. Moreover, extending comparisons to the best-performing implosions at five different capsule sizes shows that their performance also agrees well with hydroscaling expectations even though they are not direct hydroscales of one another. In the future, simulations indicate that by switching to a reduced-loss Hohlraum geometry we can drive 20% larger-scale implosions within the current power and energy limitations on the NIF. At the demonstrated compression and velocity of these smaller-scale implosions, these 1.2× hydroscaled implosions should put us well past the ignition threshold.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158650</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Enhanced Laser–Energy Coupling with Small-Spot Distributed Phase Plates (SG5-650) in OMEGA DT Cryogenic Target Implosions</title>
<link>https://hdl.handle.net/1721.1/158649</link>
<description>Enhanced Laser–Energy Coupling with Small-Spot Distributed Phase Plates (SG5-650) in OMEGA DT Cryogenic Target Implosions
Theobald, W.; Cao, D.; Shah, R.C.; Thomas, C.A.; Igumenshchev, I.V.; Bauer, K.A.; Betti, R.; Bonino, M.J.; Campbell, E.M.; Christopherson, A.R.; Churnetski, K.; Edgell, D.H.; Forrest, C.J.; Frenje, Johan A.; Gatu Johnson, Maria; Glebov, V.Yu.; Goncharov, V.N.; Gopalaswamy, V.; Harding, D.R.; Hu, S.X.; Ivancic, S.T.; Jacobs-Perkins, D.W.; Janezic, R.T.; Joshi, T.; Knauer, J.P.; Lees, A.; Luo, R.W.; Mannion, O.M.; Marshall, F.J.; Mohamed, Z.L.; Morse, S.F.B.; Patel, D.; Peebles, J.L.; Petrasso, Richard D.; Radha, P.B.; Rinderknecht, H.G.; Rosenberg, M.J.; Sampat, S.; Sangster, T.C.; Shmayda, W.T.; Shuldberg, C.M.; Shvydky, A.; Sorce, C.; Stoeckl, C.; Wittman, M.D.; Regan, S.P.
Cryogenic deuterium–tritium ice target implosions on OMEGA with new small-spot ("SG5-650") distributed phase plates (DPPs) achieved an (11 +/- 4)% increase in energy coupling compared to implosions with standard-spot DPPs by decreasing the ratio of the laser spot diameter to the target diameter from 0.93 to 0.75. The SG5-650 DPPs provide a focal spot size of 674 μm, defined as the diameter that encircles 95% of the measured beam energy, compared to 834 μm for the SG5-850. The hydrodynamic efficiency, defined as the ratio of the kinetic energy in the imploding shell to the laser energy, increased from 4.5% to 5.0% based on radiation-hydrodynamic calculations benchmarked to shell trajectory and bang-time measurements. The higher coupling came with a trade-off of increased hot-electron production as well as increased hydrodynamic instabilities seeded by a larger mode-10 amplitude from the beam-port geometry, both of which may have affected the fusion neutron production and areal density.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158649</guid>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the spectral modification of lower hybrid wave in the presence of drift-wave type density fluctuation in the scrape-off-layer of the EAST tokamak</title>
<link>https://hdl.handle.net/1721.1/158648</link>
<description>Modeling the spectral modification of lower hybrid wave in the presence of drift-wave type density fluctuation in the scrape-off-layer of the EAST tokamak
Wu, C.B.; Ding, B.J.; Li, M.H.; Baek, Seung Gyou; Wallace, Greg M.; Li, Y.C.; Yan, G.H.
The spectrum change of lower hybrid (LH) waves caused by low-frequency density fluctuation in the scrape-off layer (SOL) is studied by applying the wave scattering model developed by Bonoli and Ott [Bonoli and Ott, Physics of Fluids 25, 359 (1982)] via a Monte Carlo method. Due to the influence of density fluctuation, the perpendicular component of the LH wave-vector can be rotated in the 2D perpendicular space, which will further change the ray trajectory of the LH wave. A ray-tracing model specific to this purpose is developed to evaluate the probability distribution of both the poloidal refractive index (N_theta) and the parallel refractive index (N||) of the LH wave at the last closed flux surface (LCFS), assuming wave propagation through the turbulent SOL plasma from the launcher at the far SOL to the LCFS. In the presence of the drift-wave-type density fluctuations, the Monte Carlo approach is adopted to characterize the scattering probability and the scattering angle of the perpendicular LH wave-vector. The scattering probability and the rotation angle are determined by the combined effect of the geometric optics approximation term and the E×B drift term in the LH tensor elements. The probability distributions of N|| and N_theta at the LCFS are studied using the EAST parameters as a function of wave frequency, the initial N||, and the polar injection position, which may influence the LHCD efficiency.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158648</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Turbulent field fluctuations in gyrokinetic and fluid plasmas</title>
<link>https://hdl.handle.net/1721.1/158647</link>
<description>Turbulent field fluctuations in gyrokinetic and fluid plasmas
Mathews, Abhilash; Mandell, N.; Francisquez, M.; Hughes, Jerry W.; Hakim, A.
A key uncertainty in the design and development of magnetic confinement fusion energy reactors is predicting edge plasma turbulence. An essential step in overcoming this uncertainty is validating the accuracy of reduced turbulent transport models. Drift-reduced Braginskii two-fluid theory is one such set of reduced equations that has for decades been used to simulate boundary plasmas in experiment, but significant questions exist regarding its predictive ability. To this end, using a novel physics-informed deep learning framework, we demonstrate the first ever direct quantitative comparisons of turbulent field fluctuations between electrostatic two-fluid theory and electromagnetic gyrokinetic modelling, with good overall agreement found in magnetized helical plasmas at low normalized pressure. This framework is readily adaptable to experimental and astrophysical environments, and presents a new technique for the numerical validation and discovery of reduced global plasma turbulence models.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158647</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mildly relativistic collisionless shock formed by magnetic piston</title>
<link>https://hdl.handle.net/1721.1/158646</link>
<description>Mildly relativistic collisionless shock formed by magnetic piston
Moreno, Q.; Araudo, A.; Korneev, Ph.; Li, Chi-Kang; Tikhonchuk, V.T.; Ribeyre, X.; d'Humieres, E.; Weber, S.
By using particle-in-cell simulations, we study the collision of two plasma flows, one of which carries a magnetic field. Ion interpenetration results in the formation of a magnetic piston with the magnetic field compression proportional to the density ratio of the colliding plasmas. The counterpropagating ions in the nonmagnetized plasma upstream from the piston excite the ion Weibel instability, which turns into magnetic turbulence. The thickness of the piston increases with time, and it turns into a reverse magnetized shock after less than one ion gyroperiod. In front of the piston, the time needed for the magnetic turbulence to reduce the anisotropy of the nonmagnetized ions is much larger than the ion gyroperiod in the piston. Consequently, particles are reflected by the piston, which acts as a wall initiating a transient phase. After several ion periods, the formation of this electromagnetic forward shock is then accelerated by the piston, and at large timescales the dissipation of energy is eventually mediated only by the Weibel turbulence. We report here a new configuration of shocks, in which a reverse magnetized shock and a forward electromagnetic shock coexist, separated by a tangential discontinuity. Particle acceleration and heating in the two shock structures and the relevance of this scenario of collisionless shock formation to laboratory experiments and astrophysical conditions are discussed.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158646</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Principal factors in performance of indirect-drive laser fusion experiments</title>
<link>https://hdl.handle.net/1721.1/158645</link>
<description>Principal factors in performance of indirect-drive laser fusion experiments
Thomas, C.A.; Campbell, E.M.; Baker, K.L.; Casey, D.T.; Hohenberger, M.; Kritcher, A.L.; Spears, B.K.; Khan, S.F.; Nora, R.; Woods, D.T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Ho, D.D.; Clark, D.; Bachmann, B.; Benedetti, L.R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.N.; Grim, G.; Hatarik, R.; Izumi, N.; Kyrala, G.; Ma, T.; Millot, M.; Nagel, S.R.; Patel, P.K.; Yeamans, C.; Nikroo, A.; Tabak, M.; Gatu Johnson, Maria; Volegov, P.L.; Finnegan, S.M.
Progress in inertial confinement fusion depends on the accurate interpretation of experiments that are complex and difficult to explain with simulations. Results could depend on small changes in the laser pulse or target or physics that are not fully understood or characterized. In this paper we discuss an x-ray-driven platform [K. Baker et al., Phys. Rev. Lett. 121, 135001 (2018)] with fewer sources of degradation, and find the fusion yield can be described as a physically motivated function of laser energy, target scale, and implosion symmetry. This platform and analysis could enable a more experimental approach to the study and optimization of implosion physics.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158645</guid>
<dc:date>2020-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Disruption halo current rotation scaling on Alcator C-Mod and HBT-EP</title>
<link>https://hdl.handle.net/1721.1/158644</link>
<description>Disruption halo current rotation scaling on Alcator C-Mod and HBT-EP
Saperstein, Alex R.; Tinguely, R. Alex; Granetz, R.S.; Levesque, J.P.; Maue, M.E.; Navratil, G.A.
Asymmetric halo currents (HCs) can exert large net forces on the vacuum vessel and other components during disruptions on tokamaks. The displacements caused by these forces can then be amplified if these asymmetric forces rotate at frequencies resonant with the vessel. This paper reports on the investigation of a recently proposed scaling law for the disruption HC rotation frequency that combines measurements on Alcator C-Mod with those on HBT-EP. We find that a new non-circular version of the scaling law ( &lt;f_rot&gt;*m/&lt;m&gt; \propto 1 B*T*(S/pi) ) takes into consideration the dependence of f_rot on the poloidal structure of the MHD instability (m) driving the asymmetry and describes the disruption-averaged rotation frequency on C-Mod. Disruption rotation is also found to be insensitive to the vertical position and impurity content of the plasma at the onset of the disruption. However, a stagnation in the time evolution of f_rot is occasionally observed. Observations are consistent with the dominance of poloidal rotation during the disruption, which is motivated by the poloidal drift nature of the scaling law.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158644</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reaching a Burning Plasma and Ignition Using Smaller Capsules/Hohlraums, Higher Radiation Temperatures and Thicker Ablator/Ice on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158643</link>
<description>Reaching a Burning Plasma and Ignition Using Smaller Capsules/Hohlraums, Higher Radiation Temperatures and Thicker Ablator/Ice on the National Ignition Facility
Baker, K.L.; Thomas, C.A.; Landen, O.L.; Haan, S.; Lindl, J.D.; Casey, D.T.; Young, C.; Nora, R.; Hurricane, O.A.; Callahan, D.A.; Jones, O.; Berzak Hopkins, L.; Khan, S.; Spears, B.K.; Le Pape, S.; Meezan, N.B.; Ho, D.D.; Döppner, T.; Hinkel, D.; Dewald, E.L.; Tommasini, R.; Hohenberger, M.; Weber, C.; Clark, D.; Woods, D.T.; Milovich, J.L.; Strozzi, D.; Kritcher, A.; Robey, H.F.; Ross, J.S.; Smalyuk, V.A.; Amendt, P.A.; Bachmann, B.; Benedetti, L.R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.; Goyon, C.; Hatarik, R.; Izumi, N.; Gatu Johnson, Maria; Kyrala, G.; Ma, T.; Meaney, K.; Millot, M.; Nagel, S.R.; Patel, P.K.; Turnbull, D.; Volegov, P.L.; Yeamans, C.; Wilde, C.
In indirect-drive implosions, the final core hot spot energy and pressure, and hence the neutron yield attainable in 1D, increase with increasing laser peak power and hence radiation drive temperature at fixed capsule and hohlraum size. We present simple analytic scalings validated by 1D simulations that quantify the improvement in performance and use this to explain existing data and simulation trends. Extrapolating to the 500 TW NIF peak power limit in a low gas-fill 5.4 mm diameter hohlraum, based on existing high-adiabat implosion data at 400 TW, 1.3 MJ, and 10^16 yield, we find that a 2–3×10^17 yield (0.5–0.7 MJ) is plausible using only 1.8 MJ of laser energy. Based on existing data varying DT fuel thickness and dopant areal density, further improvements should be possible by increasing DT fuel areal density, and hence confinement time and yield amplification.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158643</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The role of toroidal rotation in the very high energy confinement quality observed in super H-mode experiments on DIII-D</title>
<link>https://hdl.handle.net/1721.1/158642</link>
<description>The role of toroidal rotation in the very high energy confinement quality observed in super H-mode experiments on DIII-D
Ding, S.; Garofalo, A.M.; Jian, X.; Holland, C.; Grierson, B.A.; Solomon, W.M.; Marinoni, Alessandro; Knolker, M.; McClenaghan, J.
In this paper, we report the key role that toroidal rotation and the related ExB shear physics play in the very high energy confinement quality (H98y2&gt;1.5) of super H-mode experiments on DIII-D. Experiments show that the energy confinement quality decreases when toroidal rotation decreases due to the decreased externally controlled torque per particle. Meanwhile, the total pedestal pressure in the experiments remains very high during the rotation and confinement quality change. TGYRO transport modeling suggests that the contribution from rotation to the ExB shear is responsible for the confinement quality in excess of standard H-mode (H98y2~1). CGYRO gyrokinetic simulations reveal the governing physics in the core plasma of super H-modes: a significant up-shift of the nonlinear ITG critical gradient is observed when applying ExB shear physics in the modeling based on experimental data. The effects of other physical parameters and the contribution from pedestal height, which may play minor roles in this study, are also discussed.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158642</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experimental observations of detached bow shock formation in the interaction of a laser-produced plasma with a magnetized obstacle</title>
<link>https://hdl.handle.net/1721.1/158641</link>
<description>Experimental observations of detached bow shock formation in the interaction of a laser-produced plasma with a magnetized obstacle
Levesque, Joseph M.; Liao, Andy S.; Hartigan, Patrick; Young, Rachel P.; Trantham, Matthews; Gray, Williams; Klein, Sallee; Manuel, Mario; Fiksel, Gennady; Katz, Joseph; Li, Chi-Kang; Birkel, Andrew; Tzeferacos, Petros; Kuranz, Carolyn C.
The magnetic field produced by planets with active dynamos, like the Earth, can exert sufficient pressure to oppose supersonic stellar wind plasmas, leading to the formation of a standing bow shock upstream of the magnetopause, or pressure-balance surface. Scaled laboratory experiments studying the interaction of an inflowing solar wind analog with a strong, external magnetic field are a promising new way to study magnetospheric physics and to complement existing models, although reaching regimes favorable for magnetized shock formation is experimentally challenging. This paper presents experimental evidence of the formation of a magnetized bow shock in the interaction of a supersonic, super-Alfvenic plasma with a strongly magnetized obstacle at the OMEGA laser facility. The solar wind analog is generated by the collision and subsequent expansion of two counterpropagating, laser-driven plasma plumes. The magnetized obstacle is a thin wire, driven with strong electrical currents. Hydrodynamic simulations using the FLASH code predict that the colliding plasma source meets the criteria for bow shock formation. Spatially resolved, optical Thomson scattering measures the electron number density, and optical emission lines provide a measurement of the plasma temperature, from which we infer the presence of a fast magnetosonic shock far upstream of the obstacle. Proton images provide a measure of large-scale features in the magnetic field topology, and reconstructed path-integrated magnetic field maps from these images suggest the formation of a bow shock upstream of the wire as well as a transient magnetopause. We compare features in the reconstructed fields to two-dimensional MHD simulations of the system.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158641</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of cross-beam energy transfer on target-offset asymmetry in direct-drive inertial confinement fusion implosions</title>
<link>https://hdl.handle.net/1721.1/158640</link>
<description>Effect of cross-beam energy transfer on target-offset asymmetry in direct-drive inertial confinement fusion implosions
Anderson, K.S.; Forrest, C.J.; Mannion, O.M.; Marshall, F.J.; Shah, R.C.; Michel, D.T.; Marozas, J.A.; Radha, P.B.; Edgell, D.H.; Epstein, R.; Goncharov, V.N.; Knauer, J.P.; Gatu Johnson, Maria; Laffite, S.
The unintentional mispositioning of inertial confinement fusion (ICF) capsules from the center of laser beam convergence has long been shown in simulations to generate large ℓ = 1 asymmetry and significantly degrade implosion symmetry and fusion yields. Experimental yields on the OMEGA Laser System, however, have shown much less sensitivity to this initial target offset. This paper presents simulations of offset ICF implosions improved by including a physics model of cross-beam energy transfer (CBET), a mechanism of laser energy scattering from one beam to another. Room-temperature OMEGA implosion experiments with prescribed target offsets are simulated with and without CBET, illustrating that CBET mitigates the ℓ = 1 implosion asymmetry from target offset. Comparison of simulations to multiple complementary experimental observables indicates the addition of CBET physics in offset simulations is necessary to match experimental results.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158640</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>A second order yield-temperature relation for accurate inference of burn-averaged quantities in multi-species plasmas</title>
<link>https://hdl.handle.net/1721.1/158639</link>
<description>A second order yield-temperature relation for accurate inference of burn-averaged quantities in multi-species plasmas
Kabadi, Neel V.; Adrian, Patrick J.; Bose, A.; Casey, D.T.; Frenje, Johan A.; Gatu Johnson, Maria; Lahmann, Brandon; Mannion, O.M.; Petrasso, Richard D.; Rinderknecht, H.G.; Séguin, Frederick H.; Sio, H.W.; Sutcliffe, G.D.; Zylstra, A.B.
Measured yields and ion temperatures inferred from the fusion product energy spectra can be used as metrics for the performance of an ICF implosion. These can be used to infer species separation, thermal decoupling, flows, or other effects that can cause the inferred ion temperatures to deviate from the true underlying thermal temperature and the yield ratio to deviate from the expected value. Direct inference of the impact of these effects on observed temperatures and yields can be difficult due to the underlying dependence on the shape and time evolution of the temperature and density profiles of the fusing plasma. Because of differences in the temperature dependence of the reactivities, different fusion products are emitted from different regions and times within the implosion. To properly account for this, a second-order analytic expression relating the apparent temperatures and yield ratios is developed. This expression can be coupled to models of yield- and/or temperature-altering effects to infer their burn-averaged impact on an implosion. The second-order expression shows significant improvement over lower-order expressions in synthetic data studies. Demonstrations of its application to synthetic data coupled with models of ion thermal decoupling and radial flows are presented. In the case of thermal decoupling, both first- and second-order expressions show reasonable levels of accuracy. To consistently infer the amplitude of radial flow with &lt;10% error, the second-order equation is required.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158639</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Laser-direct-drive fusion target design with a high-Z gradient-density pusher shell</title>
<link>https://hdl.handle.net/1721.1/158638</link>
<description>Laser-direct-drive fusion target design with a high-Z gradient-density pusher shell
Hu, S.X.; Ceurvorst, L.; Peebles, J.L.; Mao, A.; Li, P.; Lu, Y.; Shvydky, A.; Goncharov, V.N.; Epstein, R.; Nichols, K.; Goshadze, R.M.N.; Ghosh, M.; Hinz, J.; Karasiev, V.V.; Zhang, S.; Shaffer, N.R.; Mihaylov, D.I.; Cappelletti, J.; Harding, D.R.; Li, Chi-Kang; Campbell, E.M.; Shah, R.C.; Collins, T.J.B.; Regan, S.P.; Deeney, C.
Laser-direct-drive fusion target designs with solid deuterium-tritium (DT) fuel, a high-Z gradient-density pusher shell (GDPS), and a Au-coated foam layer have been investigated through both 1D and 2D radiation-hydrodynamic simulations. Compared with conventional low-Z ablators and DT-push-on-DT targets, these GDPS targets possess certain advantages of being instability-resistant implosions that can be high adiabat (α  8) with low hot-spot and pusher-shell convergence (CR_hs ≈ 22 and CR_PS ≈ 17), and have a low implosion velocity (v_imp &lt; 3 × 10^7 cm/s). Using symmetric drive with laser energies of 1.9 to 2.5 MJ, 1D LILAC simulations of these GDPS implosions can result in neutron yields corresponding to 50 MJ of energy, even with reduced laser absorption due to the cross-beam energy transfer (CBET) effect. Two-dimensional DRACO simulations show that these GDPS targets can still ignite and deliver neutron yields from 4 to ∼10 MJ even if CBET is present, while traditional DT-push-on-DT targets normally fail due to the CBET-induced reduction of ablation pressure. If CBET is mitigated, these GDPS targets are expected to produce neutron yields of &gt;20 MJ at a drive laser energy of ∼2 MJ. The key factors behind the robust ignition and moderate energy gain of such GDPS implosions are as follows: (1) the high-Z pusher shell, with its high initial density, can be placed on a very high adiabat while the DT fuel is maintained at a relatively low-entropy state, so such implosions can still provide enough compression (ρR &gt; 1 g/cm^2) for sufficient confinement; (2) the high-Z layer significantly reduces heat-conduction loss from the hot spot since thermal conductivity scales as ∼1/Z; and (3) possible radiation trapping may offer an additional advantage for reducing energy loss from such high-Z targets.
Submitted for publication in Physical Review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics
</description>
<pubDate>Sat, 01 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158638</guid>
<dc:date>2023-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Direct Measurements of DT Fuel Preheat from Hot Electrons in Direct-Drive Inertial Confinement Fusion</title>
<link>https://hdl.handle.net/1721.1/158637</link>
<description>Direct Measurements of DT Fuel Preheat from Hot Electrons in Direct-Drive Inertial Confinement Fusion
Christopherson, A.R.; Betti, R.; Forrest, C.J.; Howard, J.; Theobald, W.; Delettrez, J.A.; Rosenberg, M.J.; Solodov, A.A.; Stoeckl, C.; Patel, D.; Gopalaswamy, V.; Cao, D.; Peebles, J.L.; Edgell, D.H.; Seka, W.; Epstein, R.; Wei, M.S.; Gatu Johnson, Maria; Simpson, R.; Regan, S.P.; Campbell, E.M.
Hot electrons generated by laser-plasma instabilities degrade the performance of laser-fusion implosions by preheating the DT fuel and reducing core compression. The hot-electron energy deposition in the DT fuel has been directly measured for the first time by comparing the hard x-ray signals between DT-layered and mass-equivalent ablator-only implosions. The electron energy deposition profile in the fuel is inferred through dedicated experiments using Cu-doped payloads of varying thickness. The measured preheat energy accurately explains the areal-density degradation observed in many OMEGA implosions. This technique can be used to assess the viability of the direct-drive approach to laser fusion with respect to the scaling of hot-electron preheat with laser energy.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158637</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Weakly Magnetized, Hall Dominated Plasma Couette Flow</title>
<link>https://hdl.handle.net/1721.1/158636</link>
<description>Weakly Magnetized, Hall Dominated Plasma Couette Flow
Flanagan, K.; Milhone, J.; Egedal, J.; Endrizzi, D.; Olson, J.; Peterson, Ethan E.; Sassella, R.; Forest, C.B.
A novel plasma equilibrium in the high-β, Hall regime that produces centrally-peaked, high Mach number Couette flow is described. Flow is driven using a weak, uniform magnetic field and large, cross field currents. Large magnetic field amplification (factor 20) due to the Hall effect is observed when electrons are flowing radially inward, and near perfect field expulsion is observed when the flow is reversed. A dynamic equilibrium is reached between the amplified (removed) field and extended density gradients.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158636</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of an inertial fusion experiment exceeding the Lawson criterion for ignition</title>
<link>https://hdl.handle.net/1721.1/158635</link>
<description>Design of an inertial fusion experiment exceeding the Lawson criterion for ignition
Kritcher, A.L.; Zylstra, A.B.; Callahan, D.A.; Hurricane, O.A.; Weber, C.R.; Clark, D.S.; Young, C.V; Ralph, J.E.; Casey, D.T.; Pak, A.; Landen, O.L.; Bachmann, B.; Baker, K.L.; Berzak Hopkins, L.; Bhandarkar, S.D.; Biener, J.; Bionta, R.M.; Birge, N.W.; Braun, T.; Briggs, T.M.; Celliers, P.M.; Chen, H.; Choate, C.; Divol, L.; Döppner, T.; Fittinghoff, D.; Edwards, M.J.; Gatu Johnson, Maria; Gharibyan, N.; Haan, S.; Hahn, K.D.; Hartouni, E.; Hinkel, D.E.; Ho, D.D.; Hohenberger, M.; Holder, J.P.; Huang, H.; Izumi, N.; Jeet, J.; Jones, O.; Kerr, S.M.; Khan, S.F.; Geppert Kleinrath, H.; Geppert Kleinrath, V.; Kong, C.; Lamb, K.M.; Le Pape, S.; Lemos, N.C.; Lindl, J.D.; MacGowan, B.J.; Mackinnon, A.J.; MacPhee, A.G.; Marley, E.V.; Meaney, K.; Millot, M.; Moore, A.S.; Newman, K.; Di Nicola, J.-M. G.; Nikroo, A.; Nora, R.; Patel, P.K.; Rice, N.G.; Rubery, M.S.; Sater, J.; Schlossberg, D.J.; Sepke, S.M.; Sequoia, K.; Shin, S.J.; Stadermann, M.; Stoupin, S.; Strozzi, D.J.; Thomas, C.A.; Tommasini, R.; Trosseille, C.; Tubman, E.R.; Volegov, P.L.; Wild, C.; Woods, D.T.; Yang, S.T.
We present the design of the first igniting fusion plasma in the laboratory by Lawson’s criterion, Hybrid-E experiment N210808 (August 8, 2021), which produced 1.37 MJ of fusion energy [Phys. Rev. Lett. 129, 075001 (2022)]. This design uses the indirect drive inertial confinement fusion approach to heat and compress a central “hot spot” of deuterium-tritium (DT) fuel using a surrounding dense DT fuel piston. Ignition occurs when the heating from absorption of α particles created in the fusion process overcomes the loss mechanisms in the system for a duration of time. This letter describes key design changes which enabled a ∼3–6× increase in an ignition figure of merit (generalized Lawson criterion) [Phys. Plasmas 28, 022704 (2021); Phys. Plasmas 25, 122704 (2018)] and an eightfold increase in fusion energy output compared to predecessor experiments. We present simulations of the hot-spot conditions for experiment N210808 that show fundamentally different behavior compared to predecessor experiments, and simulated metrics that are consistent with N210808 reaching “ignition” for the first time in the laboratory.
Submitted for publication in Physical Review E
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158635</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hotspot Parameter Scaling with Velocity and Yield for High Adiabat Layered Implosions on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158634</link>
<description>Hotspot Parameter Scaling with Velocity and Yield for High Adiabat Layered Implosions on the National Ignition Facility
Baker, K.L.; Thomas, C.A.; Casey, D.T.; Hohenberger, M.; Khan, S.; Spears, B.K.; Landen, O.L.; Nora, R.; Woods, T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Weber, C.; Clark, D.; Hurricane, O.A.; Callahan, D.A.; Kritcher, A.; Bachmann, B.; Benedetti, R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.; Goyon, C.; Hatarik, R.; Izumi, N.; Gatu Johnson, Maria; Kyrala, G.; Ma, T.; Meaney, K.; Millot, M.; Nagel, S.R.; Patel, P.K.; Turnbull, D.; Volegov, P.L.; Yeamans, C.; Wilde, C.
This paper presents a study of hotspot parameters in indirect-drive inertially confined fusion implosions as they proceed through the self-heating regime. With increasing nuclear yield, these implosions would reach the burning-plasma regime, hotspot ignition, and finally propagating burn and ignition. The implosions span a wide range of alpha heating, from a yield amplification of 1.7 to 2.5. We show that the hotspot parameters depend explicitly on both yield and velocity, and that by fitting to both of these quantities the hotspot parameters can be fit with a single power law in velocity. The yield scaling also enables extrapolation of the hotspot parameters to higher yields. This is important because various degradation mechanisms can occur on a given implosion at fixed implosion velocity and can have a large impact on both the yield and the hotspot parameters. The yield scaling also enables the experimental dependence of the hotspot parameters on yield amplification to be determined. The implosions reported here have produced the highest yield (1.73x10^16 +/- 2.6%), yield amplification, pressure, and implosion velocity yet reported on the National Ignition Facility.
Submitted for publication in Physical Review E
</description>
<pubDate>Mon, 01 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158634</guid>
<dc:date>2020-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Argon Pumpout by ICRF Waves in C-Mod L- and I-mode Plasmas</title>
<link>https://hdl.handle.net/1721.1/158633</link>
<description>Argon Pumpout by ICRF Waves in C-Mod L- and I-mode Plasmas
Rice, John E.; Lin, Y.; Perks, C.J.; Reinke, M.L.; Marmar, E.S.; Cao, N.; Gao, C.; Sciortino, Francesco; Wukitch, S.J.; Wright, John C.
Pumpout of argon ions by ICRF waves has been observed in C-Mod deuterium L- and I-mode plasmas that had a substantial hydrogen fraction. The effect is manifested as a reduction of core argon x-ray brightness by up to 90% on time scales of tens of milliseconds following injection of ICRF power. For Ar^16+, the pumpout is strongest for hydrogen minority concentrations between 0.25 and 0.4, where the ICRF waves are not expected to result in minority heating. Modeling with the TORIC code suggests that the pumpout process occurs when the H/D mode-conversion layer overlaps with the second-harmonic impurity resonance layer. The magnitude of the argon pumpout is independent of ICRF power above an apparent threshold of ~500 kW, independent of electron density, and appears to decrease as the plasma current is increased. Potential application as a heavy-impurity control tool in reactors is discussed.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158633</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Creation and sustainment of wide pedestal quiescent H-mode with zero net neutral beam torque</title>
<link>https://hdl.handle.net/1721.1/158632</link>
<description>Creation and sustainment of wide pedestal quiescent H-mode with zero net neutral beam torque
Burrell, K.H.; Chen, Xi; Chrystal, C.; Ernst, Darin R.; Grierson, B.A.; Haskey, S.R.; Osborne, T.H.; Paz-Soldan, C.; Wilks, Theresa M.
Recent experiments on DIII-D have shown it is possible to create and sustain wide pedestal quiescent H-mode (QH-mode) plasmas with zero net torque from neutral beam injection (NBI) for the full discharge duration. Wide pedestal QH-mode has many of the features of the previously investigated QH-mode while having the advantage of increased edge pedestal pressure and excellent energy confinement time. Both QH-mode variants operate without edge localized modes. Accordingly, these new discharges demonstrate that significant input torque is not essential to the exploitation of wide pedestal QH-mode in future devices that are expected to have small or non-existent NBI torque. Developing operating conditions that allowed net zero torque access to wide pedestal QH-mode required implementing several techniques to avoid locked modes including minimizing intrinsic error fields, avoiding large sawteeth, and driving toroidal rotation via neoclassical toroidal viscosity.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158632</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence for suprathermal ion distribution in burning plasmas</title>
<link>https://hdl.handle.net/1721.1/158631</link>
<description>Evidence for suprathermal ion distribution in burning plasmas
Hartouni, E.P.; Moore, A.S.; Crilly, A.J.; Appelbe, B.D.; Amendt, P.A.; Baker, K.L.; Casey, D.T.; Clark, D.S.; Döppner, T.; Eckart, M.J.; Field, J.E.; Gatu Johnson, Maria; Grim, G.P.; Hatarik, R.; Jeet, J.; Kerr, S.M.; Kilkenny, J.; Kritcher, A.L.; Meaney, K.D.; Milovich, J.L.; Munro, D.H.; Nora, R.C.; Pak, A.E.; Ralph, J.E.; Robey, H.F.; Ross, J.S.; Schlossberg, D.J.; Sepke, S.M.; Spears, B.K.; Young, C.V.; Zylstra, A.B.
At the National Ignition Facility, inertial confinement fusion experiments aim to burn and ignite a hydrogen plasma to generate a net source of energy through the fusion of deuterium and tritium ions. The energy deposited by α-particles released from the deuterium–tritium fusion reaction plays the central role in heating the fuel to achieve a sustained thermonuclear burn. In the hydrodynamic picture, α-heating increases the temperature of the plasma, leading to increased reactivity because the mean ion kinetic energy increases. Therefore, the ion temperature is related to the mean ion kinetic energy. Here we use the moments of the neutron spectrum to study the relationship between the ion temperature (measured by the variance in the neutron kinetic energy spectrum) and the ion mean kinetic energy (measured by the shift in the mean neutron energy). We observe a departure from the relationship expected for plasmas where the ion relative kinetic energy distribution is Maxwell–Boltzmann, when the plasma begins to burn. Understanding the cause of this departure from hydrodynamic behaviour could be important for achieving robust and reproducible ignition.
Submitted for publication in Nature Physics
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158631</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Burning plasma achieved in inertial fusion</title>
<link>https://hdl.handle.net/1721.1/158630</link>
<description>Burning plasma achieved in inertial fusion
Zylstra, A.B.; Hurricane, O.A.; Callahan, D.A.; Kritcher, A.L.; Ralph, J.; Robey, H.F.; Ross, J.S.; Young, C.; Baker, K.; Casey, D.; Döppner, T.; Divol, L.; Hohenberger; Le Pape, S.; Pak, A.; Patel, P.; Tommasini, R.; Ali, S.; Bachmann, B.; Benedetti, R.; Berger, D.; Betti, R.; Bhandarker, S.; Bionta, R.; Birge, N.; Bond, E.; Bradley, D.; Braun, T.; Briggs, T.; Bruhn, M.; Gatu Johnson, Maria; Jones, O.; Kerr, S.; Khan, S.; Kilkenny, J.; Kim, Y.; Geppert Kleinrath, H.; Geppert Kleinrath, V.; Kline, J.; Kroll, J.; Kong, C.; Landen, O.L.; Larson, D.; Lemos, N.C.; Lindl, J.; Mackinnon, A.; MacGowan, B.; Maclaren, S.; MacPhee, A.; Mariscal, D.; Marley, E.; Masse, L.; Meaney, K.; Meezan, N.; Michel, P.; Millot, M.; Milovich, J.; Moody, J.; Moore, A.; Newman, K.; Nikroo, A.; Nora, R.; Pelz, L.; Peterson, L.; Rice, N.; Rinderknecht, H.; Rosen, M.; Rubery, M.; Salmonson, J.; Sater, J.; Schlossberg, D.; Schneider, M.; Sequoia, K.; Shin, S.; Smalyuk, V.; Spears, B.; Springer, P.; Stadermann, M.; Stoupin, S.; Strozzi, D.; Thomas C.; Tubman, E.; Town, R.; Weber, C.; Widmann, K.; Wild, C.; Wilde, C.; Woods, T.; Woodworth, B.; Van Wonterghem, B.; Volegov, P.; Yang, S.
Achieving a burning plasma is a critical step toward self-sustaining fusion energy. A burning plasma is a fusion plasma in which the alpha particles created by the deuterium-tritium (DT) fusion reactions are the primary source of heating, which is necessary to sustain and propagate the fusion reaction and enable high energy gain. After decades of fusion research, a burning plasma state has finally been achieved. Herein, we report on the first burning-plasma experiments; this state was achieved on the US National Ignition Facility using a strategy to increase the capsule spatial scale via two different implosion concepts. These experiments show energies from self-heating in excess of the mechanical work injected into the implosions, satisfying several burning-plasma metrics; the last experiment additionally shows that the fusion self-heating is greater than the losses from radiation and heat conduction. These experiments triple the fusion yield performance and show significantly higher yield amplification from self-heating than prior results; remaining degradations can be reduced for even higher fusion performance.
Submitted for publication in Nature - International Weekly Journal of Science
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158630</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Intermediate energy proton irradiation: rapid, high-fidelity materials testing for fusion and fission energy systems</title>
<link>https://hdl.handle.net/1721.1/158629</link>
<description>Intermediate energy proton irradiation: rapid, high-fidelity materials testing for fusion and fission energy systems
Jepeal, Steven J.; Snead, Lance; Hartwig, Zachary S.
Fusion and advanced fission power plants require advanced nuclear materials that can function in new, extreme environments. Understanding the evolution of mechanical and functional properties during radiation damage is essential to the design and commercial deployment of these systems. The shortcomings of existing methods could be addressed by a new technique - intermediate energy proton irradiation (IEPI) - which uses beams of 10-30 MeV protons to rapidly and uniformly damage bulk material specimens before direct testing of engineering properties. IEPI is shown to achieve high fidelity to fusion and fission environments in both primary damage production and transmutation, often superior to nuclear reactor or typical (low-range) ion irradiation. Modeling demonstrates that high dose rates (0.1-1 DPA per day) can be achieved in bulk material specimens (100-300 microns) with low temperature gradients and low induced radioactivity. The capabilities of IEPI are demonstrated through a 12 MeV proton irradiation and tensile test of 250 micron thick tensile specimens of a nickel alloy (Inconel 718), reproducing neutron-induced data. These results demonstrate that IEPI enables high-throughput assessment of materials under reactor-relevant conditions, positioning IEPI to accelerate the pace of engineering-scale radiation damage testing and allow for quicker and more effective design of nuclear energy systems.
Submitted for publication in Materials and Design
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158629</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Poloidal impurity asymmetries, flow and transport in conventional neoclassical pedestals in the plateau and banana regimes</title>
<link>https://hdl.handle.net/1721.1/158628</link>
<description>Poloidal impurity asymmetries, flow and transport in conventional neoclassical pedestals in the plateau and banana regimes
Bielajew, Rachel; Catto, Peter J.
Charge exchange recombination spectroscopy (CXRS) allows the poloidal variation of the impurity density, temperature, and flow to be measured in the pedestal when determining the poloidally varying radial electric field. At present, impurity neoclassical pedestal models avoid the complications of treating finite poloidal gyroradius effects by assuming the impurity charge number is large compared to the main ion charge number. These models are extended slightly by retaining the simplest limit of the impurity radial pressure gradient to demonstrate that no substantial effect arises from impurity diamagnetic effects. More importantly, the neoclassical model is significantly extended to obtain a more comprehensive treatment of the main ions in the plateau and banana regimes. A parallel impurity momentum equation is derived that is consistent with previous results in the banana regime and reduces to the proper large aspect ratio form required in the plateau regime. The implications for interpreting the CXRS measurements are discussed by writing all results in terms of the gradient drive and poloidal flow.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158628</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Merging of the superbanana plateau and √ν transport regimes in nearly quasisymmetric stellarators</title>
<link>https://hdl.handle.net/1721.1/158627</link>
<description>Merging of the superbanana plateau and √ν transport regimes in nearly quasisymmetric stellarators
Catto, Peter J.; Tolman, Elizabeth Ann; Parra Diaz, Felix Ignacio
Alpha particle confinement is one of the most demanding issues for stellarators. It now seems clear that it is possible to design optimized stellarators that confine the background plasma at near-tokamak radial transport levels. Moreover, adequate collisionless alpha particle confinement is possible in the core of a highly optimized stellarator. Here, the collisional confinement of barely trapped alphas in an optimized stellarator is considered by accounting for the resonance due to the reversal in direction of the drift within a flux surface and investigating the sensitive role of magnetic shear in keeping this resonance close to the passing boundary in some nearly quasisymmetric stellarator configurations. The treatment relies on a narrow collisional boundary layer formulation that combines the responses of both these resonant pitch angle alphas and the remaining barely trapped alphas. A novel merged regime treatment leads to explicit expressions for the energy diffusivity for both superbanana plateau (or resonant plateau) and √ν transport in the large aspect ratio limit for a slowing down tail alpha distribution function, where ν is the effective pitch angle scattering collision frequency of the trapped alphas off the background ions. Depending on the details of the optimization scheme and the sign of the magnetic shear, modest magnetic shear can be used to reduce superbanana (or resonant) plateau transport to below the √ν transport level. In addition, a quasilinear equation retaining spatial diffusion is derived for a general alpha distribution function that allows the radial alpha transport to modify the distribution so it is no longer isotropic in velocity space.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158627</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictions of core plasma performance for the SPARC tokamak</title>
<link>https://hdl.handle.net/1721.1/158626</link>
<description>Predictions of core plasma performance for the SPARC tokamak
Rodriguez Fernandez, Pablo; Howard, Nathan T.; Greenwald, M.J.; Creely, A.J.; Hughes, Jerry W.; Wright, John C.; Holland, C.; Lin, Y.; Sciortino, Francesco; SPARC Team
SPARC is designed to be a high-field, medium-size tokamak aimed at achieving net energy gain with Ion Cyclotron Range-of-Frequencies (ICRF) heating as its primary auxiliary heating mechanism. Empirical predictions with conservative physics indicate that SPARC baseline plasmas would reach Q~11, well above its mission objective of Q&gt;2. To build confidence that SPARC will be successful, physics-based integrated modeling has also been performed. The TRANSP code, coupled with the theory-based TGLF turbulence model and EPED predictions for pedestal stability, finds that Q~9 is attainable in standard H-mode operation and confirms that Q&gt;2 operation is feasible even with adverse assumptions. In this analysis, ion cyclotron waves are simulated with the full-wave TORIC code and alpha heating is modeled with the Monte-Carlo fast ion NUBEAM module. Detailed analysis of expected turbulence regimes with linear and nonlinear CGYRO simulations is also presented, demonstrating that profile predictions with the TGLF reduced model are in reasonable agreement.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158626</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Statistical description of coalescing magnetic islands via magnetic reconnection</title>
<link>https://hdl.handle.net/1721.1/158625</link>
<description>Statistical description of coalescing magnetic islands via magnetic reconnection
Zhou, Muni; Wu, David H.; Loureiro, Nuno F.; Uzdensky, Dmitri A.
The physical picture of interacting magnetic islands provides a useful paradigm for certain plasma dynamics in a variety of physical environments, such as the solar corona, the heliosheath and the Earth’s magnetosphere. In this work, we derive an island kinetic equation to describe the evolution of the island distribution function (in area and in flux of islands) subject to a collisional integral designed to account for the role of magnetic reconnection during island mergers. This equation is used to study the inverse transfer of magnetic energy through the coalescence of magnetic islands in two dimensions. We solve our island kinetic equation numerically for three different types of initial distribution: Dirac delta, Gaussian and power-law distributions. The time evolution of several key quantities is found to agree well with our analytical predictions: magnetic energy decays as t̃^-1, the number of islands decreases as t̃^-1 and the average area of islands grows as t̃, where t̃ is the time normalised to the characteristic reconnection time scale of islands. General properties of the distribution function and the magnetic energy spectrum are also studied. Finally, we discuss the underlying connection of our island-merger models to the (self-similar) decay of magnetohydrodynamic turbulence.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Tue, 01 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158625</guid>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling of L-mode heat flux for ITER and COMPASS-U divertors, based on five tokamaks</title>
<link>https://hdl.handle.net/1721.1/158624</link>
<description>Scaling of L-mode heat flux for ITER and COMPASS-U divertors, based on five tokamaks
Horacek, J.; Adamek, J.; Komm, M.; Seidl, J.; Vondracek, P.; Jardin, A.; Guillemaut, Ch.; Elmore, S.; Thornton; Jirakova, K.; Jaulmes, F.; Deng, G.; Gao, X.; Wang, L.; Ding, R.; Brunner, D.; LaBombard, Brian; Olsen, J.; Rasmussen, J.J.; Nielsen, A.H.; Naulin, V.; Ezzat, M.; Comacho, K.M.; Hron, M.; Matthews, G.F.; EUROfusionMSTI Team; JET Contributors; MAST-U Team
This contribution aims to improve existing scalings of the L-mode power decay length Lambda_q_OMP, especially for plasma configurations with strike points at the ITER-relevant location, i.e. closed vertical divertor targets. We propose 13 new Lambda_q_OMP scalings based on data from the tokamaks JET, EAST, MAST, Alcator C-Mod and COMPASS, and validate them against the output of the 2D turbulence code HESEL. The analysis covers 500 divertor heat flux profiles (obtained by probes or IR cameras), measured in L-mode discharges while varying 12 global plasma parameters (all well predictable). We find that the two previously published scalings (Eich 2013 J. Nucl. Mat. 438 S72) and (Scarabosio 2013 J. Nucl. Mat. 438 S426), which were based on outer target data from AUG and JET, describe the JET, C-Mod and COMPASS profiles well. This holds not only at the outer horizontal and vertical targets, but surprisingly also at the inner vertical targets. In contrast, EAST, HESEL and especially MAST data are poorly described by these two scalings. We therefore derive 13 new scalings, which account for 85-92% of the measured Lambda_q_OMP variability across all five tokamaks. Although each of the scalings is based on a different parameter combination, their predictions for the ITER and COMPASS-Upgrade tokamaks are very similar. Just before the L-H transition in the ITER baseline scenario, the presented scalings predict Lambda_q_OMP = 3.0 +/- 0.5 mm. For the COMPASS-Upgrade tokamak, all the scalings predict Lambda_q_OMP = 2.1 +/- 0.5 mm, with the single exception of the scaling based on the stored plasma energy, which predicts only 1.2 mm for both tokamaks. We encourage the reader to use as many of these scalings as possible, depending on available data. In attached plasma and under significant assumptions, our results imply a steady-state surface-perpendicular heat flux of around 10 MW/m^2 for ITER and 20 MW/m^2 for COMPASS-Upgrade.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158624</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Review of progress and challenges of key mechanical issues in high-field superconducting magnets</title>
<link>https://hdl.handle.net/1721.1/158623</link>
<description>Review of progress and challenges of key mechanical issues in high-field superconducting magnets
Zhou, You-He; Park, Dongkeun; Iwasa, Yukikazu
The development of modern science and technology requires high magnetic fields exceeding 25 T. Second-generation high-temperature superconducting wires, i.e. REBCO (REBa2Cu3O7-x, where RE refers to Y, Gd, Dy, Eu and other rare-earth elements) coated conductors (CCs), have become the first choice for high-field magnet construction because of their high irreversibility field. The mechanical stresses caused by manufacturing, thermal mismatch and Lorentz forces closely influence the electromagnetic performance of REBCO CCs during operation. In addition, the recently studied screening currents affect the mechanical characteristics of high-field REBCO magnets. In this review, the experimental and main theoretical works on critical current degradation, delamination and fatigue, and shear investigations of REBCO CCs are reviewed first. Then, research progress on the screening-current effect in the development of high-field superconducting magnets is introduced. Finally, the key mechanical problems facing the future development of high-field magnets based on REBCO CCs are outlined.
Submitted for publication in National Science Review
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158623</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>The structure of 3-D collisional magnetized bow shocks in pulsed-power-driven plasma flows</title>
<link>https://hdl.handle.net/1721.1/158622</link>
<description>The structure of 3-D collisional magnetized bow shocks in pulsed-power-driven plasma flows
Datta, R.; Russell, D.R.; Tang, I.; Clayson, T.; Suttle, L.G.; Chittenden, J.P.; Lebedev, S.V.; Hare, Jack D.
We investigate 3D bow shocks in a highly collisional magnetized aluminum plasma, generated during the ablation phase of an exploding wire array on the MAGPIE facility (1.4 MA, 240 ns). Ablation of plasma from the wire array generates radially diverging, supersonic (MS ∼ 7), super-Alfvénic (MA &gt; 1) magnetized flows with frozen-in magnetic flux (RM ≫ 1). These flows collide with an inductive probe placed in the flow, which serves both as the obstacle that generates the magnetized bow shock and as a diagnostic of the advected magnetic field. Laser interferometry along two orthogonal lines of sight is used to measure the line-integrated electron density. A detached bow shock forms ahead of the probe, with a larger opening angle in the plane parallel to the magnetic field than in the plane normal to it. Since the resistive diffusion length of the plasma is comparable to the probe size, the magnetic field decouples from the ion fluid at the shock front and generates a hydrodynamic shock, whose structure is determined by the sonic Mach number, rather than the magnetosonic Mach number of the flow. 3D simulations performed using the resistive magnetohydrodynamic (MHD) code GORGON confirm this picture, but under-predict the anisotropy observed in the shape of the experimental bow shock, suggesting that non-MHD mechanisms may be important for modifying the shock structure.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Thu, 01 Dec 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158622</guid>
<dc:date>2022-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Data augmentation for disruption prediction via robust surrogate models</title>
<link>https://hdl.handle.net/1721.1/158621</link>
<description>Data augmentation for disruption prediction via robust surrogate models
Rath, Katharina; Rügamer, David; Bischl, Bernd; von Toussaint, Udo; Rea, Cristina; Maris, Andrew D.; Granetz, Robert; Albert, Christopher G.
The goal of this work is to generate large, statistically representative datasets to train machine learning models for disruption prediction, starting from data from the few existing discharges. Such a comprehensive training database is important for achieving satisfying and reliable prediction results in artificial neural network classifiers. Here, we aim for a robust augmentation of the training database for multivariate time series data using Student-t process regression. We apply Student-t process regression in a state space formulation via Bayesian filtering to tackle challenges imposed by outliers and noise in the training dataset and to reduce the computational complexity. Thus, the method can also be used if the time resolution is high. We use an uncorrelated model for each dimension and impose correlations afterwards via coloring transformations. We demonstrate the efficacy of our approach on plasma diagnostics data of three different disruption classes from the DIII-D tokamak. To evaluate whether the distribution of the generated data is similar to the training data, we additionally perform statistical analyses using methods from time series analysis, descriptive statistics, and classic machine learning clustering algorithms.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sun, 01 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158621</guid>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Fast and Accurate Predictions of Radio Frequency Power Deposition and Current Profile via Data-driven Modeling</title>
<link>https://hdl.handle.net/1721.1/158620</link>
<description>Towards Fast and Accurate Predictions of Radio Frequency Power Deposition and Current Profile via Data-driven Modeling
Wallace, Greg M.; Bai, Z.; Sadre, R.; Perciano, T.; Bertelli, N.; Shiraiwa, S.; Bethel, E.W.; Wright, John C.
Three machine learning techniques (multilayer perceptron, random forest, and Gaussian process) provide fast surrogate models for lower hybrid current drive (LHCD) simulations. A single GENRAY/CQL3D simulation without radial diffusion of fast electrons requires several minutes of wall-clock time to complete, which is acceptable for many purposes, but too slow for integrated modeling and real-time control applications. The machine learning models use a database of 16,000+ GENRAY/CQL3D simulations for training, validation, and testing. Latin hypercube sampling methods ensure that the database covers the range of 9 input parameters (ne0, Te0, Ip, Bt, R0, n||, Zeff, Vloop, PLHCD) with sufficient density in all regions of parameter space. The surrogate models reduce the inference time from minutes to ∼ms with high accuracy across the input parameter space.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158620</guid>
<dc:date>2022-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Interaction of radio frequency waves with cylindrical density filaments -- scattering and radiation pressure</title>
<link>https://hdl.handle.net/1721.1/158619</link>
<description>Interaction of radio frequency waves with cylindrical density filaments -- scattering and radiation pressure
Valvis, Spyridon I.; Ram, Abhay K.; Hizanidis, Kyriakos
The propagation of radio frequency (RF) waves in tokamaks can be affected by filamentary structures, or blobs, that are present in the edge plasma and the scrape-off layer. The difference in the permittivity between the surrounding plasma and the interior of a filament leads to reflection, refraction, and diffraction of the waves. This, in turn, can affect the power flow into the core of the plasma and reduce the efficiency of heating and/or current generation. The scattering of RF waves -- lower hybrid, helicon, and ion cyclotron waves -- by a single cylindrical filament, embedded in a background plasma, is studied using a full-wave analytical theory developed previously [A. K. Ram and K. Hizanidis, Phys. Plasmas 23, 022504 (2016)]. The theory assumes that the plasma in and around a filament is homogeneous and cold. A detailed scattering analysis reveals a variety of common features that exist among the three distinctly different RF waves. These common attributes can be inferred intuitively from an examination of the cold plasma dispersion relation. The physical intuition is a useful step toward understanding experimental observations on scattering, as well as results from simulations that include general forms of edge plasma turbulence. While a filament can affect the propagation of RF waves, the radiation force exerted by the waves can influence the filament. The force on a filament is determined using the Maxwell stress tensor. In 1905, Poynting was the first to evaluate and measure the radiation force on an interface separating two different dielectric media [J. H. Poynting, Phil. Mag. 9, 393-406 (1905)]. For ordinary light propagating in vacuum and incident on a glass surface, Poynting noted that the surface is “pulled” towards the vacuum. In a magnetized cold plasma, there are two independent wave modes. Even if only one of these modes is excited by an RF antenna, a filament will couple power to the other mode -- a consequence of electromagnetic boundary conditions. This facet of scattering results in the radiation force having more diversified attributes than those in Poynting's seminal contribution. The direction of the force depends on the polarization of the incident wave and on the mode structure of the waves inside and in the vicinity of a filament. It can either pull the filament toward the RF source or push it away. For slow lower hybrid waves, filaments are pulled in regardless of whether they are more or less dense than the ambient plasma. For fast helicon and ion cyclotron waves, the direction of the force depends on the plasma and wave parameters, in particular on the ambient density. For all three waves, the radiation force is large enough to impact the motion of a filament and could be measured experimentally. This suggests the possibility of modifying the edge turbulence using RF waves.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158619</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Lower hybrid current drive in a tokamak for correlated passes through resonance</title>
<link>https://hdl.handle.net/1721.1/158618</link>
<description>Lower hybrid current drive in a tokamak for correlated passes through resonance
Catto, Peter J.
Standard quasilinear descriptions are based on the constant magnetic field form of the quasilinear operator so improperly treat the trapped electron modifications associated with tokamak geometry. Moreover, successive poloidal transits of the Landau resonance during lower hybrid current drive in a tokamak are well correlated, and these geometrical details must be properly retained to account for the presence of trapped electrons that do not contribute to the driven current. The recently derived quasilinear operator in tokamak geometry accounts for these features and finds that the quasilinear diffusivity is proportional to a delta function with a transit or bounce averaged argument (rather than a local Landau resonance condition). The new quasilinear operator is combined with the Cordey (Nucl. Fusion, vol. 16, 1976, pp. 499–507) eigenfunctions to properly derive a rather simple and compact analytic expression for the trapped electron modifications to the driven lower hybrid current and the efficiency of the current drive.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158618</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bootstrap current and parallel ion velocity in imperfectly optimized stellarators</title>
<link>https://hdl.handle.net/1721.1/158617</link>
<description>Bootstrap current and parallel ion velocity in imperfectly optimized stellarators
Catto, Peter J.; Helander, Per
A novel derivation of the parallel ion velocity, and the bootstrap and Pfirsch-Schlüter currents, in an imperfectly optimized (that is, almost omnigenous) stellarator magnetic field B is presented. It is shown that, when the conventional radially local form of the drift kinetic equation is employed, the flow velocity and the bootstrap current acquire a spurious contribution proportional to ω/ν, where ω denotes the E × B rotation frequency (due to the radial electric field E) and ν the collision frequency. This contribution is particularly large in the √ν regime and at smaller collisionalities, where ω/ν &gt; 1, and is presumably present in most numerical calculations, but it disappears if a more accurate drift kinetic equation is used.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sun, 01 Sep 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158617</guid>
<dc:date>2019-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>The Very High n Rydberg Series of Ar^16+ in Alcator C-Mod Tokamak Plasmas</title>
<link>https://hdl.handle.net/1721.1/158616</link>
<description>The Very High n Rydberg Series of Ar^16+ in Alcator C-Mod Tokamak Plasmas
Rice, John E.; Sciortino, Francesco; Gu, M.; Cao, N.; Hughes, Jerry W.; Irby, J.H.; Marmar, E.S.; Mordijck, S.; Reinke, M.L.; Reksoatmodjo, R.
X-ray transitions of the very high-n Rydberg series in Ar^16+ have been observed from Alcator C-Mod tokamak plasmas. Individual emission lines up to 1s16p - 1s^2 have been resolved, and the central chord line brightnesses with principal quantum number n between 7 and 16 are generally found to decay as 1/n^alpha, with alpha slightly larger than 3. In the plasma periphery, emission from 1s9p - 1s^2 and 1s10p - 1s^2 is found to be significantly enhanced relative to this decrease, indicative of selective population of these levels through charge exchange between background neutral deuterium in the ground state and Ar^17+. An unresolved feature between the wavelengths of 1s27p - 1s^2 and 1s30p - 1s^2 is also present, which arises through charge exchange with neutral deuterium in the n^* = 3 excited state. The brightnesses of transitions populated by charge exchange are spatially up/down asymmetric, with an excess on the side of the magnetic surface X-point. The relative brightness of the unresolved very high-n feature compared to 1s7p - 1s^2 is found to increase with electron temperature and decrease with electron density. Simulations of line emission just on the long wavelength side of the Ar^16+ ionization limit indicate that the principal quantum number decay exponent is closer to alpha = 4 at very high n. The brightness dependence on n below 16 is in excellent agreement with calculations from the Flexible Atomic Code package.
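The 1/n^alpha decay quoted above can be extracted from measured line brightnesses by a straight-line fit in log-log space; a minimal sketch with synthetic brightnesses (the values below are illustrative, not C-Mod data):

```python
import math

def fit_decay_exponent(n_values, brightnesses):
    """Least-squares fit of B(n) = C / n**alpha in log-log space; returns alpha."""
    xs = [math.log(n) for n in n_values]
    ys = [math.log(b) for b in brightnesses]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # B ~ n**-alpha, so alpha is minus the log-log slope

# synthetic brightnesses decaying exactly as n**-3.2 over n = 7..16 (illustrative)
ns = list(range(7, 17))
bs = [n ** -3.2 for n in ns]
alpha = fit_decay_exponent(ns, bs)
```

In practice the fit would be applied to the measured central chord brightnesses, and deviations of individual lines above the fitted trend (as for 1s9p and 1s10p here) flag non-collisional population mechanisms such as charge exchange.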
Submitted for publication in Journal of Physics B: Atomic, Molecular and Optical Physics
</description>
<pubDate>Thu, 01 Jul 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158616</guid>
<dc:date>2021-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>X-ray Observations of Ne-like Xe and Satellites from C-Mod Tokamak Plasmas</title>
<link>https://hdl.handle.net/1721.1/158615</link>
<description>X-ray Observations of Ne-like Xe and Satellites from C-Mod Tokamak Plasmas
Rice, John E.; Fournier, K.B.; Kemp, G.E.; Bitter, M.; Cao, N.; Delgado-Aparicio, L.; Hill, K.; Hubbard, Amanda E.; Hughes, Jerry W.; Reinke, M.L.
X-ray spectra in the wavelength range from 2.70 to 2.76 A from xenon (Z = 54) in near neon-like charge states have been observed in  Alcator C-Mod tokamak plasmas. The 3D (2p^6 - (2p^5)_{3/2}3d_{5/2}, 2720.4 mA) and 3F (2p^6 - (2p^5)_{1/2}3s_{1/2}, 2729.0 mA) transitions from neon-like Xe^{44+}  have been identified, along with nearby Na-, Mg- and Al-like satellites. The intensity ratio of 3D to the Mg-like satellite near 2.74 A increases strongly with electron temperature in the range from 3 to 4 keV.
Submitted for publication in Journal of Physics B: Atomic, Molecular and Optical Physics
</description>
<pubDate>Mon, 01 Jul 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158615</guid>
<dc:date>2019-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Helicon and lower hybrid current drive comparisons in tokamak geometry</title>
<link>https://hdl.handle.net/1721.1/158614</link>
<description>Helicon and lower hybrid current drive comparisons in tokamak geometry
Catto, Peter J.; Zhou, Muni
The parallel current driven by applied helicon waves is evaluated in tokamak geometry along with the radio frequency (rf) power absorbed by the passing electrons. The results are compared to the corresponding expressions for lower hybrid current drive. The efficiency of both current drive schemes is found to be the same in the single wave frequency, single mode number limit. The evaluation of the parallel currents is performed using an adjoint technique and tokamak geometry is retained by using an eigenfunction expansion appropriate for a transit averaged long mean free path treatment of electrons making correlated poloidal passes through the applied rf fields.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158614</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scattering of radio frequency waves by randomly modulated density interfaces in the edge of fusion plasmas</title>
<link>https://hdl.handle.net/1721.1/158613</link>
<description>Scattering of radio frequency waves by randomly modulated density interfaces in the edge of fusion plasmas
Papadopoulos, A.D.; Glytsis, E.N.; Ram, Abhay K.; Hizanidis, K.
In the scrape-off layer and the edge region of a tokamak, the plasma is strongly turbulent and scatters the radio frequency (RF) electromagnetic waves that propagate through this region. Whether the waves are used for diagnostics or for heating and current drive, it is important to know the spectral properties of the scattered RF waves, since the spectral changes influence the interpretation of the diagnostic data obtained as well as the current and heating profiles. A full-wave, 3D electromagnetic code, ScaRF (see Papadopoulos et al. 2019), has been developed for studying RF wave propagation through turbulent plasma. ScaRF is a finite-difference frequency-domain (FDFD) method for solving Maxwell's equations. The magnetized plasma is defined through the cold plasma, anisotropic permittivity tensor. As a result, ScaRF can be used to study the scattering of any cold plasma RF wave. It can be used for the study of scattering of electron cyclotron waves in ITER-type and medium-sized tokamaks such as TCV, ASDEX-U, and DIII-D. For medium-sized tokamaks, there is experimental evidence that drift waves and rippling modes are present in the edge region (see Ritz et al. 1984). Hence, we study the scattering of RF waves by periodic density interfaces (plasma gratings) in the form of a superposition of spatial modes with varying periodicity and random amplitudes (see Papadopoulos et al. 2019). The power reflection coefficient (a random variable) is calculated for different realizations of the density interface. In this work, the uncertainty of the power reflection coefficient is rigorously quantified by use of the Polynomial Chaos Expansion (see Xiu &amp; Karniadakis 2002) method in conjunction with Smolyak sparse grid integration (see Papadopoulos et al. 2018) (PCE-SG). The PCE-SG method is proven accurate and much more efficient (roughly two orders of magnitude shorter execution time) compared to alternative methods such as the Monte Carlo (MC) approach.
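The efficiency gap between spectral-type UQ and Monte Carlo sampling claimed above can be illustrated in one dimension: when a quantity depends smoothly on a single Gaussian random input, a deterministic quadrature reaches with ~10^2 evaluations an accuracy that sampling needs ~10^5 draws to match. The integrand `f` below is a toy stand-in, not the actual reflection-coefficient model:

```python
import math
import random

def f(x):
    # toy stand-in for a smooth response to a Gaussian random perturbation
    return math.exp(-0.5 * x * x) * math.cos(x)

def pdf(x):
    # standard normal probability density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# deterministic route: 129-point trapezoid rule on [-8, 8]; the smooth,
# rapidly decaying integrand makes this converge far faster than sampling
n, a, b = 129, -8.0, 8.0
h = (b - a) / (n - 1)
xs = [a + i * h for i in range(n)]
mean_quad = h * (sum(f(x) * pdf(x) for x in xs)
                 - 0.5 * (f(a) * pdf(a) + f(b) * pdf(b)))

# Monte Carlo route: ~10^5 samples for comparable (still worse) accuracy
random.seed(0)
samples = 100_000
mean_mc = sum(f(random.gauss(0.0, 1.0)) for _ in range(samples)) / samples
```

PCE-SG generalizes this idea to many random dimensions: the Smolyak sparse grid keeps the number of deterministic solver runs manageable while retaining the fast spectral convergence that plain MC lacks.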
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158613</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multisatellite MMS Analysis of Electron Holes in the Earth's Magnetotail: Origin, Properties, Velocity Gap, and Transverse Instability</title>
<link>https://hdl.handle.net/1721.1/158612</link>
<description>Multisatellite MMS Analysis of Electron Holes in the Earth's Magnetotail: Origin, Properties, Velocity Gap, and Transverse Instability
Lotekar, A.; Vasko, I.Y.; Mozer, F.S.; Hutchinson, Ian H.; Artemyev, A.V.; Bale, S.D.; Bonnell, J.W.; Ergun, R.; Giles, B.; Khotyaintsev, Yu. V.; Lindqvist, P.-A.; Russell, C.T.; Strangeway, R.
We present a statistical analysis of more than 2400 bipolar electrostatic solitary waves measured aboard at least three MMS spacecraft in the Earth's magnetotail. These bipolar solitary waves are interpreted in terms of electron holes because of their positive electrostatic potentials. Multi-spacecraft interferometry is used to estimate the propagation velocity of the electron holes and to address their origin and properties. The electron hole velocities in the plasma rest frame range from a few km/s, much smaller than the ion thermal velocity VTi, up to 20,000 km/s, which is comparable to the electron thermal velocity VTe. We argue that fast electron holes with velocities larger than about 0.1 VTe are produced by bump-on-tail instabilities, while most slow electron holes with velocities below about 0.05 VTe are predominantly produced by warm bi-stream instabilities. We have identified a gap in the distribution of electron hole velocities between about VTi and 2 VTi, which is considered to be evidence for the recently simulated self-acceleration process [Zhou and Hutchinson, 2018] and/or ion Landau damping of electron holes. In accordance with previous measurements, the parallel spatial scales of the electron holes are typically d‖ ≈ 10 λD and the amplitudes are typically 10^-3 Te ≲ eφ0 ≲ 0.1 Te. We show that electron hole amplitudes are below the threshold of the transverse electron hole instability and are highly likely restricted by the nonlinear saturation criterion of the electron streaming instabilities seeding electron hole formation. The transverse instability and the nonlinear saturation criterion are suggested to restrict electron hole amplitudes as eφ0 ≲ me Γ² d‖², where Γ = min(γ, 1.5 ωce), γ is the growth rate of the instabilities seeding electron hole formation, and ωce is the electron cyclotron frequency.
Submitted for publication in Journal of Geophysical Research: Space Physics
</description>
<pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158612</guid>
<dc:date>2020-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Acoustic MEMS Sensor Array for Quench Detection of CICC Superconducting Cables</title>
<link>https://hdl.handle.net/1721.1/158611</link>
<description>Acoustic MEMS Sensor Array for Quench Detection of CICC Superconducting Cables
Takayasu, Makoto
A novel quench detection method using microelectromechanical system (MEMS) sensor technology has been investigated for use with high temperature superconducting (HTS) conductors such as REBCO tape cables. The sensor array, installed in a cooling channel along a superconducting cable such as a cable-in-conduit conductor (CICC), allows sensitive and quick detection of a local quench of the cable. This work has confirmed that a quench of a single REBCO tape can be detected in liquid nitrogen by a MEMS piezoelectric microphone sensor. The quench detection design utilizing the MEMS sensor array method is discussed for the case of the toroidal field (TF) magnets of a fusion tokamak device.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Sep 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158611</guid>
<dc:date>2019-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring MDSplus data-acquisition software and custom devices</title>
<link>https://hdl.handle.net/1721.1/158610</link>
<description>Exploring MDSplus data-acquisition software and custom devices
Santoro, Fernando; Stillerman, Joshua; Lane Walsh, Stephen; Fredian, Thomas
MDSplus is a software tool designed for data acquisition, storage, and analysis of complex scientific experiments. Over the years, MDSplus has primarily been used for data management for fusion experiments. This paper demonstrates that MDSplus can be used for a much wider variety of systems and experiments. We present a step-by-step tutorial describing how to create a simple experiment, manage the data, and analyze it using MDSplus and Python. To this end, a custom example device was developed to be used as the data source. This device was built on an open-source electronic hardware platform, and it consists of a microcontroller and two sensors. We read data from these sensors, store it in MDSplus, and use JupyterLab to visualize and process it. This project and code demo are available on GitHub at this URL: https://github.com/santorofer/MDSplusAndCustomeDevices
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158610</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>ARC reactor materials: Activation analysis and optimization</title>
<link>https://hdl.handle.net/1721.1/158609</link>
<description>ARC reactor materials: Activation analysis and optimization
Bocci, B.; Hartwig, Zachary S.; Segantin, S.; Testoni, R.; Whyte, D.; Zucchetti, M.
Fusion energy is one of the most actively studied future energy sources, and in recent years several fusion reactor designs have been considered. At MIT, an innovative design was created: ARC, the Affordable Robust Compact reactor. It takes advantage of recent progress in fusion technology, such as high temperature superconductors, which make it possible to reduce the dimensions of the machine while reaching high magnetic fields. Our main goal is the low-activation analysis of possible structural materials for the vacuum vessel, which is designed as a single piece placed between the first wall and the tank that contains the breeding blanket. Due to its position, the vacuum vessel is subject to a high neutron flux, which can activate it, reduce the component lifetime, and cause decommissioning problems. The activation analysis was also performed for the liquid breeder FLiBe, compared with lithium-lead. The codes used for the low-activation analysis were MCNP and FISPACT-II. MCNP is based on a neutronics model and evaluates the neutron flux for each component. For FISPACT-II, the main inputs are the composition of the analyzed material, the neutron flux, and the irradiation time; its outputs are the time behavior of the specific activity and the contact dose rate. To assess suitable structural materials for the vacuum vessel, low-activation properties were considered. Vanadium alloys turn out to be one of the best alternatives to the present material, Inconel-718. Finally, isotopic tailoring and elemental substitution methods were applied: the composition of each alloy is analyzed and critical isotopes or elements are eliminated or reduced. After the modifications, new simulations are performed, and those leading to significant improvements in the final results are highlighted.
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158609</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>W0.5TaTiVCr-based composite reinforced with W-mesh for Fusion Plasma-Facing Applications</title>
<link>https://hdl.handle.net/1721.1/158608</link>
<description>W0.5TaTiVCr-based composite reinforced with W-mesh for Fusion Plasma-Facing Applications
Waseem, Owais Ahmed; Ryu, Ho Jin
We present research into tungsten (W) alloy-based composites reinforced with W-mesh. Due to its low activation and high strength, W0.5TaTiVCr was used as the matrix material. Layers of W-mesh (Wmesh) were embedded in W0.5TaTiVCr to improve ductility and toughness. We employed elemental powder mixing and spark plasma sintering (SPS) at 1600 °C for sample preparation, a simpler method compared to chemical vapor infiltration and hot isostatic pressing. Microstructural analysis shows W-mesh that is well bonded with the W0.5TaTiVCr matrix, which exhibits multiple phases and a BCC structure. The room temperature compressive fracture strain of the W0.5TaTiVCr/Wmesh composites improves from ~3.5% to ~15.8% as the Wmesh concentration increases from 10 wt% to 50 wt%, whereas the compressive yield strength changes from ~1900 MPa to ~1700 MPa (at room temperature) and from ~1200 MPa to ~950 MPa (at 1200 °C). The W0.5TaTiVCr matrix alone shows a fracture toughness of ~7.7 MPa·m1/2, and the addition of 10 wt% Wmesh in W0.5TaTiVCr results in more than a two-fold increase in fracture toughness (up to ~20 MPa·m1/2), which suggests a potential use of this material in fusion reactors.
Submitted for publication in Functional Composites and Structures
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158608</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Qubit Lattice Algorithm Simulations of Maxwell’s Equations for Scattering from Anisotropic Dielectric Objects</title>
<link>https://hdl.handle.net/1721.1/158607</link>
<description>Qubit Lattice Algorithm Simulations of Maxwell’s Equations for Scattering from Anisotropic Dielectric Objects
Vahala, George; Soe, Min; Vahala, Linda; Ram, Abhay K.; Koukoutsis, Efstratios; Hizanidis, Kyriakos
A Dyson map explicitly determines the appropriate basis of electromagnetic fields which yields a unitary representation of the Maxwell equations in an inhomogeneous medium. A qubit lattice algorithm (QLA) is then developed perturbatively to solve this representation of the Maxwell equations. The QLA consists of an interleaved unitary sequence of collision operators (which entangle the on-site lattice qubits) and streaming operators (which move this entanglement throughout the lattice). External potential operators are introduced to handle gradients in the refractive indices; these operators are typically non-unitary but sparse matrices. By also interleaving the external potential operators with the unitary collide-stream operators, one achieves a QLA which conserves energy to high accuracy. Some two-dimensional simulation results are presented for the scattering of a one-dimensional (1D) pulse off a localized anisotropic dielectric object.
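The interleaved collide-stream structure described above can be sketched in one dimension with a two-amplitude lattice: the collision is a unitary 2x2 rotation of the on-site amplitudes and the streaming is a lattice shift, so the sequence conserves the norm exactly. This is a schematic illustration of the operator splitting, not the paper's Maxwell representation:

```python
import cmath
import math

def collide(q0, q1, theta):
    """Unitary 2x2 'collision' mixing the two on-site amplitudes."""
    c, s = math.cos(theta), math.sin(theta)
    return ([c * a - s * b for a, b in zip(q0, q1)],
            [s * a + c * b for a, b in zip(q0, q1)])

def stream(q, shift):
    """Periodic streaming: shift amplitudes along the lattice."""
    return q[-shift:] + q[:-shift]

# initial localized pulse on a 64-site periodic lattice
N = 64
q0 = [cmath.exp(-0.1 * (i - N // 2) ** 2) for i in range(N)]
q1 = [0j] * N

norm0 = sum(abs(a) ** 2 + abs(b) ** 2 for a, b in zip(q0, q1))
for _ in range(50):  # interleaved unitary collide-stream sequence
    q0, q1 = collide(q0, q1, math.pi / 8)
    q0, q1 = stream(q0, +1), stream(q1, -1)
norm1 = sum(abs(a) ** 2 + abs(b) ** 2 for a, b in zip(q0, q1))
```

Because both operators are unitary (a rotation and a permutation), `norm1` equals `norm0` to machine precision; in the full QLA the non-unitary external potential operators are the only source of the small energy drift mentioned above.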
Submitted for publication in Computers &amp; Fluids
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158607</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetogenesis in a Collisionless Plasma: From Weibel Instability to Turbulent Dynamo</title>
<link>https://hdl.handle.net/1721.1/158606</link>
<description>Magnetogenesis in a Collisionless Plasma: From Weibel Instability to Turbulent Dynamo
Zhou, Muni; Zhdankin, Vladimir; Kunz, Matthew W.; Loureiro, Nuno F.; Uzdensky, Dmitri A.
We report on a first-principles numerical and theoretical study of plasma dynamo in a fully kinetic framework. By applying an external mechanical force to an initially unmagnetized plasma, we develop a self-consistent treatment of the generation of "seed" magnetic fields, the formation of turbulence, and the inductive amplification of fields by the fluctuation dynamo. Driven large-scale motions in an unmagnetized, weakly collisional plasma are subject to strong phase mixing, which leads to the development of thermal pressure anisotropy. This anisotropy triggers the Weibel instability, which produces filamentary "seed" magnetic fields on plasma-kinetic scales. The plasma is thereby magnetized, enabling efficient stretching and folding of the fields by the plasma motions and the development of Larmor-scale kinetic instabilities such as the firehose and mirror. The scattering of particles off the associated microscale magnetic fluctuations provides an effective viscosity, regulating the field morphology and turbulence. During this process, the seed field is further amplified by the fluctuation dynamo until energy equipartition with the turbulent flow is reached. By demonstrating that equipartition magnetic fields can be generated from an initially unmagnetized plasma through large-scale turbulent flows, this work has important implications for the origin and amplification of magnetic fields in the intracluster and intergalactic media.
Submitted for publication in Astrophysical Journal
</description>
<pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158606</guid>
<dc:date>2024-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent high-power RF wakefield generation by electron bunch trains in a metamaterial structure</title>
<link>https://hdl.handle.net/1721.1/158605</link>
<description>Coherent high-power RF wakefield generation by electron bunch trains in a metamaterial structure
Lu, Xueying; Picard, Julian F.; Shapiro, Michael A.; Mastovsky, Ivan; Temkin, Richard J.; Conde, Manoel; Power, John G.; Shao, Jiahang; Wisniewski, Eric E.; Peng, Maowanghui; Seok, Jimin; Doran, Scott; Jing, Chunguang
We present an experimental study of coherent high-power wakefield generation in a metamaterial (MTM) structure at 11.7 GHz by 65 MeV electron bunch trains at the Argonne Wakefield Accelerator (AWA), following a previous experiment, the Stage-I experiment, at AWA. Both the Stage-II experiment, reported in this paper, and the Stage-I experiment were conducted using MTM structures, which are all-metal periodic structures with the period much smaller than the wavelength. Differences between the two experiments include: (1) Structure length (Stage-I 8 cm, Stage-II 20 cm); (2) Number of bunches used to excite the structure (Stage-I with 2 bunches, up to 85 nC of total charge; Stage-II with 8 bunches, up to 224 nC of total charge); (3) Highest peak power measured (Stage-I 80 MW in a 2 ns pulse, Stage-II 380 MW in a 10 ns pulse). The high-power radiofrequency (RF) pulses were generated by reversed Cherenkov radiation of the electron beam due to the negative group velocity in the MTM structures. Because the radiation is coherent, a train of bunches with a proper spacing can build up to achieve a high peak power. The observed output power levels are very promising for future applications in direct collinear wakefield acceleration or in transfer to a second accelerator for two beam acceleration.
Submitted for publication in Applied Physics Letters
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158605</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments to explore the influence of pulse shaping at the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158604</link>
<description>Experiments to explore the influence of pulse shaping at the National Ignition Facility
Thomas, C.A.; Campbell, E.M.; Baker, K.L.; Casey, D.T.; Hohenberger, M.; Kritcher, A.L.; Spears, B.K.; Khan, S.F.; Nora, R.; Woods, D.T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Ho, D.D.; Clark, D.; Bachmann, B.; Benedetti, L.R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.N.; Grim, G.; Hatarik, R.; Izumi, N.; Kyrala, G.; Ma, T.; Millot, M.; Nagel, S.R.; Patel, P.K.; Yeamans, C.; Nikroo, A.; Tabak, M.; Gatu Johnson, Maria; Volegov, P.L.; Finnegan, S.M.
The shaping of the drive pulse in time is a key tool in the design of fusion experiments that use inertia to confine burning plasmas. It is directly related to the adiabat and compressibility of the DT fuel, and to the characteristics of the laser and target that are needed to ignite. With this in mind, we have performed experiments at the National Ignition Facility that test small changes in the shape of the pulse. In contrast to theory, we find implosions at lower adiabats can have reduced yield and areal density. We discuss implications for performance and the mechanism(s) that could be responsible.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158604</guid>
<dc:date>2020-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Symmetry tuning and high energy coupling for an Al capsule in a Au rugby hohlraum on NIF</title>
<link>https://hdl.handle.net/1721.1/158603</link>
<description>Symmetry tuning and high energy coupling for an Al capsule in a Au rugby hohlraum on NIF
Ping, Y.; Smalyuk, A.; Amendt, P.; Khan, S.; Tommasini, R.; Dewald, E.; Field, J.E.; Graziani, F.; Hartouni, E.; Johnson, S.; Landen, O.L.; Lindl, J.; MacPhee, A.; Nikroo, A.; Nora, R.; Prisbrey, S.; Ralph, J.; Seugling, R.; Strozzi, D.; Tipton, R.E.; Wang, Y.M.; Kim, Y.; Loomis, E.; Meaney, K.D.; Merritt, E.; Montgomery, D.; Kabadi, Neel V.; Lahmann, Brandon; Petrasso, Richard D.
Experiments on imploding an Al capsule in a Au rugby hohlraum with up to 1.5 MJ laser drive were performed on the National Ignition Facility (NIF). The capsule diameter was 3.0 mm with ∼ 1 MJ drive and 3.4 mm with ∼ 1.5 MJ drive. Effective symmetry tuning by modifying the rugby hohlraum shape was demonstrated, and good shell symmetry was achieved for 3.4 mm capsules at a convergence of ∼10. The nuclear bang time and the shell velocity from simulations agree with experimental data, indicating ∼500 kJ coupling with 1.5 MJ drive, or ∼30% efficiency. The peak velocity reached above 300 km/s for a 120 µm-thick Al capsule. The laser backscatter inside the low-gas-fill rugby hohlraum was very low (&lt;4%) at both scales. The high energy coupling allows implosion designs with increased adiabat which in turn increases the tolerance to detrimental effects of instabilities and asymmetries. These encouraging experimental results open new opportunities for both the mainline single-shell scheme and the double-shell design toward ignition.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158603</guid>
<dc:date>2020-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence of non-Maxwellian ion velocity distributions in spherical shock-driven implosions</title>
<link>https://hdl.handle.net/1721.1/158602</link>
<description>Evidence of non-Maxwellian ion velocity distributions in spherical shock-driven implosions
Mannion, O.; Taitano, W.T.; Appelbe, B.D.; Crilly, A.J.; Forrest, C.J.; Glebov, V. Yu.; Knauer, J.P.; McKenty, P.W.; Mohamed, Z.L.; Stoeckl, C.; Keenan, B.D.; Chittenden, J.P.; Adrian, Patrick J.; Frenje, Johan A.; Kabadi, Neel V.; Gatu Johnson, Maria; Regan, S.P.
The ion velocity distribution functions of thermonuclear plasmas generated by spherical laser direct drive implosions are studied using deuterium-tritium (DT) and deuterium-deuterium (DD) fusion neutron energy spectrum measurements. A hydrodynamic Maxwellian plasma model accurately describes measurements made from lower temperature (&lt; 10 keV), hydrodynamic-like plasmas, but is insufficient to describe measurements made from higher temperature, more kinetic-like plasmas. The high temperature measurements are more consistent with Vlasov-Fokker-Planck (VFP) simulation results, which predict the presence of a bimodal plasma ion velocity distribution near peak neutron production. These measurements provide direct experimental evidence of non-Maxwellian ion velocity distributions in spherical shock-driven implosions and provide useful data for benchmarking kinetic VFP simulations.
Submitted for publication in Physical Review. E, Statistical physics, plasmas, fluids, and related interdisciplinary topics
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158602</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Diagnosing the Origin and Impact of Low-mode Asymmetries in Ignition Experiments at the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158601</link>
<description>Diagnosing the Origin and Impact of Low-mode Asymmetries in Ignition Experiments at the National Ignition Facility
Casey, D.; MacGowan, B.; Hurricane, O.; Landen, O.; Nora, R.; Haan, S.; Kritcher, A.; Zylstra, A.; Ralph, J.; Dewald, E.; Hohenberger, M.; Pak, A.; Springer, P.; Weber, C.; Milovich, J.; Divol, L.; Hartouni, E.; Bionta, R.; Hahn, K.; Schlossberg, D.; Moore, A.; Gatu Johnson, Maria
Inertial confinement fusion ignition requires high inflight shell velocity, good energy coupling between the hotspot and shell, and high areal-density at peak compression. Three-dimensional asymmetries caused by imperfections in the drive symmetry or target can grow and damage the coupling and confinement. Recent high-yield experiments have shown that low-mode asymmetries are a key degradation mechanism and contribute to variability. We show the experimental signatures and impacts of asymmetry change with increasing implosion yield given the same initial cause. This work has implications for improving robustness to a key degradation in ignition experiments.
Submitted for publication in Physical Review E
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158601</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictive modeling of NSTX discharges with the updated multi-mode anomalous transport module</title>
<link>https://hdl.handle.net/1721.1/158600</link>
<description>Predictive modeling of NSTX discharges with the updated multi-mode anomalous transport module
Rafiq, Tariq; Wilson, Christopher; Clauser, Cesar F.; Schuster, Eugenio; Weiland, Jan; Anderson, Johan; Kaye, Stanley M.; Pankin, Alexei; LeBlanc, Benoit P.; Bell, Ronald E.
The objective of this study is twofold: firstly, to demonstrate the consistency between the anomalous transport results produced by the updated Multi-Mode Model (MMM) version 9.04 and those obtained through gyrokinetic simulations; and secondly, to showcase MMM's ability to predict electron and ion temperature profiles in low aspect ratio, high beta NSTX discharges. MMM encompasses a range of transport mechanisms driven by electron and ion temperature gradients, trapped electrons, kinetic ballooning, peeling, microtearing, and drift resistive inertial ballooning modes. These modes within MMM are verified against corresponding gyrokinetic results. The modes that potentially contribute to ion thermal transport are stable in MMM, aligning with both experimental data and findings from linear CGYRO simulations. The isotope effects on these modes are also studied, and higher mass is found to be stabilizing, consistent with the experimental trend. The electron thermal power across the flux surface is computed within MMM and compared to experimental measurements and nonlinear CGYRO simulation results. Specifically, the electron temperature gradient modes (ETGM) within MMM account for 2.0 MW of thermal power, consistent with experimental findings. It is noteworthy that the ETGM model requires approximately 5.0 ms of computation time on a standard desktop, while nonlinear CGYRO simulations necessitate 8.0 hours on 8000 cores. MMM proves to be highly computationally efficient, a crucial attribute for various applications, including real-time control, tokamak scenario optimization, and uncertainty quantification of experimental data.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158600</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Predictions of improved confinement in SPARC via energetic particle turbulence stabilization</title>
<link>https://hdl.handle.net/1721.1/158599</link>
<description>Predictions of improved confinement in SPARC via energetic particle turbulence stabilization
Di Siena, A.; Rodriguez Fernandez, Pablo; Howard, Nathan T.; Bañón Navarro, A.; Bilato, R.; Görler, T.; Poli, E.; Merlo, G.; Wright, John C.; Greenwald, M.; Jenko, F.
The recent progress in high-temperature superconductor technologies has led to the design and construction of SPARC, a compact tokamak device expected to reach plasma breakeven with up to 25 MW of external ion cyclotron resonant heating (ICRH) power. This manuscript presents local (flux-tube) and radially global gyrokinetic GENE (Jenko et al 2000 Phys. Plasmas 7 1904) simulations for a reduced-field and current H-mode SPARC scenario showing that supra-thermal particles - generated via ICRH - strongly suppress ion-scale turbulent transport by triggering a fast ion-induced anomalous transport barrier (F-ATB). The trigger mechanism is identified as a wave-particle resonant interaction between the fast particle population and plasma micro-instabilities (Di Siena et al 2021 Phys. Rev. Lett. 125 025002). By performing a series of global simulations employing different profiles for the thermal ions, we show that the fusion gain of this SPARC scenario could be substantially enhanced, by up to ∼80%, by exploiting this fast ion stabilizing mechanism. A study is also presented to further optimize the energetic particle profiles, thus possibly leading experimentally to an even more significant fusion gain.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158599</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Experiments on excitation of Alfvén eigenmodes by alpha-particles with bump-on-tail distribution in JET DTE2 plasmas</title>
<link>https://hdl.handle.net/1721.1/158598</link>
<description>Experiments on excitation of Alfvén eigenmodes by alpha-particles with bump-on-tail distribution in JET DTE2 plasmas
Sharapov, S.E.; Oliver, H.J.C.; Garcia, J.; Keeling, D.L.; Dreval, M.; Goloborod'ko, V.; Kazakov, Ye. O.; Kiptily, V.G.; Stancar, Z.; Bonofiglo, P.J.; Coelho, R.; Craciunescu, T.; Ferreira, J.; Figueiredo, A.; Fil, N.; Fitzgerald, M.; Nabais, F.; Nocente, M.; Puglia, P.G.; Rivero-Rodriguez, J.; Rodrigues, P.; Salewski, M.; Tinguely, R. Alex; Zakharov, L.E.; JET contributors
Dedicated experiments were performed in JET DTE2 plasmas for obtaining an α-particle bump-on-tail (BOT) distribution aimed at exciting Alfvén Eigenmodes (AEs). NBI-only heating with modulated power was used so that fusion-born α-particles were the only ions present in the MeV energy range in these DT plasmas. The beam power modulation on a time scale shorter than the α-particle slowing-down time was chosen for modulating the α-particle source and thus sustaining a BOT in the α-particle distribution. High-frequency modes in the TAE frequency range and multiple short-lived modes in a wider frequency range have been detected in these DT discharges with interferometry, soft X-ray cameras, and reflectometry. The modes observed were localised close to the magnetic axis, and were not seen in the Mirnov coils. Analysis with the TRANSP and Fokker-Planck FIDIT codes confirms that α-particle distributions with a bump-on-tail in energy were achieved during some time intervals in these discharges, though no clear correlation was found between the times of the high-frequency mode excitation and the BOT time intervals. The combined MHD and kinetic modelling studies show that the high-frequency mode in the TAE frequency range is best fitted with a TAE of toroidal mode number n = 9. This mode is driven mostly by the on-axis beam ions, while the smaller drive due to the pressure gradient of α-particles allows overcoming the marginal stability and exciting the mode [H.J.C. Oliver et al. Toroidal Alfvén eigenmodes observed in low power JET deuterium-tritium plasmas, to be submitted to Nuclear Fusion (2023)]. The observed multiple short-lived modes in a wider frequency range are identified as the on-axis kinetic Alfvén eigenmodes predicted in [M.N. Rosenbluth, P.H. Rutherford, Phys. Rev. Lett. 34 (1975) 1428].
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158598</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference of Main Ion Particle Transport Coefficients with Experimentally Constrained Neutral Ionization during Edge Localized Mode Recovery on DIII-D</title>
<link>https://hdl.handle.net/1721.1/158597</link>
<description>Inference of Main Ion Particle Transport Coefficients with Experimentally Constrained Neutral Ionization during Edge Localized Mode Recovery on DIII-D
Rosenthal, Aaron M.; Hughes, Jerry W.; Laggner, Florian M.; Odstrcil, Tomas; Bortolon, Alessandro; Wilks, Theresa M.; Sciortino, Francesco
The plasma and neutral density dynamics after an Edge Localized Mode (ELM) are investigated and utilized to infer the plasma transport coefficients for the density pedestal. The LLAMA diagnostic provides sub-millisecond profile measurements of the ionization and neutral density and shows significant poloidal asymmetries in both. Exploiting the absolute calibration of the LLAMA diagnostic allows quantitative comparison to the electron and main ion density profiles determined by charge-exchange recombination, Thomson scattering and interferometry. Separation of diffusion and convection contributions to the density pedestal transport is investigated through flux gradient methods and time-dependent forward modeling with Bayesian inference by adaptation of the Aurora transport code and IMPRAD framework to main ion particle transport. Both methods suggest time-dependent transport coefficients and are consistent with an inward particle pinch on the order of 1 m s^{−1} and diffusion coefficient of 0.05 m^2 s^{−1} in the steep density gradient region of the pedestal. While it is possible to recreate the experimentally observed phenomena with no pinch in the pedestal, doing so requires low diffusion in the core and high outward convection in the near scrape-off layer.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158597</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Examination of stiff ion temperature gradient mode physics in simulations of DIII-D H-mode transport</title>
<link>https://hdl.handle.net/1721.1/158596</link>
<description>Examination of stiff ion temperature gradient mode physics in simulations of DIII-D H-mode transport
Holland, C.; Luce, T.; Grierson, B.A.; Smith, S.P.; Marinoni, Alessandro; Burrell, K.H.; Petty, C.C.; Bass, E.M.
A systematic evaluation of gyrokinetic and gyrofluid model predictions of ion temperature gradient (ITG) stability and transport using parameters from DIII-D high confinement mode (H-mode) plasmas has been performed. The nonlinear CGYRO code is used to make the gyrokinetic predictions, and the quasilinear TGLF model for the corresponding gyrofluid predictions. The assessments are made at three radii (normalized toroidal flux ρtor = 0.4, 0.55, and 0.7) in three different plasma scenarios with varying levels of neutral beam heating and torque. For each of the nine cases (3 radii × 3 scenarios) considered, ITG turbulence is found to be the dominant long-wavelength instability and transport mechanism. The inclusion of both transverse magnetic fluctuations and dynamic fast beam ions is stabilizing for all cases considered, with strongest effects seen at ρtor = 0.4 where the fast ion population and normalized plasma pressure β = 2μ0nT/B2 are highest. The further inclusion of parallel magnetic fluctuations does not have a meaningful impact on the ITG turbulence in these scenarios, but does destabilize (in combination with fast ions) new high-frequency instabilities at ρtor = 0.4 in the high power scenarios. In each case the linear and nonlinear ITG critical gradients are predicted to be lower than the measured ITG scale lengths and their associated uncertainties. Inclusion of equilibrium flow shear in the transport predictions generally leads to an upshift in effective critical gradient rather than a qualitative change in the predicted stiffness, with stronger responses typically seen in the gyrokinetic predictions than in the gyrofluid results. However, in most cases these upshifted gradients still remain below the measured values and their uncertainties.
Although the predicted critical gradients are below the measured gradients, both models predicted flux-matching gradients consistent with measured values in six of the nine cases considered, with no clear systematic over- or underprediction. Thus, while the experimental ion temperature profiles do not appear to be closely pinned to the ITG critical gradient, both gyrokinetic and gyrofluid models are able to match the measured gradients reasonably well in most cases.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158596</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Shattered pellet penetration in low and high energy plasmas on DIII-D</title>
<link>https://hdl.handle.net/1721.1/158595</link>
<description>Shattered pellet penetration in low and high energy plasmas on DIII-D
Raman, R.; Sweeney, Ryan; Moyer, R.A.; Eidietis, N.W.; Shiraki, D.; Herfindal, J.L.; Sachdev, J.; Hollmann, E.M.; Jardin, S.C.; Baylor, L.R.; Wilcox, R.; Carlstrom, T.; Osborne, T.; Eldon, D.; Menard, J.E.; Luncford, R.; Grierson, B.
Shattered pellet injection (SPI) has been adopted as the baseline disruption mitigation system for ITER, as the radiative payload penetration into DIII-D plasmas from SPI is superior to that achieved with the massive gas injection (MGI) method. Because of the substantial differences in the energy content of ITER plasmas and those in present experiments, reliable 3D MHD modeling, benchmarked against present experiments, is needed to project to ITER plasmas. In support of these needs, the depth of SPI fragment penetration in DIII-D plasmas was investigated by injecting SPI into two discharges with vastly different energy content and pedestal height. 400 Torr-L pure Ne fragmented pellets at a velocity of about 200 m s−1 were injected into a 0.2 MJ L-mode discharge and a 2 MJ super H-mode discharge. Results show deep penetration of SPI fragments into low-energy plasmas in DIII-D. SPI fragment penetration is reduced as the plasma energy content increases, with some discharges exhibiting penetration that is confined to the outer regions of the plasma. The injected SPI fragments are also spread out over a distance of about 20 cm, which results in some fragments arriving near the end of the thermal quench or after it is over.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158595</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deficiencies in compression and yield in x-ray-driven implosions</title>
<link>https://hdl.handle.net/1721.1/158594</link>
<description>Deficiencies in compression and yield in x-ray-driven implosions
Thomas, C.A.; Campbell, E.M.; Baker, K.L.; Casey, D.T.; Hohenberger, M.; Kritcher, A.L.; Spears, B.K.; Khan, S.F.; Nora, R.; Woods, D.T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Ho, D.D.; Clark, D.; Bachmann, B.; Benedetti, L.R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.N.; Grim, G.; Hatarik, R.; Izumi, N.; Kyrala, G.; Ma, T.; Millot, M.; Nagel, S.R.; Patel, P.K.; Yeamans, C.; Nikroo, A.; Tabak, M.; Gatu Johnson, Maria; Volegov, P.L.; Finnegan, S.M.
This paper analyzes x-ray–driven implosions that are designed to be less sensitive to 2-D and 3-D effects in hohlraum and capsule physics. Key performance metrics including the burn-averaged ion temperature, hot-spot areal density, and fusion yield are found to agree with simulations where the design adiabat (internal pressure) is multiplied by a factor of 1.4. These results motivate the development of a simple model for interpreting experimental data, which is then used to quantify how improvements in compression could help achieve ignition.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Sun, 01 Jul 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158594</guid>
<dc:date>2018-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Oblate electron holes are not attributable to anisotropic shielding</title>
<link>https://hdl.handle.net/1721.1/158593</link>
<description>Oblate electron holes are not attributable to anisotropic shielding
Hutchinson, Ian H.
The influence of shielding mechanisms on the ratio of perpendicular to parallel scale lengths of multidimensional plasma electron hole equilibria is analyzed theoretically and computationally. It is shown that the "gyrokinetic" model, invoking perpendicular polarization, is based on a misunderstanding and cannot explain the observational trend that greater transverse extent accompanies lower magnetic field. Instead, the potential in the wings of the hole, outside the region of trapped-electron depletion, has isotropic shielding giving $\phi\propto {\rm e}^{-r/L}/r$, with the shielding length $L$ equal to the Debye length for holes much slower than the electron thermal speed. Particle-in-cell simulations confirm the analysis. Anisotropy of the trapped-electron charge distribution must therefore instead underlie the oblate shape of electron holes.
Submitted for publication in Physics of Plasmas
</description>
<pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158593</guid>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Alpha heating of indirect-drive layered implosions on the National Ignition Facility</title>
<link>https://hdl.handle.net/1721.1/158592</link>
<description>Alpha heating of indirect-drive layered implosions on the National Ignition Facility
Baker, K.L.; MacLaren, S.; Jones, O.; Spears, B.K.; Patel, P.K.; Nora, R.; Divol, L.; Landen, O.L.; Anderson, G.J.; Gaffney, J.; Kruse, M.; Hurricane, O.A.; Callahan, D.A.; Christopherson, A.R.; Salmonson, J.; Hartouni, E.P.; Döppner, T.; Dewald, E.; Tommasini, R.; Thomas, C.A.; Weber, C.; Clark, D.; Casey, D.T.; Hohenberger, M.; Khan, S.; Woods, T.; Milovich, J.L.; Berger, R.L.; Strozzi, D.; Kritcher, A.; Bachmann, B.; Benedetti, R.; Bionta, R.; Celliers, P.M.; Fittinghoff, D.; Hatarik, R.; Izumi, N.; Gatu Johnson, Maria; Kyrala, G.; Ma, T.; Meaney, K.; Millot, M.; Nagel, S.R.; Pak, A.; Volegov, P.L.; Yeamans, C.; Wilde, C.
In order to understand how close current layered implosions in indirect-drive inertial confinement fusion are to ignition, it is necessary to measure the level of alpha heating present. To this end, pairs of experiments were performed that consisted of a low-yield tritium–hydrogen–deuterium (THD) layered implosion and a high-yield deuterium–tritium (DT) layered implosion to experimentally validate current simulation-based methods of determining yield amplification. The THD capsules were designed to simultaneously reduce DT neutron yield (alpha heating) and maintain hydrodynamic similarity with the higher yield DT capsules. The ratio of the yields measured in these experiments then allowed the alpha heating level of the DT layered implosions to be determined. The level of alpha heating inferred is consistent with fits to simulations expressed in terms of experimentally measurable quantities and enables us to infer the level of alpha heating in recent high-performing implosions.
Submitted for publication in Physical Review E
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158592</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uncovering turbulent plasma dynamics via deep learning from partial observations</title>
<link>https://hdl.handle.net/1721.1/158591</link>
<description>Uncovering turbulent plasma dynamics via deep learning from partial observations
Mathews, Abhilash; Francisquez, M.; Hughes, Jerry W.; Hatch, D.R.; Zhu, B.; Rogers, B.N.
One of the most intensely studied aspects of magnetic confinement fusion is edge plasma turbulence, which is critical to reactor performance and operation. Drift-reduced Braginskii two-fluid theory has for decades been widely applied to model boundary plasmas with varying success. Towards better understanding edge turbulence in both theory and experiment, we demonstrate that a novel multi-network physics-informed deep learning framework constrained by partial differential equations can accurately learn turbulent fields consistent with the two-fluid theory from partial observations of electron pressure, which is not otherwise possible using conventional equilibrium models. This technique presents a novel paradigm for the advanced design of plasma diagnostics and validation of magnetized plasma turbulence theories in challenging thermonuclear environments.
Submitted for publication in Physical Review E
</description>
<pubDate>Thu, 01 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158591</guid>
<dc:date>2021-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proof-of-Principle Experiment on the Dynamic Shell Formation for Inertial Confinement Fusion</title>
<link>https://hdl.handle.net/1721.1/158590</link>
<description>Proof-of-Principle Experiment on the Dynamic Shell Formation for Inertial Confinement Fusion
Igumenshchev, I.V.; Theobald, W.; Stoeckl, C.; Shah, R.C.; Bishel, D.T.; Goncharov, V.N.; Bonino, M.J.; Campbell, E.M.; Ceurvorst, L.; Chin, D.A.; Collins, T.J.B.; Fess, S.; Harding, D.R.; Sampat, S.; Shaffer, N.R.; Shvydky, A.; Smith, E.A.; Trickey, W.T.; Waxer, L.J.; Colaïtis, A.; Liotard, R.; Adrian, Patrick J.; Atzeni, S.; Barbato, F.; Savino, L.; Alfonso, N.; Haid, A.; Do, Mi
In the dynamic-shell (DS) concept [V. N. Goncharov et al., Novel Hot-Spot Ignition Designs for Inertial Confinement Fusion with Liquid-Deuterium-Tritium Spheres, Phys. Rev. Lett. 125, 065001 (2020).] for laser-driven inertial confinement fusion, the deuterium-tritium fuel is initially in the form of a homogeneous liquid inside a wetted-foam spherical shell. This fuel is ignited using a conventional implosion, which is preceded by an initial compression of the fuel followed by its expansion and dynamic formation of a high-density fuel shell with a low-density interior. This Letter reports on a scaled-down, proof-of-principle experiment on the OMEGA laser demonstrating, for the first time, the feasibility of DS formation. A shell is formed by convergent shocks launched by laser pulses at the edge of a plasma sphere, with the plasma itself formed as a result of laser-driven compression and relaxation of a surrogate plastic-foam ball target. Three x-ray diagnostics, namely, 1D spatially resolved self-emission streaked imaging, 2D self-emission framed imaging, and backlighting radiography, have shown good agreement with the predicted evolution of the DS and its stability to low Legendre mode perturbations introduced by laser irradiation and target asymmetries.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158590</guid>
<dc:date>2022-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Flow in Thin Shell Implosions and Explosions</title>
<link>https://hdl.handle.net/1721.1/158589</link>
<description>Energy Flow in Thin Shell Implosions and Explosions
Ruby, J.J.; Rygg, J.R.; Chin, D.A.; Gaffney, J.A.; Adrian, Patrick J.; Forrest, C.J.; Glebov, Y.Yu.; Kabadi, Neel V.; Nilson, P.M.; Ping, Y.; Stoeckl, C.; Collins, G.W.
Energy flow and balance in convergent systems beyond petapascal energy densities control the fate of late-stage stars and the potential for controlling thermonuclear inertial fusion ignition. Time-resolved x-ray self-emission imaging combined with a Bayesian inference analysis is used to describe the energy flow and the potential information stored in the rebounding spherical shock at 0.22 petapascals (2.2 Gbar, or billions of atmospheres of pressure). This analysis, together with a simple mechanical model, describes the trajectory of the shell and the time history of the pressure at the fuel-shell interface, ablation pressure, and energy partitioning including kinetic energy of the shell and internal energy of the fuel. The techniques used here provide a fully self-consistent uncertainty analysis of integrated implosion data, a thermodynamic-path independent measurement of pressure in the petapascal range, and can be used to deduce the energy flow in a wide variety of implosion systems to petapascal energy densities.
Submitted for publication in Physical Review Letters
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158589</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Constraining Physical Models at Gigabar Pressures</title>
<link>https://hdl.handle.net/1721.1/158588</link>
<description>Constraining Physical Models at Gigabar Pressures
Ruby, J.J.; Rygg, J.R.; Chin, D.A.; Gaffney, J.A.; Adrian, Patrick J.; Bishel, D.; Forrest, C.J.; Glebov, Y.Yu.; Kabadi, Neel V.; Nilson, P.M.; Ping, Y.; Stoeckl, C.; Collins, G.W.
High-energy-density (HED) experiments in convergent geometry are able to test physical models at pressures beyond hundreds of millions of atmospheres. The measurements from these experiments are generally highly integrated and require unique analysis techniques to procure quantitative information. This work describes a methodology to constrain the physics in convergent HED experiments by adapting the methods common to many other fields of physics. As an example, a mechanical model of an imploding shell is constrained by data from a thin-shelled direct-drive exploding-pusher experiment on the OMEGA Laser System using Bayesian inference, resulting in the reconstruction of the shell dynamics and energy transfer during the implosion. The model is tested by analyzing synthetic data from a 1-D hydrodynamics code and is sampled using a Markov chain Monte Carlo to generate the posterior distributions of the model parameters. The goal of this work is to demonstrate a general methodology that can be used to draw conclusions from a wide variety of HED experiments.
Submitted for publication in Physical Review E
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158588</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prospects of core–edge integrated no-ELM and small-ELM scenarios for future fusion devices</title>
<link>https://hdl.handle.net/1721.1/158587</link>
<description>Prospects of core–edge integrated no-ELM and small-ELM scenarios for future fusion devices
Viezzer, E.; Austin, M.E.; Bernert, M.; Burrell, K.H.; Cano-Megias, P.; Chen, X.; Cruz-Zabala, D.J.; Coda, S.; Faitsch, M.; Fevrier, O.; Gil, L.; Giroud, C.; Happel, T.; Harrer, G.F.; Hubbard, Amanda E.; Hughes, Jerry W.; Kallenbach, A.; Labit, B.; Merle, A.; Meyer, H.; Paz-Soldan, C.; Oyola, P.; Sauter, O.; Siccinio, M.; Silvagni, D.; Solano, E.R.; EUROfusion WPTE and ASDEX Upgrade Teams
One of our grand challenges towards fusion energy is the achievement of a high-performance plasma core coupled to a boundary solution. The high confinement mode (H-mode) provides such a high-performance fusion core due to the build-up of an edge transport barrier leading to a pedestal. However, it usually features type-I edge localized modes (ELMs), which pose a threat to long-duration plasma operation in future fusion devices as they induce large energy fluences onto the plasma facing components and typically are projected to damage the first wall. For future fusion devices, the integration of a stationary no-ELM regime with a power exhaust solution is indispensable. Several no-ELM and small-ELM regimes have extended their operational space in the past years, with the ultimate goal of providing an alternative core-edge solution to ITER and EU-DEMO. Prominent no-ELM or small-ELM alternatives include the I-mode, QH-mode, EDA H-mode, quasi-continuous exhaust (QCE) and ‘grassy’ ELM regimes, X-point radiator scenarios and negative triangularity L-mode. The state-of-the-art, including access conditions and main signatures, of these alternative regimes is reviewed. Many of these regimes partly match the operational space of ITER and EU-DEMO; however, knowledge gaps remain. Besides compatibility with divertor detachment and a radiative mantle, these include extrapolations to high Q operations, low core collisionality, high Greenwald fractions, and impurity transport, amongst others. The knowledge gaps and possible strategies to close these gaps to show their applicability to ITER and EU-DEMO are discussed.
Submitted for publication in Nuclear Materials and Energy
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158587</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Scaling laws for electron kinetic effects in tokamak scrape-off layer plasmas</title>
<link>https://hdl.handle.net/1721.1/158586</link>
<description>Scaling laws for electron kinetic effects in tokamak scrape-off layer plasmas
Power, D.; Mijin, S.; Wigram, Mike; Militello, F.; Kingham, R.
Tokamak edge (scrape-off layer (SOL)) plasmas can exhibit non-local transport in the direction parallel to the magnetic field due to steep temperature gradients. This effect, along with its consequences, has been explored at equilibrium for a range of conditions, from sheath-limited to detached, using the 1D kinetic electron code SOL-KiT, where the electrons are treated kinetically and compared to a self-consistent fluid model. Line-averaged suppression of the kinetic heat flux (compared to Spitzer-Härm) of up to 50% is observed, contrasting with up to 98% enhancement of the sheath heat transmission coefficient, γe. Simple scaling laws in terms of basic SOL parameters for both effects are presented. By implementing these scalings as corrections to the fluid model, we find good agreement with the kinetic model for target electron temperatures. It is found that the strongest kinetic effects in γe are observed at low-to-intermediate collisionalities, and tend to increase (keeping upstream collisionality fixed) at increasing upstream densities and temperatures. On the other hand, the heat flux suppression is found to increase monotonically as upstream collisionality decreases. The conditions simulated encompass collisionalities relevant to current and future tokamaks.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158586</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Editorial Foreword: Special Issue of Papers arising from the 18th International Workshop on H-mode Physics and Transport Barriers (Princeton, USA, 2022)</title>
<link>https://hdl.handle.net/1721.1/158585</link>
<description>Editorial Foreword: Special Issue of Papers arising from the 18th International Workshop on H-mode Physics and Transport Barriers (Princeton, USA, 2022)
Hughes, Jerry W.
This Special Issue of Nuclear Fusion collects papers from the 18th International Workshop on H-mode Physics and Transport Barriers, known more commonly as the 'H-mode Workshop', which was jointly hosted from 20–23 September 2022 by Princeton Plasma Physics Laboratory, Princeton University, Massachusetts Institute of Technology and General Atomics. The workshop was held as a hybrid event, with the on-site activities based at Princeton's Andlinger Center in Princeton, New Jersey, USA. It was the latest in a series of nominally biennial workshops beginning in 1987 and which have been hosted in a number of world locations (San Diego, Gut Ising, Abingdon, Naka, Princeton, Kloster Seeon, Oxford, Toki, St. Petersburg, Tsukuba, Fukuoka, Garching, Shanghai).
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158585</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unitary Quantum Lattice Simulations for Maxwell Equations in Vacuum and in Dielectric Media</title>
<link>https://hdl.handle.net/1721.1/158584</link>
<description>Unitary Quantum Lattice Simulations for Maxwell Equations in Vacuum and in Dielectric Media
Vahala, George; Vahala, Linda; Soe, Min; Ram, Abhay K.
Utilizing the similarity between the spinor representation of the Dirac equation and the Maxwell equations that has been recognized since the early days of relativistic quantum mechanics, a quantum lattice algorithm (QLA) representation of unitary collision-stream operators of Maxwell’s equations is derived for both homogeneous and inhomogeneous media.  A second-order accurate 4-spinor scheme is developed and tested successfully for two dimensional (2D) propagation of a Gaussian pulse in a uniform medium, while normal (1D) incidence of an electromagnetic Gaussian wave packet onto a dielectric interface requires 8-component spinors.  In particular, the well-known phase change, field amplitudes and profile widths are recovered by the QLA asymptotic profiles without the imposition of electromagnetic boundary conditions at the interface.  The QLA simulations yield the time-dependent electromagnetic fields as the wave packet enters and straddles the dielectric boundary.  QLA involves unitary interleaved non-commuting collision and streaming operators that can be coded onto a quantum computer – the non-commutation being the very reason why one perturbatively recovers the Maxwell equations.
Submitted for publication in Journal of Physics
</description>
<pubDate>Fri, 01 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158584</guid>
<dc:date>2020-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial development of the low-density high-entropy alloy Al10Cr20Mo20Nb20Ti20Zr10 having gigapascal strength at 1000 °C</title>
<link>https://hdl.handle.net/1721.1/158583</link>
<description>Combinatorial development of the low-density high-entropy alloy Al10Cr20Mo20Nb20Ti20Zr10 having gigapascal strength at 1000 °C
Waseem, Owais Ahmed; Ryu, Ho Jin
A pseudo-ternary combinatorial approach to AlxTayVzCr20Mo20Nb20Ti20Zr10 revealed the composition of refractory high-entropy alloys characterized by outstanding high-temperature yield strength. Compression testing of Al10Cr20Mo20Nb20Ti20Zr10 disclosed a yield strength of 1206 MPa at 1000 °C, one of the highest values reported for refractory high-entropy alloys. Ta-containing AlxTayVzCr20Mo20Nb20Ti20Zr10 presented a lower high-temperature strength, while characterization of Al10Cr20Mo20Nb20Ti20Zr10 showed C14 Al2Zr- and NbCr2-type hexagonal Laves intermetallics, with a hardness of ∼10.5 GPa (higher than that of the body-centered cubic phase, at ∼9 GPa). The stronger bonds between Al and transition metals appear to give rise to extraordinary load-bearing capabilities in Al10Cr20Mo20Nb20Ti20Zr10 at high temperatures. Owing to this rare combination of relatively low density (6.96 g/cm3) and remarkable high-temperature strength, Al10Cr20Mo20Nb20Ti20Zr10 has emerged as a potential material for high-temperature structural applications.
Submitted for publication in Journal of Alloys and Compounds
</description>
<pubDate>Sun, 01 Mar 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158583</guid>
<dc:date>2020-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Prediction of DIII-D Pedestal Structure from Externally Controllable Parameters</title>
<link>https://hdl.handle.net/1721.1/158582</link>
<description>Prediction of DIII-D Pedestal Structure from Externally Controllable Parameters
Zeger, Emi U.; Laggner, Florian M.; Bortolon, Alessandro; Rea, Cristina; Meneghini, Orso; Saarelma, Samuli; Sammuli, Brian S.; Smith, Sterling P.; Zhao, Jinjin
The sharp increase of pressure at the edge of a high confinement mode (H-mode) plasma, the pedestal, strongly impacts overall plasma performance. Predicting the pedestal is a necessity to control and optimize tokamak operations. An experimental data-driven machine learning (ML) approach is presented that predicts the pedestal heights and widths of electron density (ne) and electron temperature (Te) profiles as well as the separatrix ne from externally controllable parameters such as the plasma shape, heating method and power, and gas puff rate and integrated gas puff. The OMFIT framework was used with DIII-D data to efficiently, robustly, and automatically build a database of pedestal parameters to train machine learning models.  Database creation was enabled by the search engine tool for DIII-D data, TokSearch, which parallelizes data fetching, enabling fast searches through basic signals of thousands of DIII-D shots and selection of relevant time intervals. Principal Component Analysis (PCA) separated the database into three clusters that represent classes of plasma shapes that are regularly used in DIII-D. The most important parameters for setting the pedestal structure were plasma current (Ip), toroidal magnetic field (Bφ), neutral beam heating power (PNBI) and shaping quantities. The Deep Jointly Informed Neural Networks (DJINN) algorithm was applied to identify suitable neural network (NN) architectures that appropriately capture the features of the pedestal database. Separate NNs were implemented for each pedestal parameter, and ensembling methods were used to improve the prediction accuracy and allowed estimation of the prediction uncertainty. The pedestal predictions of the test dataset lie within the measurement uncertainties of the pedestal parameters. The NN outperformed simple Linear Regression (LR) analysis, indicating non-linear dependencies in the pedestal structure. 
The presented achievements illustrate a promising path for future research, using feature extraction to infer experimental trends and thereby improve pedestal models as well as deploying NN for a fast pedestal prediction in DIII-D scenario development.
Submitted for publication in IEEE Transactions on Plasma Science
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158582</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Effects of Partial Electrical Connectors on HTS Coils: A Case Study of Insulated Coil and Paraffin-Impregnated NI Coil</title>
<link>https://hdl.handle.net/1721.1/158581</link>
<description>Exploring the Effects of Partial Electrical Connectors on HTS Coils: A Case Study of Insulated Coil and Paraffin-Impregnated NI Coil
Lee, Wooseung; Yang, Hongmin; Park, Dongkeun; Hwang, Young Jin; Im, Chaemin; Kim, Jaemin; Hahn, Seungyong; Lee, SangGap
This study explores the influence of the Partial-Electrical-Connector (PEC) on High-Temperature Superconducting (HTS) coils. The PEC method emerges as a promising alternative to the conventional No-Insulation (NI) technique, establishing a direct current path between turns through partially soldered metal foils, such as copper, on the coil surface. This innovative approach achieves comparable performance to NI without necessitating a complete path between turns, offering advantages even in insulated or paraffin-impregnated coils. For the investigation, an insulated HTS coil and two NI HTS coils with paraffin impregnation are prepared. These coils undergo testing under overcurrent conditions, and their performance is compared with PEC-applied samples. The results demonstrate that coils with PEC application exhibit definitive current bypass characteristics. This finding highlights the potential of PEC to effectively create current bypass paths in both insulated and paraffin-impregnated coils.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158581</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Conceptual Design of a Portable, Solid-Nitrogen-Cooled 0.5-T/560-mm Point-of-Care MRI Magnet</title>
<link>https://hdl.handle.net/1721.1/158580</link>
<description>Conceptual Design of a Portable, Solid-Nitrogen-Cooled 0.5-T/560-mm Point-of-Care MRI Magnet
Park, Dongkeun; Bascuñán, Juan; Lee, Wooseung; Iwasa, Yukikazu
We describe the conceptual design of a portable, liquid-helium-free, all-REBCO, 0.5-T/560-mm point-of-care magnetic resonance imaging (MRI) magnet. It is free from an external power supply and a refrigeration system during operation. In our portable MRI magnet, we use a detachable “cryocirculator” that circulates cold working fluid in a closed circuit and, most importantly for portability, can be readily coupled to or decoupled from the magnet; in contrast, a conventional cryocooler is mechanically attached to the magnet. Another unique feature of our system is a volume of solid nitrogen (SN2) in the cold chamber that adds enough thermal mass to the magnet in the 30–36-K operating temperature range, enabling it to maintain its field for a period of ≥10 hours for this system, long enough for this portable MRI system, uncoupled from its cryocirculator, to perform its mission before it needs recooling.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158580</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches</title>
<link>https://hdl.handle.net/1721.1/158579</link>
<description>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches
Lee, Wooseung; Park, Dongkeun; Choi, Yoonhyuck; Li, Yi; Bascuñán, Juan; Iwasa, Yukikazu
The No-Insulation (NI) winding provides intrinsic bypassing current paths that enable self-protection from overheating. The self-protection of the NI coil is one of the most promising protection techniques for high-field high-temperature superconductor (HTS) magnet applications. Since the additional paths are valid for an HTS magnet with a thinner matrix, the self-protection mechanism is applicable even for a higher-current-density magnet with reduced matrix thickness inside the HTS tape. However, reducing the matrix can cause damage to the magnet by producing excessive heat during a quench. This research introduces a new modeling method to investigate the hot-spot characteristics of the REBCO NI pancake coil. The model is also validated against a sample NI HTS coil experiment. The radial-direction Normal Zone Propagation (NZP) velocity of the sample coil is estimated based on the suggested model. The calculated radial-direction NZP velocity is applied to calculate the center-field drop of the NI HTS coil, and the result matches the experimental result well. We also introduce one example of the model's applications: the maximum current density that will not exceed a given reference temperature under adiabatic cooling conditions is estimated using the model.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158579</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches</title>
<link>https://hdl.handle.net/1721.1/158578</link>
<description>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches
Lee, Wooseung; Park, Dongkeun; Choi, Yoonhyuck; Li, Yi; Bascuñán, Juan; Iwasa, Yukikazu
The No-Insulation (NI) winding provides intrinsic bypassing current paths that enable self-protection from overheating. The self-protection of the NI coil is one of the most promising protection techniques for high-field high-temperature superconductor (HTS) magnet applications. Since the additional paths are valid for an HTS magnet with a thinner matrix, the self-protection mechanism is applicable even for a higher-current-density magnet with reduced matrix thickness inside the HTS tape. However, reducing the matrix can cause damage to the magnet by producing excessive heat during a quench. This research introduces a new modeling method to investigate the hot-spot characteristics of the REBCO NI pancake coil. The model is also validated against a sample NI HTS coil experiment. The radial-direction Normal Zone Propagation (NZP) velocity of the sample coil is estimated based on the suggested model. The calculated radial-direction NZP velocity is applied to calculate the center-field drop of the NI HTS coil, and the result matches the experimental result well. We also introduce one example of the model's applications: the maximum current density that will not exceed a given reference temperature under adiabatic cooling conditions is estimated using the model.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158578</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of a Magnet and Gradient Coils for a Tabletop Liquid-Helium-Free, Persistent-Mode 1.5-T MgB2 Osteoporosis MRI</title>
<link>https://hdl.handle.net/1721.1/158577</link>
<description>Design of a Magnet and Gradient Coils for a Tabletop Liquid-Helium-Free, Persistent-Mode 1.5-T MgB2 Osteoporosis MRI
Park, Dongkeun; Choi, Yoonhyuck; Li, Yi; Lee, Wooseung; Tanaka, Hiromi; Bascuñán, Juan; Ackerman, Jerome L.; Tanaka, Hideki; Iwasa, Yukikazu
We have finalized the design of a full-scale tabletop 1.5-T/90-mm MgB2 finger MRI magnet system for osteoporosis screening based on our preliminary test results of small coils and superconducting joints. The magnet will operate in persistent mode at 10 K with an additional 5 K temperature margin. The magnet design, which includes six main coils and an iron shield, satisfies the required specifications of a field intensity of 1.5 T, homogeneity of ≤5 ppm over a 20-mm diameter of spherical volume, and a fringe field of ≤5 gauss at 0.5 m in radius from the magnet center. An active protection method using external heaters will be applied to prevent a local hot spot in the MgB2 windings from overheating when a quench occurs. Actively shielded transverse and axial gradient coils for this tabletop osteoporosis MRI, having primary and shield coil pairs, are designed to minimize stray fields that can induce eddy currents on nearby metal surfaces and thus imaging artifacts. This paper covers the design and analysis of: 1) the main coils and iron shield; 2) the coil former; 3) quench protection; and 4) the active-shield gradient coils. We also discuss design changes to the cryostat and the equipment plan for the overall system. The magnet system will be completed and then equipped with other MRI hardware components, including an in-house-made gradient coil assembly and RF coils, for demonstration of 1.5-T finger MRI in 2020.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 Apr 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158577</guid>
<dc:date>2019-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Editorial: Using high energy density plasmas for nuclear experiments relevant to nuclear astrophysics</title>
<link>https://hdl.handle.net/1721.1/158576</link>
<description>Editorial: Using high energy density plasmas for nuclear experiments relevant to nuclear astrophysics
Gatu Johnson, Maria; Hale, Gerald; Paris, Mark; Wiescher, Michael; Zylstra, Alex
Editorial on the Research Topic Using high energy density plasmas for nuclear experiments relevant to nuclear astrophysics
Submitted for publication in Frontiers in Physics
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158576</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Simulations of divertor heat flux width using transport code with cross-field drifts under the BOUT++ framework</title>
<link>https://hdl.handle.net/1721.1/158575</link>
<description>Simulations of divertor heat flux width using transport code with cross-field drifts under the BOUT++ framework
Li, N.M; Xu, X.Q.; Hughes, Jerry W.; Terry, James L.; Sun, J.Z.; Wang, D.Z.
The fluid transport code [trans-electric field (Er) module] under the BOUT++ framework has been used to simulate the divertor heat flux width and boundary Er with all drifts and the sheath potential in the scrape-off layer. The calculated steady-state radial Er in the pedestal region has been compared with experimental measurements from the Alcator C-Mod tokamak. The magnitude and shape of Er are similar to those of the experimental data. In order to understand the relative role of cross-field drifts vs turbulent transport in setting the heat flux width, four C-Mod enhanced Dα H-mode discharges with a lower single null divertor configuration are simulated. BOUT++ transport simulations with cross-field drifts included yield a similar heat flux width λq to that of experimental measurements (within a factor of 2) from both the probe and the surface thermocouple diagnostics and show a similar trend with plasma current to that of the Eich experimental scaling. The simulations show that both drifts and turbulent transport compete to determine the heat flux width. The magnetic drifts play a dominant role in setting the divertor heat-flux width, while the E × B drift decreases the heat flux width by 10%–25%, leading to improved agreement with the experiment relative to Goldston’s model. A turbulence diffusivity scan (χ) identifies two distinct regimes: a drift dominant regime when χ is small and a turbulence dominant regime when χ is large. The Goldston heuristic drift model yields a lower limit of the width λq.
Submitted for publication in AIP Advances
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158575</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Overview of multiscale turbulence studies covering ion-to-electron scales in magnetically confined fusion plasma</title>
<link>https://hdl.handle.net/1721.1/158574</link>
<description>Overview of multiscale turbulence studies covering ion-to-electron scales in magnetically confined fusion plasma
Maeyama, Shinya; Tokuzawa, Tokihiko; Howard, Nathan T.; Citrin, Jonathan; Watanabe, Tomo-Hiko
Turbulent transport in magnetically confined fusion plasma has conventionally been analyzed at the ion gyroradius scale based on the microturbulence theory. However, ion-scale turbulence analysis sometimes fails to predict the turbulent transport flux observed experimentally. Microturbulence at the electron gyroradius scale and cross-scale interactions between disparate-scale turbulences are possible mechanisms to resolve this issue. This overview discusses the recent progress in multiscale turbulence studies and presents future perspectives from recent experimental, theoretical, and numerical investigations. The following aspects are highlighted: (1) the importance of electron-scale effects in experiments, (2) the physical mechanisms of cross-scale interactions, (3) modeling electron-scale effects in quasilinear transport models, and (4) the impacts of cross-scale interactions on burning plasmas. Understanding multiscale turbulence is necessary to improve performance prediction and explore optimal operations for future burning plasmas.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158574</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Using the Stix finite element RF code to investigate operation optimization of the ICRF antenna on Alcator C-Mod</title>
<link>https://hdl.handle.net/1721.1/158573</link>
<description>Using the Stix finite element RF code to investigate operation optimization of the ICRF antenna on Alcator C-Mod
Migliore, Christina; Wright, John C.; Stowell, M.; Bonoli, Paul T.
As heating in the Ion Cyclotron Range of Frequencies (ICRF) becomes more favorable in fusion devices, the urgency of predicting and mitigating the impurity generation that arises from it becomes more pressing. In the ICRF regime, rectified Radio Frequency (RF) sheaths are known to form at antenna and material edges that drive negative effects such as sputtering and a decrease in heating efficiency. Methods to mitigate the formation of these RF sheaths through the cancellation of RF image currents have been experimentally studied. A power-phasing scan done on Alcator C-Mod, in which the amount of power on the two inner straps (Pin) versus the total 4 straps (Ptot) was varied, showed a minimization of enhanced potentials for Pin/Ptot ∼ 0.7–0.9, while impurities were minimized for Pin/Ptot ∼ 0.5–0.8. New capabilities in the realm of representing the RF sheath numerically now allow for these experiments to be simulated. Given the size of the sheath relative to the scale of the device, it can be approximated as a Boundary Condition (BC). A new parallelized cold-plasma wave equation solver called Stix implements a non-linear sheath impedance model BC formulated by Myra et al (2015 Phys. Plasmas 22 062507) through the method of finite elements using the MFEM library [http://mfem.org]. It is seen that Stix shows qualitative agreement with the measured C-Mod enhanced potentials.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158573</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dependence of the boundary heat flux width on core and edge profiles in Alcator C-Mod</title>
<link>https://hdl.handle.net/1721.1/158572</link>
<description>Dependence of the boundary heat flux width on core and edge profiles in Alcator C-Mod
Ballinger, S.B.; Brunner, D.; Hubbard, Amanda E.; Hughes, Jerry W.; Kuang, Adam Q.; LaBombard, Brian; Terry, James L.; White, Anne E.
This work presents new evidence that the heat flux width, λq, in the Alcator C-Mod tokamak scales with the edge electron pressure, as observed in the ASDEX Upgrade (AUG) tokamak (Silvagni et al 2020 Plasma Phys. Control. Fusion 62 045015), but the scaling with volume-averaged pressure, p̄, from the plasma stored energy, found by Brunner et al (2018 Nucl. Fusion 58 094002), is a better predictor of λq in Alcator C-Mod than the edge electron pressure. These previous studies, which find that λq decreases with increasing plasma pressure, imply that a high-performance core at high pressure will lead to challenging heat and particle exhaust due to very small λq. This concern has led to our significant enlargement of the C-Mod database with electron density, temperature, and pressure profile data from the Thomson scattering and electron cyclotron emission diagnostics. Using the C-Mod database augmented with new profile data, we find that λq decreases with increasing edge electron pressure as λq ∝ p_e,95^−0.26, similar to results from AUG, showing the strength of cross-machine comparisons. We also find that λq ∝ p_e,core^−0.56, consistent with the original finding from C-Mod that the heat flux width scales as p̄^−0.48 (Brunner et al 2018 Nucl. Fusion 58 094002). The scalings of λq with separatrix pressure and gradient scale length are found to match the AUG results qualitatively. The C-Mod scalings with edge plasma quantities have more scatter than the p̄ scaling and, importantly, show different trends for H-modes relative to L- and I-modes. Investigating the source of this discrepancy presents an opportunity for further study that may improve our ability to predict the heat flux width in different confinement scenarios in the pursuit of optimizing core-edge performance in future reactors.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158572</guid>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cross-code comparison of the edge codes SOLPS-ITER, SOLEDGE2D and UEDGE in modelling a low-power scenario in the DTT</title>
<link>https://hdl.handle.net/1721.1/158571</link>
<description>Cross-code comparison of the edge codes SOLPS-ITER, SOLEDGE2D and UEDGE in modelling a low-power scenario in the DTT
Moscheni, M.; Meineri, C.; Wigram, Mike; Carati, C.; De Marchi, E.; Greenwald, M.; Innocente, P.; LaBombard, Brian; Subba, F.; Wu, H.; Zanino, R.
As reactor-level nuclear fusion experiments are approaching, a solution to the power exhaust issue in future fusion reactors is still missing. The maximum steady-state heat load that can be exhausted by the present technology is around 10 MW m−2. Different promising strategies aiming at successfully managing the power exhaust in reactor-relevant conditions such that the limit is not exceeded are under investigation, and will be tested in the Divertor Tokamak Test (DTT) experiment. Meanwhile, the design of tokamaks beyond the DTT, e.g. EU-DEMO/ARC, is progressing at a high pace. A strategy to work around the present lack of reactor-relevant data consists of exploiting modelling to reduce the uncertainty in the extrapolation in the design phase. Different simulation tools, with their own capabilities and limitations, can be employed for this purpose. In this work, we compare SOLPS-ITER, SOLEDGE2D and UEDGE, three state-of-the-art edge codes heavily used in power exhaust studies, in modelling the same DTT low-power, pure-deuterium, narrow heat-flux-width scenario. This simplified, although still reactor-relevant, testbed eases the cross-comparison and the interpretation of the code predictions, to identify areas where results differ and develop understanding of the underlying causes. Under the conditions investigated, the codes show encouraging agreement in terms of key parameters at both targets, including peak parallel heat flux (1%–45%), ion temperature (2%–19%), and inner target plasma density (1%–23%) when run with similar input. However, strong disagreement is observed for the remaining quantities, from 30% at outer mid-plane up to a factor 4–5 at the targets. The results primarily reflect limitations of the codes: the SOLPS-ITER plasma mesh not reaching the first wall, SOLEDGE2D not including ion-neutral temperature equilibration, and UEDGE enforcing a common ion-neutral temperature. 
Potential improvements that could help enhance the accuracy of the code models for future applications are also discussed.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158571</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development and experimental qualification of novel disruption prevention techniques on DIII-D</title>
<link>https://hdl.handle.net/1721.1/158570</link>
<description>Development and experimental qualification of novel disruption prevention techniques on DIII-D
Barr, J.L.; Sammuli, B.; Humphreys, D.A.; Olofsson, E.; Du, X.D.; Rea, Cristina; Wehner, W.P.; Boyer, M.D.; Eidietis, N.W.; Granetz, R.; Hyatt, A.; Liu, T.; Logan, N.; Munaretto, S.; Strait, E.; Wang, Z.R.; The DIII-D Team
Novel disruption prevention solutions spanning a range of control regimes are being developed and tested on DIII-D to enable ITER success. First, a new real-time control algorithm has been developed and tested for regulating nearness to stability limits and maintaining safety margins. Its first application has been the reliable prevention of vertical displacement events (VDEs) by adjusting plasma elongation (κ) and the inner gap between the plasma and inner wall in response to real-time open-loop VDE growth rate (γ) estimators. VDEs were robustly prevented up to average open-loop growth rates of 800 rad/s with initial tunings, applying shape modification only when near safety limits. Second, the disruption risk during fast, emergency shutdown after large tearing and locked modes can be significantly reduced by transitioning to a limited topology during shutdown. More than 50% of emergency limited shutdowns after locked modes reach a final normalized current I_N &lt; 0.3 before terminating, scaling to the 3 MA ITER requirement. This is in contrast to diverted shutdowns, the majority of which disrupt at I_N &gt; 0.8. Despite improvements, these results highlight the critical importance of early prevention. Third, a novel emergency shutdown method has been developed which excites instabilities to form a warm, helical core post-thermal quench. The current quench extends to ~100 ms and avoids VDEs and runaway electron generation. Novel real-time machine learning disruption prediction has been integrated with the DIII-D proximity controller, and a real-time compatible multi-mode MHD spectroscopy technique has been developed. Results presented here were enabled by a focused effort, the Disruption Free Protocol, in DIII-D’s 2019-20 campaign to complement disruption prevention experiments with a large piggy-back program.
In addition to testing novel techniques, it is estimated to have helped avoid 32 potential disruptions in piggyback operations with rapid, early shutdowns after large rotating n=1 or locked modes.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158570</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dimensionless Parameter Scaling of Intrinsic Torque in C-Mod Enhanced Confinement Plasmas</title>
<link>https://hdl.handle.net/1721.1/158569</link>
<description>Dimensionless Parameter Scaling of Intrinsic Torque in C-Mod Enhanced Confinement Plasmas
Rice, John E.; Cao, N.M.; Tala, T.; Chrystal, C.; Greenwald, M.J.; Hughes, Jerry W.; Marmar, E.S.; Reinke, M.L.; Rodriguez Fernandez, Pablo; Salmi, A.
A dimensionless parameter dependence study of intrinsic torque has been performed on a database of H- and I-mode plasmas from the Alcator C-Mod tokamak. The torque was determined by comparing intrinsic angular momentum density profiles just before and just after L-H and L-I transitions. The intrinsic torque has been found to scale as beta_N^1.5 rho_*^-1.0 nu_*^0.1, over the parameter ranges 0.3 &lt; beta_N &lt; 1.5, 0.004 &lt; rho_* &lt; 0.011 and 0.04 &lt; nu_* &lt; 0.9. Comparison with results from other tokamaks suggests that the intrinsic torque should be normalized by some measure of the device size.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158569</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>A semi-supervised machine learning detector for physics events in tokamak discharges</title>
<link>https://hdl.handle.net/1721.1/158568</link>
<description>A semi-supervised machine learning detector for physics events in tokamak discharges
Montes, Kevin J.; Rea, Cristina; Tinguely, R. Alex; Sweeney, Ryan; Zhu, Jinxiang; Granetz, Robert
Databases of physics events have been used in various fusion research applications, including the development of scaling laws and disruption avoidance algorithms, yet they can be time-consuming and tedious to construct. This paper presents a novel application of the label spreading semi-supervised learning algorithm to accelerate this process by detecting distinct events in a large dataset of discharges, given few manually labeled examples. A high detection accuracy (&gt;85%) for H-L back transitions and initially rotating locked modes is demonstrated on a dataset of hundreds of discharges from DIII-D with manually identified events for which only 3 discharges are initially labeled by the user. Lower yet reasonable performance (~75%) is also demonstrated for the core radiative collapse, an event with a much lower prevalence in the dataset. Additionally, analysis of the performance sensitivity indicates that the same set of algorithmic parameters is optimal for each event. This suggests that the method can be applied to detect a variety of other events not included in this paper, given that the event is well described by a set of 0D signals robustly available on many discharges. Procedures for analysis of new events are demonstrated, showing automatic event detection with increasing fidelity as the user strategically adds manually labeled examples. Detections on Alcator C-Mod and EAST are also shown, demonstrating the potential for this to be used on a multi-tokamak dataset.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sun, 01 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158568</guid>
<dc:date>2020-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electron acceleration in laboratory-produced turbulent collisionless shocks</title>
<link>https://hdl.handle.net/1721.1/158567</link>
<description>Electron acceleration in laboratory-produced turbulent collisionless shocks
Fiuza, F.; Swadling, G.F.; Grassi, A.; Rinderknecht, H.G.; Higginson, D.P.; Ryutov, D.D.; Bruulsema, C.; Drake, R.P.; Funk, S.; Glenzer, S.; Gregori, G.; Li, Chi-Kang; Pollock, B.B.; Remington, B.A.; Ross, J.S.; Rozmus, W.; Sakawa, Y.; Spitkovsky, A.; Wilks, S.; Park, H.-S.
Astrophysical collisionless shocks are among the most powerful particle accelerators in the Universe. Generated by violent interactions of supersonic plasma flows with the interstellar medium, supernova remnant shocks are observed to amplify magnetic fields and accelerate electrons and protons to highly relativistic speeds. In the well-established model of diffusive shock acceleration, relativistic particles are accelerated by repeated shock crossings. However, this requires a separate mechanism that pre-accelerates particles to enable shock crossing. This is known as the ‘injection problem’, which is particularly relevant for electrons and remains one of the most important puzzles in shock acceleration. In most astrophysical shocks, the details of the shock structure cannot be directly resolved, making it challenging to identify the injection mechanism. Here we report results from laser-driven plasma flow experiments, and related simulations, that probe the formation of turbulent collisionless shocks in conditions relevant to young supernova remnants. We show that electrons can be effectively accelerated to relativistic non-thermal energies in a first-order Fermi process by small-scale turbulence produced within the shock transition, helping overcome the injection problem. Our observations provide new insight into electron injection at shocks and open the way for controlled laboratory studies of the physics underlying cosmic accelerators.
Submitted for publication in Nature Physics
</description>
<pubDate>Thu, 01 Aug 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158567</guid>
<dc:date>2019-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evolution in microstructure and hardness of Titanium-Zirconium-Molybdenum (TZM) alloy after depth marker implantation for erosion diagnostic in fusion devices</title>
<link>https://hdl.handle.net/1721.1/158566</link>
<description>Evolution in microstructure and hardness of Titanium-Zirconium-Molybdenum (TZM) alloy after depth marker implantation for erosion diagnostic in fusion devices
Waseem, Owais Ahmed; Woller, Kevin Benjamin
A depth marker created by ion implantation has been used for analysis of erosion on plasma-facing materials in fusion experiments. To assess the impact of ion implantation on the surface properties of these materials under investigation, Titanium-Zirconium-Molybdenum (TZM) alloy was irradiated with 4.8 MeV F³⁺ ions up to 1.04x10¹⁷ cm⁻² at 330 °C, achieving a depth marker centroid at ∼1.5 μm. After implantation, there is no significant change in microstructure or surface roughness under these implantation conditions, which were used for samples positioned on the high field side in the Experimental Advanced Superconducting Tokamak (EAST) device for erosion analysis. Nanoindentation measurements indicate an increase in hardness from ∼5.5 GPa to ∼6.6 GPa in the first 300 nm of the surface, within the zone eroded over a full year's experimental campaign. Within 1.5 μm of the surface, where the damage from the ion beam is expected to be generated, the microstructure of implanted TZM shows a large number of dislocation lines (i.e. 1.0x10⁹ dislocations/mm²), determined from TEM analysis, which can account for the increase in hardness of implanted TZM. These changes due to the ion implantation, though minor, should be considered when using ion implanted depth markers for erosion measurements of plasma-facing materials.
Submitted for publication in Materials Chemistry and Physics
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158566</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Development of Compact Tokamak Fusion Reactor Use Cases to Inform Future Transport Studies</title>
<link>https://hdl.handle.net/1721.1/158565</link>
<description>Development of Compact Tokamak Fusion Reactor Use Cases to Inform Future Transport Studies
Holland, C.; Bas, E.M.; Orlov, D.M.; McClenaghan, J.; Lyons, B.; Grierson, B.A.; Jian, X.; Howard, Nathan T.; Rodriguez-Fernandez, Pablo
The OMFIT STEP [O. Meneghini et al., Nucl. Fusion 10 1088 (2020)] workflow has been used to develop inductive and steady-state H-mode core plasma scenario use cases for a B0 = 8 T, R0 = 4 m machine in order to help guide and inform future higher-fidelity studies of core transport and confinement in compact tokamak reactors. Both use cases are designed to produce 200 MW or more of net electric power in an up-down symmetric plasma with minor radius a = 1.4 m, elongation κ = 2.0, triangularity δ = 0.5, and effective charge Zeff ≃ 2. Additional considerations based on the need for compatibility of the core with reactor-relevant power exhaust solutions and external actuators were used to guide and constrain the use case development. An extensive characterization of core transport in both scenarios is presented, the most important feature of which is the extreme sensitivity of the results to the quantitative stiffness level of the transport model used as well as the predicted critical gradients. This sensitivity is shown to arise from the different levels of transport stiffness exhibited by the models, combined with the gyroBohm-normalized fluxes of the predictions being an order of magnitude larger than in other H-mode plasmas. Additionally, it is shown that although heating in both plasmas is predominantly to the electrons and collisionality is low, the plasmas remain sufficiently well-coupled for the ions to carry a significant fraction of the thermal transport. As neoclassical transport is negligible in these conditions, this situation inherently requires long-wavelength ion gyroradius-scale turbulence to be the dominant transport mechanism in both plasmas. These results are combined with other basic considerations to propose a simple heuristic model of transport in reactor-relevant plasmas, along with simple metrics to quantify coupling and core transport properties across burning and non-burning plasmas.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158565</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of lithium wall conditioning and wave-frequency on high density lower hybrid current drive experiment on EAST</title>
<link>https://hdl.handle.net/1721.1/158564</link>
<description>Impact of lithium wall conditioning and wave-frequency on high density lower hybrid current drive experiment on EAST
Baek, Seung Gyou; Li, M.H.; Wallace, Greg M.; Bonoli, Paul T.; Choi, W.; Ding, B.J.; Gao, W.; Gong, X.; Li, Y.C.; Lin, S.; Meng, L.; Poli, F.; Shiraiwa, S.; Wang, M.; Wang, Y.F.; Wu, C.B.; Wang, L.; Zang, Q.; Zhao, H.
A series of dedicated lower hybrid current drive (LHCD) experiments on EAST shows that lithium wall conditioning extends LH current drive and heating up to a line-averaged density of n̄_e ≈ 4x10¹⁹ m⁻³ for both 2.45 and 4.6 GHz. Current drive at such a high density is crucial for the development of long-pulse non-inductive scenarios on EAST. With lithiation, LH power injection of 1.5 MW at 2.45 GHz resulted in a loop voltage drop of ~0.3 V, comparable to the drop observed with 1.1 MW at 4.6 GHz. The observed decrease in loop voltage is attributed mostly to the RF heating effect. Another LHCD experiment suggests that lithium wall coating has a more significant impact on the scrape-off-layer (SOL) properties than changes in the Greenwald fraction. LHCD at 2.45 GHz still suffers from a loss of efficiency. Enhanced ionization in front of the launcher may cause the onset of density-dependent wave instabilities. The rise in the midplane SOL density may also accelerate a transition in the divertor regime, leading to additional ionization and collisional losses in the X-point divertor plasma. Ray-tracing modeling supports the conclusion that a lower wave frequency is more prone to collisional power loss. The experiments confirm that lithiation is a useful tool to control the SOL plasma, and suggest that density control in front of the launcher may be critical to mitigating power loss mechanisms in the plasma boundary.
Submitted for publication in Journal of Nuclear Materials
</description>
<pubDate>Wed, 01 Jul 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158564</guid>
<dc:date>2020-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design considerations for an ultrahigh-bandwidth Phase Contrast Imaging system applied to fusion grade devices</title>
<link>https://hdl.handle.net/1721.1/158563</link>
<description>Design considerations for an ultrahigh-bandwidth Phase Contrast Imaging system applied to fusion grade devices
Marinoni, Alessandro; Rost, Jon C.; Porkolab, Miklos
The PCI diagnostic is an internal reference interferometer that creates an image of absolutely calibrated electron density fluctuations integrated along the line of sight of the probing light beam. While conventional PCI diagnostics installed on fusion experiments worldwide employ light of wavelength 10.59 μm, the same system using light at 1.55 μm wavelength would extend the spectral response in wave-number and frequency by factors of seven and over one hundred, respectively, thereby potentially providing quantitative measurements of the internal structure of density perturbations induced by either turbulent or radio-frequency waves, simultaneously covering ion to electron gyro-radius scales up to the GHz frequency region. Based on a previously developed 1.55 μm PCI prototype system, constraints on the design of such a diagnostic in fusion grade devices are presented and compared to those faced with the conventional method.
Submitted for publication in Journal of Instrumentation
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158563</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying experimental edge plasma evolution via multidimensional adaptive Gaussian process regression</title>
<link>https://hdl.handle.net/1721.1/158562</link>
<description>Quantifying experimental edge plasma evolution via multidimensional adaptive Gaussian process regression
Mathews, Abhilash; Hughes, Jerry W.
The edge density and temperature of tokamak plasmas are strongly correlated with energy and particle confinement and their quantification is fundamental to understanding edge dynamics. These quantities exhibit behaviours ranging from sharp plasma gradients and fast transient phenomena (e.g. transitions between low and high confinement regimes) to nominal stationary phases. Analysis of experimental edge measurements therefore requires robust fitting techniques to capture potentially stiff spatiotemporal evolution. Additionally, fusion plasma diagnostics inevitably involve measurement errors and data analysis requires a statistical framework to accurately quantify uncertainties. This paper outlines a generalized multidimensional adaptive Gaussian process routine capable of automatically handling noisy data and spatiotemporal correlations. We focus on the edge-pedestal region in order to underline advancements in quantifying time-dependent plasma profiles including transport barrier formation on the Alcator C-Mod tokamak.
Submitted for publication in IEEE Transactions on Plasma Science
</description>
<pubDate>Thu, 01 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158562</guid>
<dc:date>2020-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Protection Characteristic Comparison Between No-Insulation, Metal-as-Insulation, and Surface-Shunted-Metal-as-Insulation REBCO Coils</title>
<link>https://hdl.handle.net/1721.1/158561</link>
<description>Self-Protection Characteristic Comparison Between No-Insulation, Metal-as-Insulation, and Surface-Shunted-Metal-as-Insulation REBCO Coils
Kim, Junseong; Park, Dongkeun; Dong, Fangliang; Lanzrath, Andrew; Lee, Wooseung; Bascuñán, Juan; Iwasa, Yukikazu
Metal tape co-winding, or the metal-as-insulation (MI) winding method, is an excellent way to improve the mechanical properties and reduce the average current density, thereby decreasing the stress in high-field REBCO magnets without completely losing the benefits of the no-insulation (NI) winding method. However, MI winding increases the resistance between turns, known as the characteristic resistance. The increased characteristic resistance can reduce the bypass current during abnormal transition situations, such as quench, which may not be desirable from a magnet protection point of view. To take advantage of both the MI and NI windings, one possible solution to reduce the characteristic resistance of MI winding coils is to add a shunt on top of the winding surface of the coil. We call this method surface-shunted-metal-as-insulation (SSMI). In this presentation, we compare the characteristic resistances and the correlated self-protecting characteristics of NI, MI, and SSMI coils. We present test results for single pancake coils wound using the different winding methods (NI, MI, and SSMI) with the same winding pressure of 20 N. In particular, we investigated how the SSMI method affects the characteristic resistance.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158561</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Protection Characteristic Comparison Between No-Insulation, Metal-as-Insulation, and Surface-Shunted-Metal-as-Insulation REBCO Coils</title>
<link>https://hdl.handle.net/1721.1/158560</link>
<description>Self-Protection Characteristic Comparison Between No-Insulation, Metal-as-Insulation, and Surface-Shunted-Metal-as-Insulation REBCO Coils
Kim, Junseong; Park, Dongkeun; Dong, Fangliang; Lanzrath, Andrew; Lee, Wooseung; Bascuñán, Juan; Iwasa, Yukikazu
Metal tape co-winding, or the metal-as-insulation (MI) winding method, is an excellent way to improve the mechanical properties and reduce the average current density, thereby decreasing the stress in high-field REBCO magnets without completely losing the benefits of the no-insulation (NI) winding method. However, MI winding increases the resistance between turns, known as the characteristic resistance. The increased characteristic resistance can reduce the bypass current during abnormal transition situations, such as quench, which may not be desirable from a magnet protection point of view. To take advantage of both the MI and NI windings, one possible solution to reduce the characteristic resistance of MI winding coils is to add a shunt on top of the winding surface of the coil. We call this method surface-shunted-metal-as-insulation (SSMI). In this presentation, we compare the characteristic resistances and the correlated self-protecting characteristics of NI, MI, and SSMI coils. We present test results for single pancake coils wound using the different winding methods (NI, MI, and SSMI) with the same winding pressure of 20 N. In particular, we investigated how the SSMI method affects the characteristic resistance.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Tue, 01 Nov 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158560</guid>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Sudden-Discharging Quench Dynamics in a No-Insulation Superconducting Coil</title>
<link>https://hdl.handle.net/1721.1/158559</link>
<description>Sudden-Discharging Quench Dynamics in a No-Insulation Superconducting Coil
Dong, Fangliang; Park, Dongkeun; Kim, Junseong; Bascuñán, Juan; Iwasa, Yukikazu
It is generally agreed that no-insulation (NI) high-temperature superconducting (HTS) magnets do not quench, because of the turn-to-turn energy-releasing bypass unique to NI. However, these magnets, especially those with high operating current and low ambient thermal capacity, can still experience unexpected quenches when the current through the magnet suddenly drops to zero (i.e., the sudden-discharging quench). Here, we report this kind of quench, which is different from the widely reported quench that happens during charging (i.e., the energizing quench). A demonstrative coil with 655 turns, a 350 A operating current, and 4 K conduction cooling is used to demonstrate this sudden-discharging quench, and a simulation model is built to reveal the quench dynamics. Results show that turn-to-turn heating triggers the initial partial quench in the inner coil turns, and the induced overcurrent then spreads the quench like an avalanche to the outer coil turns.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158559</guid>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Observations of multi-ion physics and kinetic effects in a surrogate to the solar CNO reactions</title>
<link>https://hdl.handle.net/1721.1/158558</link>
<description>Observations of multi-ion physics and kinetic effects in a surrogate to the solar CNO reactions
Jeet, J.; Zylstra, A.B.; Gatu Johnson, Maria; Kabadi, Neel V.; Adrian, Patrick J.; Forrest, C.; Glebov, V.
The ‘CNO process’ occurs in heavier stars with finite metallicity, in which hydrogen burning is catalyzed in the presence of ¹²C. These reactions are more strongly dependent on temperature than the pp cycle reactions, and thus the CNO cycle dominates only in massive stars. For these types of reactions to be studied at ICF facilities such as OMEGA, an implosion platform using heavier nuclei in the fuel and capable of creating ion temperatures on the order of at least 20 keV is required. A potential route to reach these conditions is to take advantage of kinetic effects in low-convergence shock-driven ‘exploding pusher’ implosions. In this experiment, shots were conducted at the OMEGA laser facility using the surrogate reaction ¹³C + D, whose cross section is substantially higher than those of the actual astrophysical CNO reactions. The yield of this reaction in these implosions was much lower than expected. Physical explanations are discussed, with significant species stratification the likely explanation.
Submitted for publication in High Energy Density Physics
</description>
<pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158558</guid>
<dc:date>2023-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Brazing characteristics, microstructure, and wettability of laser powder bed fusion additive manufactured GRCop-84 compared to CuCrZr and OFC, and brazing to titanium-zirconium-molybdenum alloy limiters</title>
<link>https://hdl.handle.net/1721.1/158557</link>
<description>Brazing characteristics, microstructure, and wettability of laser powder bed fusion additive manufactured GRCop-84 compared to CuCrZr and OFC, and brazing to titanium-zirconium-molybdenum alloy limiters
Seltzman, Andrew H.; Wukitch, S.J.
Laser Powder Bed Fusion (L-PBF) of Glenn Research Copper 84 (GRCop-84), a Cr2Nb (8 at. % Cr, 4 at. % Nb) precipitation hardened alloy, produces a fully dense, high conductivity alloy with a yield strength of 500 MPa and ultimate tensile strength (UTS) of 740 MPa with 20% elongation, superior to other competing copper alloys. Braze wetting characteristics of GRCop-84 with Ag-Cu-X and Au-Cu brazes were similar to CuCrZr, but less than oxygen free copper. No difference in wetting was observed between infill and surface contour areas in L-PBF GRCop-84. Wet sanding to 240 grit (Ra=0.24 µm) was considered the optimal surface condition. Silver diffusing through GRCop-84 depleted Cr2Nb precipitates from the copper grains and deposited agglomerations of coarsened precipitates within silver-rich regions of intergranular diffusion once a density threshold was reached. Microstructure modification was minimized with 50Au-50Cu braze, implying that silver, and not high temperature exposure, caused precipitate coarsening and agglomeration. Coarsened precipitates were observed on the surface within braze pools, implying a contribution to braze wetting. Palcusil-25, Ticusil, CuSil-ABA, and 50Au-50Cu brazes were suitable for brazing to unplated Titanium-Zirconium-Molybdenum (TZM), while sulfamate nickel plating allows wetting with CuSil or other non-active brazes. Vacuum brazing techniques were developed to join a 1 mm thick layer of TZM to the front of additive manufactured GRCop-84 waveguides, considering the brazing characteristics of both GRCop-84 and TZM and the internal stress from the difference in coefficient of thermal expansion.
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Fri, 01 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158557</guid>
<dc:date>2022-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Towards the first plasma-electron screening experiment</title>
<link>https://hdl.handle.net/1721.1/158556</link>
<description>Towards the first plasma-electron screening experiment
Casey, Daniel T.; Weber, Chris R.; Zylstra, Alex B.; Cerjan, Charlie J.; Hartouni, Ed; Hohenberger, Matthias; Divol, Laurent; Dearborn, David S.; Kabadi, Neel V.; Lahmann, Brandon; Gatu Johnson, Maria; Frenje, Johan A.
The enhancement of fusion reaction rates in a thermonuclear plasma by electron screening of the Coulomb barrier is an important plasma-nuclear effect that is present in stellar models but has not been experimentally observed. Experiments using inertial confinement fusion (ICF) implosions may provide a unique opportunity to observe this important plasma-nuclear effect. Herein, we show that experiments at the National Ignition Facility (NIF) have reached the relevant physical regime with respect to the density and temperature conditions, but the estimated impacts of plasma screening on nuclear reaction rates are currently too small and need to be increased to lower the expected measurement uncertainty. Detailed radiation hydrodynamics simulations show that practical target changes, like adding readily available high-Z gases and significantly slowing the in-flight implosion velocity while maintaining in-flight kinetic energy, might be able to push these conditions to those where plasma screening effects may be measurable. We also perform synthetic data exercises to help understand where the anticipated experimental uncertainties will become important. But challenges remain, such as the detectability of the reaction products, non-thermal plasma effects, species separation, and impacts of spatial and temporal gradients. This work lays the foundation for future efforts to develop an important platform capable of the first plasma electron screening observation.
Submitted for publication in Frontiers in Physics
</description>
<pubDate>Thu, 01 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158556</guid>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Transport of High-energy Charged Particles through Spatially Intermittent Turbulent Magnetic Fields</title>
<link>https://hdl.handle.net/1721.1/158555</link>
<description>Transport of High-energy Charged Particles through Spatially Intermittent Turbulent Magnetic Fields
Chen, L.E.; Bott, A.F.A.; Tzeferacos, P.; Rigby, A.; Bell, A.; Bingham, R.; Graziani, C.; Katz, J.; Koenig, M.; Li, Chi-Kang; Petrasso, Richard D.; Park, H.-S.; Ross, J.S.; Ryu, D.; White, T.G.; Reville, B.; Matthews, J.; Meinecke, J.; Miniati, F.; Zweibel, E.G.; Sarkar, S.; Schekochihin, A.A.; Lamb, D.Q.; Froula, D.H.; Gregori, G.
Identifying the sources of the highest-energy cosmic rays requires understanding how they are deflected by the stochastic, spatially intermittent intergalactic magnetic field. Here we report measurements of energetic charged-particle propagation through a laser-produced magnetized plasma with these properties. We characterize the transport of the particles experimentally. The results show that the transport is diffusive and that, for the regime of interest for the highest-energy cosmic rays, the diffusion coefficient is unaffected by the spatial intermittency of the magnetic field.
Submitted for publication in Astrophysical Journal
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158555</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding LOC/SOC Phenomenology in Tokamaks</title>
<link>https://hdl.handle.net/1721.1/158554</link>
<description>Understanding LOC/SOC Phenomenology in Tokamaks
Rice, John E.; Citrin, J.; Cao, N.M.; Diamond, P.H.; Fable, E.; Greenwald, M.; Grierson, B.A.
Phenomenology of Ohmic energy confinement saturation in tokamaks is reviewed. Characteristics of the linear Ohmic confinement (LOC) and saturated Ohmic confinement (SOC) regimes are documented and transformations in all transport channels across the LOC/SOC transition are described, including rotation reversals, ‘non-local’ cut-off and density peaking, in addition to dramatic changes in fluctuation intensity. Unification of results from nearly 20 devices indicates that the LOC/SOC transition occurs at a critical value of the product of the density, edge safety factor and device major radius, and that this product increases with toroidal magnetic field. Comparison with gyro-kinetic simulations suggests that the effects of sub-dominant TEMs are important in the LOC regime while ITG mode turbulence dominates with SOC.
Submitted for publication in Nuclear Fusion
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158554</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design of inertial fusion implosions reaching the burning plasma regime</title>
<link>https://hdl.handle.net/1721.1/158553</link>
<description>Design of inertial fusion implosions reaching the burning plasma regime
Kritcher, A.L.; Young, C.V.; Robey, H.F.; Weber, C.R.; Zylstra, A.B.; Hurricane, O.A.; Callahan, D.A.; Ralph, J.E.; Ross, J.S.; Baker, K.L.; Casey, D.T.; Clark, D.S.; Döeppner, T.; Divol, L.; Hohenberger, M.; Le Pape, S.; Pak, A.E.; Patel, P.K.; Tommasini, R.; Ali, S.J.; Amendt, P.A.; Atherton, J.; Bachmann, B.; Bailey, D.; Benedetti, L.R.; Berzak Hopkins, L.; Betti, R.; Bhandarkar, S.D.; Bionta, R.M.; Birge, N.W.; Bond, E.J.; Bradley, D.K.; Braun, T.; Briggs, T.M.; Bruhn, M.W.; Celliers, P.M.; Chang, B.; Chapman, T.; Chen, H.; Choate, C.; Christopherson, A.R.; Crippen, J.W.; Dewald, E.L.; Dittrich, T.R.; Edwards, M.J.; Farmer, W.A.; Field, J.E.; Fittinghoff, D.; Frenje, Johan A.; Gaffney, J.; Gatu Johnson, Maria; Glenzer, S.H.; Grim, G.P.; Haan, S.; Hahn, K.D.; Hall, G.N.; Hammel, B.A.; Harte, J.; Hartouni, E.; Heebner, J.E.; Hernandez, V.J.; Herrmann, H.; Herrmann, M.C.; Hinkel, D.E.; Ho, D.D.; Holder, J.P.; Hsing, W.W.; Huang, H.; Humbird, K.D.; Izumi, N.; Jeet, J.; Jones, O.; Kerbel, G.D.; Kerr, S.M.; Khan, S.F.; Kilkenny, J.; Kim, Y.; Geppert Kleinrath, H.; Geppert Kleinrath, V.; Kline, J.L.; Kong, C.; Koning, J.M.; Kroll, J.J.; Landen, O.L.; Langer, S.; Larson, D.; Lemos, N.C.; Lindl, J.D.; Ma, T.; MacGowan, B.J.; Mackinnon, A.J.; MacLaren, S.A.; MacPhee, A.G.; Marinak, M.M.; Mariscal, D.A.; Marley, E.V.; Masse, L.; Meaney, K.; Meezan, N.B.; Michel, P.A.; Millot, M.A.; Milovich, J.L.; Moody, J.D.; Moore, A.S.; Morton, J.W.; Newman, K.; Di Nicola, J.-M. G.; Nikroo, A.; Nora, R.; Patel, M.V.; Pelz, L.J.; Peterson, J.L.; Ping, Y.; Pollock, B.B.; Ratledge, M.; Rice, N.G.; Rinderknecht, H.; Rosen, M.; Rubery, M.S.; Salmonson, J.D.; Sater, J.; Schiaffino, S.; Schlossberg, D.J.; Schneider, M.B.; Schroeder, C.R.; Scott, H.A.; Sepke, S.M.; Sequoia, K.; Sherlock, M.W.; Shin, S.; Smalyuk, V.A.; Spears, B.K.; Springer, P.T.; Stadermann, M.; Stoupin, S.; Strozzi, D.J.; Suter, L.J.; Thomas, C.A.; Town, R.P.J.; Tubman, E.R.; Volegov, P.L.; Widmann, K.; Wild, C.; Wilde, C.H.; Van Wonterghem, B.M.; Woods, D.T.; Woodworth, B.N.; Yamaguchi, M.; Yang, S.T.; Zimmerman, G.B.
One of the last remaining milestones in fusion research before reaching ignition is creating a burning plasma state, where alpha particles from deuterium-tritium (DT) fusion reactions redeposit their energy as the dominant source of heating in the plasma. The indirect-drive inertial confinement fusion approach at the National Ignition Facility (NIF) uses a laser-generated radiation cavity (hohlraum) to spherically implode DT fuel to high temperatures and densities in a central “hot spot”. Here, we deliver more energy to the hot spot than ever before, while maintaining the extreme pressures required for inertial confinement, by increasing the size of the implosion compared to previous experiments. We develop more efficient hohlraums to drive these larger implosions within NIF’s current laser energy and power capability, and control symmetry by moving energy between laser beams and by changing the shape of the hohlraum. These designs resulted in record fusion powers of 1.5 petawatts, greater than the input power of the laser, and 170 kJ of fusion energy. Radiation hydrodynamics simulations show alpha particle heating as the dominant term in the hot spot energy balance, i.e. a burning plasma state. This work is expected to motivate future studies of burning plasmas and improve predictive capability by providing a benchmark for modeling used to understand the proximity to ignition.
Submitted for publication in Nature Physics
</description>
<pubDate>Sat, 01 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158553</guid>
<dc:date>2021-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Collisional effects on resonant particles in quasilinear theory</title>
<link>https://hdl.handle.net/1721.1/158552</link>
<description>Collisional effects on resonant particles in quasilinear theory
Catto, Peter J.
A careful examination of the effects of collisions on resonant wave-particle interactions leads to an alternate interpretation and deeper understanding of the quasilinear operator originally formulated by Kennel and Engelmann (Phys. Fluids, vol. 9, 1966, pp. 2377-2388) for collisionless, magnetized plasmas, and widely used to model radio frequency heating and current drive. The resonant and nearly resonant particles are particularly sensitive to collisions that pitch angle scatter them out of and into resonance. As a result, the resonant particle-wave interactions occur in the center of a narrow collisional boundary layer when the collision frequency ν is very small compared to the wave frequency ω. The diffusive nature of the pitch angle scattering combined with the wave-particle resonance condition enhances the collision frequency by (ω/ν)^(2/3) ≫ 1, resulting in an effective resonant particle collision time of τ_int ~ (ν/ω)^(2/3)/ν ≪ 1/ν. A rigorous collisional boundary layer analysis generalizes the standard quasilinear operator to a form that is fully consistent with Kennel-Engelmann, but allows replacing the delta function appearing in the diffusivity with a simple integral (having the appropriate delta function limit) retaining the new physics associated with the narrow boundary layer, while preserving the entropy production principle. The limitations of the collisional boundary layer treatment are also estimated, and indicate that substantial departures from Maxwellian are not permitted.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Sat, 01 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158552</guid>
<dc:date>2020-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Combinatorial synthesis and analysis of AlxTayVz-Cr20Mo20Nb20Ti20Zr10 and Al10CrMoxNbTiZr10 refractory high-entropy alloys: Oxidation behavior</title>
<link>https://hdl.handle.net/1721.1/158551</link>
<description>Combinatorial synthesis and analysis of AlxTayVz-Cr20Mo20Nb20Ti20Zr10 and Al10CrMoxNbTiZr10 refractory high-entropy alloys: Oxidation behavior
Waseem, Owais Ahmed; Ryu, Ho Jin
The combinatorial development of refractory high-entropy alloy AlxTayVz-Cr20Mo20Nb20Ti20Zr10 (AlxTayVz-Q) was carried out, and microstructural analysis was performed. The homogenized AlxTayVz-Q revealed a body-centered cubic structure with intermetallic phases. High-temperature oxidation analysis of AlxTayVz-Q for 1 h at 1000 °C using thermogravimetric analysis (TGA) revealed volatile oxidation of the alloy. Therefore, in an effort to improve the oxidation resistance of the alloy, the composition was modified to Al10CrMoxNbTiZr10 and analyzed. The TGA analysis revealed enhanced oxidation resistance of Al10CrNbTiZr10 (Mo-0), with a weight gain of only 1 mg/cm2 after oxidation for 1 h at 1000 °C in air, owing to the formation of the protective oxides of Al and Cr. The Mo-x samples were subjected to prolonged oxidation (for 50 h) at 1000 °C in air. After 50 h of oxidation, the Mo-0 sample showed a weight gain of ∼24 mg/cm2 and remained intact. The energy dispersive spectroscopy analysis of the oxide scale formed after 50 h of oxidation revealed CrNbO4, Al2O3, and AlTiO5, which account for the enhanced oxidation resistance of Mo-0 and forecast its potential for high-temperature applications.
Submitted for publication in Journal of Alloys and Compounds
</description>
<pubDate>Wed, 01 Jul 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158551</guid>
<dc:date>2020-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Ultra-rapid, physics-based development pathway for reactor-relevant RF antenna materials</title>
<link>https://hdl.handle.net/1721.1/158550</link>
<description>Ultra-rapid, physics-based development pathway for reactor-relevant RF antenna materials
Wallace, Greg M.; Botica Artalejo, E.; Short, M.P.; Woller, K.B.
This paper presents a rapid, atomistically-informed, experimental development pathway for fusion reactor-relevant radio frequency (RF) antenna materials in the Cu-Cr-(Nb,Al,Zr) composition system, with the goal of improving upon GRCop-84. RF antennas in a tokamak fusion reactor will face a unique set of challenges as both structural and functional materials. The desired material must simultaneously achieve and maintain high electrical conductivity, high strength, high thermal conductivity, resist high temperatures, possess low nuclear activation, and incur low damage due to neutron bombardment. The GRCop-84 alloy serves as a starting point for iterative improvement, with the desire to reduce or eliminate Nb from the material to minimize nuclear activation. The rapid development pathway makes use of a multi-target combinatorial thick film sputtering process to produce full ternary phase diagrams on a Si wafer substrate. Transient grating spectroscopy (TGS), a laser-ultrasonic method, will determine spatially-varying thermo-elastic properties, while four-terminal electrical conductivity measurements will map out the best-performing regions of the sample for in-depth study at larger length scales. High energy proton and self-ion irradiation emulates the effects of neutron damage on the thermal/electric properties. With rapid turnaround time (∼days) in terms of mapping radiation damage-induced material property changes in the full ternary system, these techniques allow rapid iteration towards an optimal material, testing hundreds of nearby compositions in the time it took to test one. Focused testing of larger, single composition samples (produced in an arc furnace or by laser sintering) provides data on structural and high power RF properties, and validates our thick-film based workflow.
Submitted for publication in IEEE Transactions on Plasma Science
</description>
<pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158550</guid>
<dc:date>2022-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>A 20-K, 600-W, Cryocooler-Based, Supercritical Helium Circulation System for the SPARC Toroidal Field Model Coil Program</title>
<link>https://hdl.handle.net/1721.1/158549</link>
<description>A 20-K, 600-W, Cryocooler-Based, Supercritical Helium Circulation System for the SPARC Toroidal Field Model Coil Program
Michael, Philip C.; Golfinopoulos, Theodore; Ihloff, Ernest; Zhukovsky, Alexander; Schweiger, Shane; Fry, Vincent; O'Shea, Colin; Watterson, Amy; Nash, Daniel; Vieira, Rui F.; Doody, Jeffrey; Barnett, Raheem; Voirin, Erik A.; Bartoszek, Larry; Lations, Richard F.; Hartwig, Zachary S.
From June 2019 to July 2021, the MIT Plasma Science and Fusion Center in collaboration with Commonwealth Fusion Systems (CFS) designed, built, and commissioned a test facility at MIT to evaluate the performance of a REBCO-based, 2.9-m tall, 1.9-m wide Toroidal Field Model Coil (TFMC) for the SPARC tokamak. This paper presents the facility’s supercritical helium (SHe) circulation system design and measured performance. The facility employed a forced-flow SHe-circulation loop cooled by cryocoolers to provide a nominal cooling power of 600 W at 20 K and up to 70 g/s SHe flow to the TFMC at an absolute pressure of 20 bar. The reliance on cryocoolers as the facility’s cooling source was an ideal arrangement. Procurement costs were modest, acquisition time was reasonable, and siting requirements were minimal. Steady improvement in cryocooler design provided a simple-to-use system with sufficient cooling capacity for our needs. Extensive, closed-loop analyses were performed both to support this procurement and to finalize the overall design of the SHe cooling circuit. The SHe system worked reliably, permitting flexible operation of the TFMC test facility at all working conditions.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Wed, 01 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158549</guid>
<dc:date>2023-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Cryogen-Free 25-T REBCO Magnet With the Extreme-No-Insulation Winding Technique</title>
<link>https://hdl.handle.net/1721.1/158548</link>
<description>A Cryogen-Free 25-T REBCO Magnet With the Extreme-No-Insulation Winding Technique
Park, Dongkeun; Lee, Wooseung; Bascuñán, Juan; Kim, Ho Min; Iwasa, Yukikazu
We present the operation result of a cryogen-free 23.5-T/ϕ12.5-mm-cold-bore magnet prototype composed of a stack of 12 no-insulation (NI) REBCO single pancake coils—ten middle coils of 6-mm-wide and two end coils of 8-mm-wide tape—forming 6 double pancake (DP) coils with inner joints. Each coil was wound with tape having only a 1-μm-thick copper layer on each side to overcome the conductor thickness uniformity issue and enhance the mechanical strength within the winding, and then additional electrical shunting by thin layers of solder was applied on the top and bottom surfaces of each DP coil for effective cooling and quench protection—called the extreme-NI winding technique. With this small prototype magnet towards a benchtop 1-GHz NMR, we validate our coil design, including conductor performance, screening-current-induced field and stresses, and conduction-cooling cryogenics. Included in the paper are: 1) conductor issues and our counterproposal in winding; 2) screening-current reduction method; 3) design and manufacture summary of the magnet; and 4) operating test results of the magnet up to 25 T.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158548</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Overview of the MIT 1.3-GHz LTS/HTS NMR Magnet with a New REBCO Insert</title>
<link>https://hdl.handle.net/1721.1/158547</link>
<description>Design Overview of the MIT 1.3-GHz LTS/HTS NMR Magnet with a New REBCO Insert
Park, Dongkeun; Bascunan, Juan; Li, Yi; Lee, Wooseung; Choi, Yoonhyuck; Iwasa, Yukikazu
We present a design overview of the MIT 1.3-GHz LTS/HTS NMR magnet (1.3G) with a newly designed 835-MHz REBCO insert (H835) as a replacement for the 800-MHz REBCO insert (H800) that was damaged when it quenched during operation in 2018. The new H835 is designed to contribute 19.6 T in a background field of 10.93 T provided by an LTS NMR magnet normally rated at 11.74 T (500 MHz): combined, 1.3G generates a total field of 30.53 T corresponding to a proton resonance frequency of 1.3 GHz. H835 is designed to operate stably while meeting 1.3G design constraints. We have also designed H835 to protect it from permanent damage in an improbable event such as a quench. Key design features are: 1) a single-coil formation, composed of 38 stacked metal-co-wound no-insulation and 2 stacked no-insulation double-pancake coils, all with mechanically improved cross-over sections; 2) enhanced thermal stability; and 3) reduced current margin with a detect-and-heat method. This paper includes: 1) electromagnetic and mechanical design of H835; 2) cryogenics overview; 3) quench protection strategy; and 4) discussion on the next steps to successfully complete 1.3G.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158547</guid>
<dc:date>2021-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>3D xRAGE simulation of inertial confinement fusion implosion with imposed mode 2 laser drive asymmetry</title>
<link>https://hdl.handle.net/1721.1/158546</link>
<description>3D xRAGE simulation of inertial confinement fusion implosion with imposed mode 2 laser drive asymmetry
Gatu Johnson, Maria; Haines, B.M.; Adrian, Patrick J.; Forrest, C.; Frenje, Johan A.; Glebov, V.Yu.; Grimble, W.; Janezic, R.; Knauer, J.P.; Lahmann, Brandon; Marshall, F.J.; Michel, T.; Séguin, Frederick H.; Stoeckl, C.; Petrasso, Richard D.
Low-mode asymmetries represent an important obstacle to achieving high-gain inertial confinement fusion implosions. As a step in learning how to control such effects, an OMEGA experiment with imposed mode 2 laser drive asymmetries was done to study the expected signatures of this type of asymmetry [M. Gatu Johnson et al., PRE 2018]. In the present work, a 3D xRAGE simulation including the stalk mount has been brought to bear on the data from that experiment. Comprehensive comparisons between simulated and measured observables are made. Good agreement between simulated and measured x-ray image-inferred shell trajectories, bang times, and neutron emission widths is seen, showing that the hydrodynamics are well captured in the simulation. Asymmetries seen in simulated and measured time-resolved and time-integrated x-ray images and areal densities also compare well, showing the impact of both the stalk and mode 2. On the other hand, important differences between measured and simulated neutron emission histories, yield, and ion temperature (Tion) asymmetries are seen, suggesting that the simulation is overestimating shock yield. The results clearly demonstrate the importance of considering all asymmetry sources when interpreting measured signatures of asymmetry.
Submitted for publication in High Energy Density Physics
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158546</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Progress Towards Interpretable Machine Learning-based Disruption Predictors Across Tokamaks</title>
<link>https://hdl.handle.net/1721.1/158545</link>
<description>Progress Towards Interpretable Machine Learning-based Disruption Predictors Across Tokamaks
Rea, Christina; Mones, K.J.; Pau, A.; Granetz, R.S.; Sauter, O.
In this paper we lay the groundwork for a robust cross-device comparison of data-driven disruption prediction algorithms on the DIII-D and JET tokamaks. In order to consistently carry out a comparative analysis, we define physics-based indicators of disruption precursors based on temperature, density, and radiation profiles that are currently missing for DIII-D data. These profile-based indicators are shown to describe well the impurity accumulation events in both DIII-D and JET discharges that eventually disrupt. Thanks to the univariate analysis on the features used in such data-driven applications on both tokamaks, we are able to statistically highlight differences in the dominant disruption precursors: JET with its ITER-like wall is more prone to impurity accumulation events, while DIII-D is more subject to edge cooling mechanisms that destabilize dangerous MHD modes. Even though the analyzed datasets are characterized by such intrinsic differences, we show how data-driven algorithms trained on one device can be used to predict and interpret disruptive scenarios on the other. As long as the destabilizing precursors are diagnosed in a device-independent way, the knowledge that data-driven algorithms learn on one device can be used to explain a disruptive behavior on another device.
Submitted for publication in Fusion Science and Technology
</description>
<pubDate>Sun, 01 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158545</guid>
<dc:date>2019-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Drift kinetic theory of alpha transport by tokamak perturbations</title>
<link>https://hdl.handle.net/1721.1/158544</link>
<description>Drift kinetic theory of alpha transport by tokamak perturbations
Tolman, Elizabeth A.; Catto, Peter J.
Upcoming tokamak experiments fueled with deuterium and tritium are expected to have large alpha particle populations. Such experiments motivate new attention to the theory of alpha particle confinement and transport. A key topic is the interaction of alpha particles with perturbations to the tokamak fields, including those from magnetohydrodynamic modes like Alfvén eigenmodes and from ripple. These perturbations can transport alphas, leading to changed localization of alpha heating, loss of alpha power, and damage to device walls. Alpha interaction with these perturbations is often studied with single particle theory. In contrast, we derive a drift kinetic theory to calculate the alpha heat flux resulting from arbitrary perturbation frequency and periodicity (provided the frequency and periodicity can be studied drift kinetically). Novel features of the theory include the retention of a large effective collision frequency resulting from the resonant alpha collisional boundary layer, correlated interactions over many poloidal transits, and finite orbit effects. Heat fluxes are considered for the example cases of ripple and the toroidal Alfvén eigenmode (TAE). The ripple heat flux is small. The TAE heat flux is significant and scales with the square of the perturbation amplitude, allowing the derivation of a constraint on mode amplitude for avoidance of significant alpha depletion. A simple saturation condition suggests that TAEs in one upcoming experiment will not cause significant alpha transport via the mechanisms in this theory. However, saturation above the level suggested by the simple condition, but within numerical and experimental experience, could cause significant transport.
Submitted for publication in Journal of Plasma Physics
</description>
<pubDate>Tue, 01 Sep 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158544</guid>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>An ultrahigh-bandwidth Phase Contrast Imaging system for fusion plasmas</title>
<link>https://hdl.handle.net/1721.1/158543</link>
<description>An ultrahigh-bandwidth Phase Contrast Imaging system for fusion plasmas
Marinoni, Alessandro; Rost, Jon C.; Porkolab, Miklos; Seraydarian, R.
A novel Phase Contrast Imaging system that uses probing light in the near-infrared region has been developed to image electron density fluctuations in fusion plasmas. As compared to standard systems operating in the mid-infrared region, the spectral response of the system is extended in wave-number and frequency response by 7 and 100 times, respectively. The internal structure of turbulence and radio-frequency waves is therefore accessible across an unprecedented wavelength and frequency range, extending into the electron gyro-radius scale and the GHz frequency region.
Submitted for publication in Journal of Instrumentation
</description>
<pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158543</guid>
<dc:date>2021-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Partial-Insulation HTS Magnet for Reduction of Quench-Induced Peak Currents</title>
<link>https://hdl.handle.net/1721.1/158542</link>
<description>Partial-Insulation HTS Magnet for Reduction of Quench-Induced Peak Currents
Lee, Wooseung; Park, Dongkeun; Bascuñán, Juan; Iwasa, Yukikazu
The no-insulation (NI) coil’s turn-to-turn current paths prevent local heating by forcing the current to bypass into nearby turns when a hot spot appears in a coil. However, the changing direction of the bypassing current will change the magnetic flux, which generates unwanted induced currents in the adjacent coils of a multiply-stacked HTS magnet. This induced current can temporarily exceed the designed maximum currents in the NI coils, damaging the magnet. A partial-insulation (PI) coil, in which one or more insulation layers of a polyimide-like material or a thin ceramic film are inserted between windings to hinder the current paths, can reduce the peak induced currents in the NI HTS coil’s current paths. In this paper, we present the results of a simulation study on the peak induced current upon a quench of the PI HTS magnet with a double pancake. The study shows that the peak induced current varies with the number of insulated turns. We also discuss the turn-by-turn simulation of the induced current. According to the simulation results, the PI effectively reduces the overall induced current, especially when insulation is applied every two turns.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158542</guid>
<dc:date>2021-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Design Overview of the MIT 1.3-GHz LTS/HTS NMR Magnet with a New REBCO Insert</title>
<link>https://hdl.handle.net/1721.1/158541</link>
<description>Design Overview of the MIT 1.3-GHz LTS/HTS NMR Magnet with a New REBCO Insert
Park, Dongkeun; Bascuñán, Juan; Li, Yi; Lee, Wooseung; Choi, Yoonhyuck; Iwasa, Yukikazu
We present a design overview of the MIT 1.3-GHz LTS/HTS NMR magnet (1.3G) with a newly designed 835-MHz REBCO insert (H835) as a replacement for the 800-MHz REBCO insert that was damaged when it quenched during operation in 2018. The new H835 contributes 19.6 T as designed, with an LTS background magnet of 10.9 T, toward a total field of 30.5 T that corresponds to a proton resonance frequency of 1.3 GHz. The H835 is designed to be stable within 1.3G design constraints. The design also protects the entire insert from permanent damage in an improbable event such as a quench. Key design features are: 1) a single-solenoid structure, composed of 38 stacked metal-co-wound no-insulation and 4 stacked no-insulation double-pancake coils with mechanically improved cross-over sections; 2) enhanced thermal stability; and 3) reduced excessive current margin with a detect-and-activate-the-heater method. This paper includes: 1) electromagnetic and mechanical design of the H835; 2) cryogenics overview; 3) quench protection schemes; and 4) discussion on the next steps toward the 1.3G.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158541</guid>
<dc:date>2021-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterization of MEMS Acoustic Sensors and Amplifiers in Cryogenic Fluids for Quench Detection Applications in HTS CICC</title>
<link>https://hdl.handle.net/1721.1/158540</link>
<description>Characterization of MEMS Acoustic Sensors and Amplifiers in Cryogenic Fluids for Quench Detection Applications in HTS CICC
Zhao, Z.; Moore, P.; Owen, C.; Anilus, M.; Chau, S.; Desai, A.; Emerling, M.; Chiesa, L.; Takayasu, Makoto; White, R.
An acoustic quench detection method utilizing MEMS (Micro Electro-Mechanical System) acoustic sensors is proposed. To investigate this method, a commercially available MEMS piezoelectric microphone, the Vesper VM1000, and two types of second stage amplifiers, using either an OPA344 or a LMH6629 based amplifier circuit, were characterized at cryogenic temperatures in helium gas. The MEMS microphones were in their original package with an integrated preamplifier. The tests were performed inside a two-stage Gifford-McMahon cryocooler from room temperature down to 60 K, at static pressures between 1.2 and 1.4 bar in gaseous helium, over the frequency band from 100 Hz to 10 kHz. Second stage amplifiers were needed to achieve signal to noise ratios approaching the manufacturer specified operating levels. The OPA344 based amplifier reduced in gain by &gt;55 dB below 230 K, while the LMH6629 based amplifier performed well down to 60 K. The MEMS microphones appear to perform acoustic measurements down to 165 K but with some reduction in sensitivity down to 60 K. An acoustic model of the cryocooler plane wave tube calibration setup is developed and used to calibrate the microphone despite the presence of significant thermal gradients down the plane wave tube.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158540</guid>
<dc:date>2020-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches</title>
<link>https://hdl.handle.net/1721.1/158539</link>
<description>Hot-Spot Modeling of REBCO NI Pancake Coil: Analytical and Experimental Approaches
Lee, Wooseung; Park, Dongkeun; Choi, Yoonhyuck; Li, Yi; Bascuñán, Juan; Iwasa, Yukikazu
The no-insulation (NI) winding provides intrinsic bypassing current paths that enable self-protection from overheating. The self-protection of the NI coil is one of the most promising protection techniques for high-field high-temperature superconductor (HTS) magnet applications. Since the additional paths are valid for an HTS magnet with a thinner matrix, the self-protection mechanism is applicable even for a higher current density magnet with reduced matrix thickness inside the HTS tape. However, reducing the matrix can cause damage to the magnet by producing excessive heat during a quench. This research introduces a new modeling method to investigate the hot-spot characteristics in the REBCO NI pancake coil. The model is also validated with a sample NI HTS coil experiment result. The radial-direction Normal Zone Propagation (NZP) velocity of the sample coil is estimated based on the suggested model. The calculated radial-direction NZP velocity is applied to calculate the center field drop of the NI HTS coil, and the result is well-matched with the experiment result. We also introduce one example of the model's applications: the maximum current density that will not exceed a given reference temperature in the adiabatic cooling condition is estimated using the model.
Submitted for publication in IEEE Transactions on Applied Superconductivity
</description>
<pubDate>Sun, 01 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158539</guid>
<dc:date>2020-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>A more flexible design for MDSplus Device drivers</title>
<link>https://hdl.handle.net/1721.1/158538</link>
<description>A more flexible design for MDSplus Device drivers
Santoro, Fernando; Lane-Walsh, Stephen; Stiller, Joshua; Winkel, Mark
The traditional approach to building MDSplus Device drivers is rigid and lacks the ability to meet changing needs. We introduce a novel paradigm for Device driver development that allows the tree structure to change dynamically. This enables device drivers that can reconfigure to automatically reflect the hardware they represent, or a device that implements a variable number of queries to an external database. We have created a driver using this paradigm that communicates with a digitizer, queries the modules attached, and builds an MDSplus tree structure to utilize them. Additionally, this driver can reconfigure to match changes in the digitizer, by adding or deleting nodes using overwrite and/or delete modes. We also wrote a method for verifying both the settings provided and that the hardware matches the last known state. We have added fields to help validate settings, such as min/max limits and a list of allowed values. The definitions of the nodes which make up the device have been augmented to include help, tool tips, and validation ranges. This will facilitate automated user interface generation.
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158538</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Introduction to MDSplus using Docker</title>
<link>https://hdl.handle.net/1721.1/158537</link>
<description>Introduction to MDSplus using Docker
Lane-Walsh, Stephen; Stillerman, J.; Santoro, Fernando; Fredian, T.
With increased use of MDSplus comes an influx of new users. With them comes a need for more and better ways to learn the suite of tools that is MDSplus. The MDSplus team continually evaluates new technologies to improve our software and user experience. To this end, we investigated Docker to determine if and how it could help new users understand MDSplus, make MDSplus easier to install, and allow us to easily test old/new versions. To achieve this, a set of Docker images and instructions have been developed. This paper will provide an overview of MDSplus, and detail the methods to create and use the Docker images. Additionally, we will explore the limitations of such an approach, and the recommended applications. The project where these Docker images were built, along with the demo, is here: https://github.com/WhoBrokeTheBuild/DockerizedMDSplus and https://hub.docker.com/r/whobrokethebuild/mdsplus. The now-official Docker images are available here: https://github.com/MDSplus/Docker and https://hub.docker.com/r/mdsplus/mdsplus
Submitted for publication in Fusion Engineering and Design
</description>
<pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158537</guid>
<dc:date>2020-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fusion Plasma Turbulence Research Beyond the Burning Plasma Era: Perspectives on Transport Model Validation in Fusion and Fission</title>
<link>https://hdl.handle.net/1721.1/158536</link>
<description>Fusion Plasma Turbulence Research Beyond the Burning Plasma Era: Perspectives on Transport Model Validation in Fusion and Fission
White, Anne E.; Baglietto, E.; Bucci, M.; Howard, Nathan T.; Rodriguez Fernandez, Pablo
In fusion, the validation of turbulent transport models is undertaken with the goals of making basic physics discoveries as well as for development of new predictive models to improve the operation and enhance the performance of existing and future fusion reactors. A fusion industry is just beginning to emerge globally. Validation is a vibrant research area in both fusion and fission energy research, but unlike fusion, fission has an established industry. The fission power industry motivates validation efforts, often performed at universities with small-scale experiments and advanced models and simulations developed in-house. Because fission research spans basic physics and applications, and addresses near-term and long-term industry interests, validation is thriving. This perspective article describes the validation of turbulent transport models in both fusion research and fission research, draws parallels between the validation methods and techniques used in the two fields, and presents an outlook for thriving university fusion and fission research programs underpinned by a virtuous cycle of basic and applied research that supports industry needs as well as tackling intellectual grand challenges.
Submitted for publication in Frontiers in Nuclear Engineering
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158536</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the future sustainable ultra-high-speed maglev: An energy-economical superconducting linear thrusting system</title>
<link>https://hdl.handle.net/1721.1/158535</link>
<description>On the future sustainable ultra-high-speed maglev: An energy-economical superconducting linear thrusting system
Dong, Fangliang; Hao, Luning; Park, Dongkeun; Iwasa, Yukikazu; Huang, Zhen
Along with 1000-km/h magnetically levitated trains (maglevs), an era of future traveling is approaching. With only ∼1/5 the energy consumption per passenger kilometer while achieving a similar speed compared to airplanes, ultra-high-speed maglevs would change the way the world moves with an on-demand, sustainable mass transportation system that connects cities in minutes. Meanwhile, with ever-advancing superconducting technology, the zero-joule-loss magnet for high-density energy preservation is much improved, with strong magnetic fields. This consequently enables the energy-efficient but powerful superconducting linear thrusting system - the key part that drives the maglevs to speed - in an even more energy-friendly way. Here, we take advantage of superconductors and present successful solutions to two energy bottlenecks regarding energy preservation and conversion unique to this novel thrusting system, namely 1) the on-board feeding power constraint and 2) field-ripple-caused loss, by demonstrating a prototype with two merits: 1) its on-board superconducting propulsive magnet can operate as a standalone system free of any on-board feeding power for maintaining energization and cryogenic cooling; 2) the ground propulsive structure can greatly suppress thermal loss during operation. We hope the work can solve energy issues in the future maglev, and promote the process of transport electrification and decarbonization.
Submitted for publication in Energy Conversion and Management
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158535</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>A new look at the environmental conditions favorable to secondary ice production</title>
<link>https://hdl.handle.net/1721.1/158534</link>
<description>A new look at the environmental conditions favorable to secondary ice production
Korolev, Alexei; Heckman, Ivan; Wolde, Mengistu; Ackerman, Andrew S; Fridlind, Ann M; Ladino, Luis A; Lawson, R Paul; Milbrandt, Jason; Williams, Earle
This study attempts a new identification of mechanisms of secondary ice production (SIP) based on the observation of small faceted ice crystals (hexagonal plates or columns) with typical sizes smaller than 100 µm. Due to their young age, such small ice crystals can be used as tracers for identifying the conditions for SIP. Observations reported here were conducted in oceanic tropical mesoscale convective systems (MCSs) and midlatitude frontal clouds in the temperature range from 0 to −15 ∘C and heavily seeded by aged ice particles. It was found that in both MCSs and frontal clouds, SIP was observed right above the melting layer and extended to higher altitudes with colder temperatures. The roles of six possible mechanisms to generate the SIP particles are assessed using additional observations. In most observed SIP cases, small secondary ice particles spatially correlated with liquid-phase, vertical updrafts and aged rimed ice particles. However, in many cases, neither graupel nor liquid drops were observed in the SIP regions, and therefore, the conditions for an active Hallett–Mossop process were not met. In many cases, large concentrations of small pristine ice particles were observed right above the melting layer, starting at temperatures as warm as −0.5 ∘C. It is proposed that the initiation of SIP above the melting layer is stimulated by the recirculation of large liquid drops through the melting layer with convective turbulent updrafts. After re-entering a supercooled environment above the melting layer, they impact with aged ice, freeze, and shatter. The size of the splinters generated during SIP was estimated as 10 µm or less. A principal conclusion of this work is that only the freezing-drop-shattering mechanism could be clearly supported by the airborne in situ observations.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158534</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-Connectivity Triazolate-Based Metal–Organic Framework for Water Harvesting</title>
<link>https://hdl.handle.net/1721.1/158533</link>
<description>High-Connectivity Triazolate-Based Metal–Organic Framework for Water Harvesting
Ravin, Karla; Sarver, Patrick; Dinakar, Bhavish; Palatinus, Lukáš; Müller, Peter; Oppenheim, Julius; Dincă, Mircea
Increasing the connectivity of structural units presents a potentially valuable approach to improve hydrolytic stability in metal–organic frameworks (MOFs). We herein leverage this strategy by synthesizing the first tritopic benzotriazolate MOF, Zn5(OAc)4(TBTT)2 (H3TBTT = 2,4,6-tris(1H-benzo[d][1,2,3]triazol-5-yl)-1,3,5-triazine), which exhibits open metal sites, high connectivity, high porosity, and significant water uptake capacity. The MOF adopts a previously unknown topology with (3,6,6)-connectivity, which is supported by single-crystal electron diffraction and elemental analysis. The framework undergoes postsynthetic metal and anion exchange with NiCl2, which increases the accessible pore volume and the net hydrophilicity of the framework. With this exchange, the apparent BET surface area increases from 1994 to 3034 m2/g, and the water uptake step shifts from 56 to 33% relative humidity (RH). The high gravimetric capacity of the Ni-rich MOF, 0.98 g/g, translates to a working capacity of 0.64 g/g during a pressure swing cycle between 20 and 40% RH at 25 °C. Combining this performance with a less than 2% loss in working capacity over 100 cycles, the new material rivals the best MOF water sorbents to date.
</description>
<pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158533</guid>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</item>
<item>
<title>Uncertainty Analysis of Climate Change and Policy Response</title>
<link>https://hdl.handle.net/1721.1/158532</link>
<description>Uncertainty Analysis of Climate Change and Policy Response
Webster, Mort; Forest, Chris; Reilly, John; Babiker, Mustafa; Kicklighter, David; Mayer, Monika; Prinn, Ronald; Sarofim, Marcus; Sokolov, Andrei; Stone, Peter; Wang, Chien
To aid climate policy decisions, accurate quantitative descriptions of the uncertainty in climate outcomes under various possible policies are needed. Here, we apply an earth systems model to describe the uncertainty in climate projections under two different policy scenarios. This study illustrates an internally consistent uncertainty analysis of one climate assessment modeling framework, propagating uncertainties in both economic and climate components, and constraining climate parameter uncertainties based on observation. We find that in the absence of greenhouse gas emissions restrictions, there is a one in forty chance that global mean surface temperature change will exceed 4.9°C by the year 2100. A policy case with aggressive emissions reductions over time lowers the temperature change to a one in forty chance of exceeding 3.2°C, thus reducing but not eliminating the chance of substantial warming.
</description>
<pubDate>Mon, 01 Dec 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158532</guid>
<dc:date>2003-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Source sector and region contributions to concentration and direct radiative forcing of black carbon in China</title>
<link>https://hdl.handle.net/1721.1/158531</link>
<description>Source sector and region contributions to concentration and direct radiative forcing of black carbon in China
Li, Ke; Liao, Hong; Mao, Yuhao; Ridley, David A
We quantify the contributions from five domestic emission sectors (residential, industry, transportation, energy, and biomass burning) and emissions outside of China (non-China) to the concentration and direct radiative forcing (DRF) of black carbon (BC) in China for the year 2010, using a nested-grid version of the global chemical transport model GEOS-Chem coupled with a radiative transfer model. The Hemispheric Transport of Air Pollution (HTAP) anthropogenic emissions of BC for the year 2010 are used in this study. Simulated surface-layer BC concentrations in China have strong seasonal variations, exceeding 9 μg m-3 in winter and reaching about 1-5 μg m-3 in summer in the North China Plain and the Sichuan Basin. The residential sector is simulated to have the largest contribution to surface BC concentrations, at 5-7 μg m-3 in winter and 1-3 μg m-3 in summer, reflecting the large emissions from winter heating and the enhanced wet deposition during the summer monsoon. The contribution from the industry sector is the second largest and shows relatively small seasonal variations; emissions from this sector contribute 1-3 μg m-3 to BC concentrations in the North China Plain and the Sichuan Basin. The contribution from the transportation sector is the third largest, followed by those from the biomass burning and energy sectors. The non-China emissions mainly influence the surface-layer concentrations of BC in western China; about 70% of the surface-layer BC concentration in the Tibet Plateau is attributed to transboundary transport. Averaged over all of China, the all-sky DRF of BC at the top of the atmosphere (TOA) is simulated to be 1.22 W m-2. Sensitivity simulations show that the TOA BC direct radiative forcings from the five domestic emission sectors (residential, industry, energy, transportation, and biomass burning) and from non-China emissions are 0.44, 0.27, 0.01, 0.12, 0.04, and 0.30 W m-2, respectively.
The domestic and non-China emissions contribute 75% and 25% to BC DRF in China, respectively. These results have important implications for taking measures to reduce BC emissions to mitigate near-term climate warming and to improve air quality in China.
</description>
<pubDate>Fri, 01 Jan 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158531</guid>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Renewable energy policy design and framing influence public support in the United States</title>
<link>https://hdl.handle.net/1721.1/158530</link>
<description>Renewable energy policy design and framing influence public support in the United States
Stokes, Leah C.; Warshaw, Christopher
The United States has often led the world in supporting renewable energy technologies at both the state and federal level. However, since 2011 several states have weakened their renewable energy policies. Public opinion will probably be crucial for determining whether states expand or contract their renewable energy policies in the future. Here we show that a majority of the public in most states supports renewable portfolio standards, which require a portion of the electricity mix to come from renewables. However, policy design and framing can strongly influence public support. Using a survey experiment, we show that effects of renewable portfolio standards bills on residential electricity costs, jobs and pollution, as well as bipartisan elite support, are all important drivers of public support. In many states, these bills’ design and framing can push public opinion above or below majority support.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158530</guid>
</item>
<item>
<title>Orbit Plane Rotation Using Aerocapture</title>
<link>https://hdl.handle.net/1721.1/158529</link>
<description>Orbit Plane Rotation Using Aerocapture
Gochenaur, Daniel C.; Jones, Michael P.; Norheim, Johannes J.; de Weck, Olivier L.
This study investigates the feasibility of performing orbit plane rotations during aerocapture maneuvers. Three-degrees-of-freedom bounding trajectories at Mars are propagated for a range of vehicle lift-to-drag ratios L/D and hyperbolic arrival velocities v∞. The results show that the maximum achievable plane rotation increases with vehicle L/D and v∞. When arriving with a v∞ of 6 km/s, vehicles with L/D of 0.25 and 1.0 can achieve plane rotations of up to 11.6 and 45.3 deg, respectively. Heat rate, heat load, and g-loading constraints identified when rotating the orbital plane are not more severe than those observed for two-dimensional aerocapture at a given L/D and v∞. A direct tradeoff exists between the maximum plane rotation and entry corridor width that will affect the ability of lower-L/D vehicles to achieve large plane rotations. The proposed maneuver can allow the captured orbit inclination and right ascension of the ascending node to be altered in ways that are not possible using typical interplanetary orbit targeting methods. Further, the maneuver offers the possibility of deploying multiple satellites to different orbits around a target destination using a single launch or approach path.
</description>
<pubDate>Thu, 13 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158529</guid>
<dc:date>2025-02-13T00:00:00Z</dc:date>
</item>
<item>
<title>Development of a two-hit lethal liver injury model in swine</title>
<link>https://hdl.handle.net/1721.1/158528</link>
<description>Development of a two-hit lethal liver injury model in swine
Lagazzi, Emanuele; Wei, Helen S.; Panossian, Vahe S.; Pallotta, Jessica B.; Calisir, Anet; Rafaqat, Wardah; Abiad, May; Nzenwa, Ikemsinachi C.; King, David R.; Hong, Celestine; Hammond, Paula; Olsen, Bradley; Duggan, Michael J.; Velmahos, George C.
Purpose Noncompressible truncal hemorrhage remains a leading cause of preventable death in the prehospital setting. Standardized and reproducible large animal models are essential to test new therapeutic strategies. However, existing injury models vary significantly in consistency and clinical accuracy. This study aims to develop a lethal porcine model to test hemostatic agents targeting noncompressible abdominal hemorrhages. Methods We developed a two-hit injury model in Yorkshire swine, consisting of a grade IV liver injury combined with hemodilution. The hemodilution was induced by controlled exsanguination of 30% of the total blood volume and a 3:1 resuscitation with crystalloids. Subsequently, a grade IV liver injury was performed by sharp transection of both median lobes of the liver, resulting in major bleeding and severe hypotension. The abdominal incision was closed within 60 s of the injury. The endpoints included mortality, survival time, serum lab values, and blood loss within the abdomen. Results This model was lethal in all animals (5/5), with a mean survival time of 24.4 ± 3.8 min. The standardized liver resection was uniform at 14.4 ± 2.1% of the total liver weight. Following the injury, the mean arterial pressure (MAP) dropped by 27 ± 8 mmHg within the first 10 min. The use of a mixed injury model (i.e., open injury, closed hemorrhage) was instrumental in creating a standardized injury while allowing for a clinically significant hemorrhage. Conclusion This novel, highly lethal, consistent, and clinically relevant translational model can be used to test and develop life-saving interventions for massive noncompressible abdominal hemorrhage.
</description>
<pubDate>Thu, 23 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158528</guid>
<dc:date>2024-05-23T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the reliability of a microperimetry-based method for assessing visual function in the junctional zone of geographic atrophy lesions</title>
<link>https://hdl.handle.net/1721.1/158526</link>
<description>Evaluating the reliability of a microperimetry-based method for assessing visual function in the junctional zone of geographic atrophy lesions
Alibhai, A. Y.; Moult, Eric M.; Jamil, Muhammad U.; Raza, Khadija; Morales, Marco U.; Ribeiro, Ramiro; Baumal, Caroline R.; Fujimoto, James G.; Waheed, Nadia K.
Purpose To assess the repeatability of a microperimetry methodology for quantifying visual function changes in the junctional zone of eyes with geographic atrophy (GA) in the clinical trial context. Methods A post hoc analysis of the OAKS phase III trial was conducted, which enrolled patients with GA secondary to age-related macular degeneration. Microperimetry using a standard 10-2 fovea-centered grid was performed at baseline and follow-up visits. GA regions were traced on fundus autofluorescence (FAF) images. Two graders independently registered baseline microperimetry images with baseline FAF images in a sampling of 30 eyes from the OAKS study. Agreement between the two graders’ assessments of mean sensitivity and the number of scotomatous points within a ±250 µm GA junctional zone was assessed. Results The intraclass correlation (ICC) and coefficient of repeatability (CoR) for the mean junctional zone sensitivity were 0.987 and 0.214 dB, respectively. The ICC and CoR for the total number of scotomatous points within the junctional zone were 0.991 and 1.42, respectively. Conclusions The repeatability of the methodology and its compatibility with standard microperimetry acquisitions appear to make it well suited for identifying and analyzing retinal sensitivity within high-risk areas of the retina. Summary We present a microperimetry-based methodology for assessing visual function changes in the junctional zone of geographic atrophy lesions using a standard 10-2 fovea-centered grid in a clinical trial context. The approach’s repeatability and compatibility with standard microperimetry grids may make it useful for assessing the effects of GA therapeutics.
</description>
<pubDate>Tue, 07 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158526</guid>
<dc:date>2025-01-07T00:00:00Z</dc:date>
</item>
<item>
<title>A Portable High-Resolution Snapshot Multispectral Imaging Device Leveraging Spatial and Spectral Features for Non-Invasive Corn Nitrogen Treatment Classification</title>
<link>https://hdl.handle.net/1721.1/158525</link>
<description>A Portable High-Resolution Snapshot Multispectral Imaging Device Leveraging Spatial and Spectral Features for Non-Invasive Corn Nitrogen Treatment Classification
Li, Xuan; Niu, Zhongzhong; Morales-Ona, Ana Gabriela; Chen, Ziling; Zhao, Tianzhang; Quinn, Daniel J.; Jin, Jian
Spectral imaging has been widely applied in plant phenotyping to assess corn leaf nitrogen status. Recent studies indicate that spatial variations within a single leaf’s multispectral image provide stronger signals for corn nitrogen estimation. However, current technologies for corn multispectral imaging cannot capture a large corn leaf segment at high resolution with simple operation, limiting their efficiency and accuracy in nitrogen estimation. To address this gap, this study developed a proximal multispectral imaging device that can capture high-resolution snapshot multispectral images of a large segment of a single corn leaf. This device uses airflow to autonomously position and flatten the leaf, minimizing the noise in images due to leaf curvature and simplifying operation. Moreover, the device adopts a transmittance imaging regime, clamping the corn leaf between the camera and the lighting source to block environmental light and supply uniform illumination, capturing high-resolution and high-precision leaf images within six seconds. A field assay was conducted to validate the effectiveness of the multispectral images captured by this device in assessing nitrogen status by classifying the nitrogen treatments applied to corn. Six nitrogen treatments were applied to 12 plots of corn fields, and 10 images were collected at each plot. Using the average vegetative index of the whole image, only one treatment was significantly different from the other five treatments, and no significant difference was observed among the other groups. However, by extracting the spatial and spectral features from the images and combining these features, the accuracy of nitrogen treatment classification improved compared to using the average index. In another analysis, applying spatial–spectral analysis methods to the images also improved the nitrogen treatment classification accuracy compared to using the average index. 
These results demonstrated the advantages of this high-resolution and high-throughput imaging device for distinguishing nitrogen treatments by facilitating spatial–spectral combined analysis for more precise classification.
</description>
<pubDate>Fri, 21 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158525</guid>
<dc:date>2025-02-21T00:00:00Z</dc:date>
</item>
<item>
<title>Utilization of Classification Learning Algorithms for Upper-Body Non-Cyclic Motion Prediction</title>
<link>https://hdl.handle.net/1721.1/158524</link>
<description>Utilization of Classification Learning Algorithms for Upper-Body Non-Cyclic Motion Prediction
Koo, Bon H.; Siu, Ho Chit; Newman, Dava J.; Roche, Ellen T.; Petersen, Lonnie G.
This study explores two methods of predicting non-cyclic upper-body motions using classification algorithms. Exoskeletons currently face challenges with low fluency, hypothesized to be caused in part by the lag in active control inherent in many leader–follower paradigms seen in today’s systems, leading to energetic inefficiencies and discomfort. To address this, we employ k-nearest neighbor (KNN) and deep learning models to predict motion characteristics, such as magnitude and category, from surface electromyography (sEMG) signals. Data were collected from six muscles located around the elbow. The sEMG signals were processed to identify significant activation changes. Two classification approaches were utilized: a KNN algorithm that categorizes motion based on the slopes of processed sEMG signals at change points and a deep neural network employing continuous categorization. Both methods demonstrated the capability to predict future voluntary non-cyclic motions up to and beyond commonly acknowledged electromechanical delay times, with the deep learning model able to predict motion characteristics, with certainty at or beyond 90%, even prior to myoelectric activation of the muscles involved. Our findings indicate that these classification algorithms can be used to predict upper-body non-cyclic motions to potentially increase machine interfacing fluency. Further exploration into regression-based prediction models could enhance the precision of these predictions, and further work could explore their effects on fluency when utilized in a tandem or wearable robotic application.
</description>
<pubDate>Thu, 20 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158524</guid>
<dc:date>2025-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>Content Analysis of E-Participation Platforms in Taiwan with Topic Modeling: How to Train and Evaluate Neural Topic Models?</title>
<link>https://hdl.handle.net/1721.1/158523</link>
<description>Content Analysis of E-Participation Platforms in Taiwan with Topic Modeling: How to Train and Evaluate Neural Topic Models?
Sontheimer, Moritz; Fahlbusch, Jonas; Chou, Shuo-Yan; Kuo, Yu-Lin
E-participation platforms, such as iVoting and Join in Taiwan, provide digital spaces for citizens to engage in deliberation, voting, and oversight. As a forerunner in Asia, Taiwan has implemented these platforms to enhance participatory democracy. However, there is still limited research on the specific content debated on these platforms. Utilizing recent advancements in Natural Language Processing, we explore the content of proposals that users submitted between 2015 and 2025. In this study, we propose a pipeline for mining text corpora scraped from these platforms in the context of political analysis. The pipeline is applied to two datasets with different characteristics. A topic model for each of the two platforms is generated and then evaluated with OCTIS (Optimizing and Comparing Topic Models Is Simple) and compared to different baselines. Our research highlights the trade-offs between model performance and processing time, emphasizing the balance between accuracy and meaningful topic creation. By integrating a translation pipeline from Chinese to English within the text-mining process, our method also demonstrates a solid approach to overcoming language barriers. Consequently, our method is adaptable to e-participation platforms in various languages, providing decision-makers with a more comprehensive tool to understand citizens’ needs and enabling the formulation of more informed and effective policies.
</description>
<pubDate>Thu, 20 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158523</guid>
<dc:date>2025-02-20T00:00:00Z</dc:date>
</item>
<item>
<title>The impact of climate change policy on the risk of water stress in southern and eastern Asia</title>
<link>https://hdl.handle.net/1721.1/158522</link>
<description>The impact of climate change policy on the risk of water stress in southern and eastern Asia
Gao, Xiang; Schlosser, C Adam; Fant, Charles; Strzepek, Kenneth
The adequacy of freshwater resources remains a critical challenge for a sustainable and growing society. We present a self-consistent risk-based assessment of water availability and use under future climate change and socioeconomic growth by midcentury across southern and eastern Asia (SEA). We employ large ensemble scenarios from an integrated modeling framework that are consistent across the spectrum of regional climate, population, and economic projections. We find socioeconomic growth contributes to an increase in water stress across the entire ensemble. However, climate change drives the ensemble central tendency toward an increase in water stress in China but a reduction in India, with a considerable spread across the ensemble. Nevertheless, the most deleterious unabated climate-change impact is a low-probability but salient extreme increase in water stress over China and India. In these outcomes, annual withdrawals will routinely exceed water-storage capacity. A modest greenhouse gas mitigation pathway eliminates the likelihood of these extreme outcomes and also benefits hundreds of millions of people at risk of various levels of increased water stress. Over SEA we estimate an additional 200 million people under threat of facing at least heavily water-stressed conditions from climate change and socioeconomic growth, but the mitigation scenario reduces the additional population under threat by 30% (60 million). Nevertheless, there remains a 1-in-2 chance that 100 million people across SEA experience a 50% increase in water stress and a 1-in-10 chance that they experience a doubling of water stress. Therefore, widespread adaptive measures may be required over the coming decades to meet these unavoidable risks in water shortfalls.
</description>
<pubDate>Fri, 01 Jun 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158522</guid>
<dc:date>2018-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>TeleAbsence: A Vision of Past and Afterlife Telepresence</title>
<link>https://hdl.handle.net/1721.1/158451</link>
<description>TeleAbsence: A Vision of Past and Afterlife Telepresence
Ishii, Hiroshi; Pillis, Daniel; Pataranutaporn, Pat; Xiao, Xiao; Noh, Hayoun; Li, Lucy; Algargoosh, Alaa; Labrune, Jean-Baptiste
This paper presents our vision of TeleAbsence, extending the concept of telepresence to the past and the afterlife to address the vast emotional and temporal distance caused by the memory of loved ones who drifted apart and faded away. Instead of explicit and literal representations of loved ones, TeleAbsence describes poetic encounters with digital and physical traces left by the absence of others. TeleAbsence fosters illusory communications to conjure the feeling of being there with those no longer with us without using synthetic or generative representations and utterances. Our vision is deeply inspired by the Portuguese concept “Saudade”—the “desire for the beloved thing, people, place, and moment, made painful by its absence.” We present our vision through five design principles: presence of absence, illusory communication, the materiality of memory, traces of reflection, and remote time, grounded in historical and cultural contexts. We present exploratory narratives to illustrate these principles and the concept of ambient co-presence using poetry, phone, piano, and pen as mediums. We discuss challenges and opportunities for future work, including representational strategies to depict lost loved ones, ethical issues, and the possible extension of TeleAbsence to historical public figures.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158451</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Practical DB-OS Co-Design with Privileged Kernel Bypass</title>
<link>https://hdl.handle.net/1721.1/158440</link>
<description>Practical DB-OS Co-Design with Privileged Kernel Bypass
Zhou, Xinjing; Leis, Viktor; Hu, Jinming; Yu, Xiangyao; Stonebraker, Michael
This paper revisits the longstanding challenge of coordinating database systems with general-purpose OS interfaces, such as POSIX, which often lack tailored support for DB requirements. Existing approaches to this DB-OS co-design struggle with limited design space, security risks, and compatibility issues. To overcome these hurdles, we propose a new co-design approach leveraging virtualization to elevate the privilege level of DB processes. Our method enables database systems to fully exploit hardware capabilities via virtualization, while minimizing the need for extensive modifications to the host OS kernel, thereby maintaining compatibility. We demonstrate the effectiveness of our approach through two novel virtual memory mechanisms tailored for database workloads: (1) an efficient snapshotting mechanism that captures memory snapshots at millisecond intervals for in-memory databases and HTAP workloads, and (2) a streamlined in-kernel buffer pool design. We introduce Libdbos, a lightweight guest kernel implementing these mechanisms. Our evaluations highlight significant improvements in latency and efficiency compared to existing snapshotting and buffer pool designs, underscoring the potential of the approach.
</description>
<pubDate>Tue, 11 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158440</guid>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>SySTeC: A Symmetric Sparse Tensor Compiler</title>
<link>https://hdl.handle.net/1721.1/158438</link>
<description>SySTeC: A Symmetric Sparse Tensor Compiler
Patel, Radha; Ahrens, Willow; Amarasinghe, Saman
Symmetric and sparse tensors arise naturally in many domains including linear algebra, statistics, physics, chemistry, and graph theory. Symmetric tensors are equal to their transposes, so in the n-dimensional case we can save up to a factor of n! by avoiding redundant operations. Sparse tensors, on the other hand, are mostly zero, and we can save asymptotically by processing only nonzeros. Unfortunately, specializing for both symmetry and sparsity at the same time is uniquely challenging. Optimizing for symmetry requires consideration of n! transpositions of a triangular kernel, which can be complex and error prone. Considering multiple transposed iteration orders and triangular loop bounds also complicates iteration through intricate sparse tensor formats. Additionally, since each combination of symmetry and sparse tensor formats requires a specialized implementation, this leads to a combinatorial number of cases. A compiler is needed, but existing compilers cannot take advantage of both symmetry and sparsity within the same kernel. In this paper, we describe the first compiler which can automatically generate symmetry-aware code for sparse or structured tensor kernels. We introduce a taxonomy for symmetry in tensor kernels, and show how to target each kind of symmetry. Our implementation demonstrates significant speedups ranging from 1.36x for SSYMV to 30.4x for a 5-dimensional MTTKRP over the non-symmetric state of the art.
CGO ’25, March 01–05, 2025, Las Vegas, NV, USA
</description>
<pubDate>Sat, 01 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158438</guid>
<dc:date>2025-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>FHE-Rollups: Scaling Confidential Smart Contracts on Ethereum and Beyond</title>
<link>https://hdl.handle.net/1721.1/158437</link>
<description>FHE-Rollups: Scaling Confidential Smart Contracts on Ethereum and Beyond
Zyskind, Guy; Erez, Yonatan; Langer, Tom; Grossman, Itzik; Bondarevsky, Lior
Blockchains ensure that all transactions, including those that execute deterministic programs known as smart contracts, are processed correctly and without interruption. However, blockchains inherently provide no confidentiality: all transaction data, including inputs sent to smart contracts, are public. This has led to a rise of confidential smart contract blockchains. These blockchains utilize privacy-preserving techniques to add privacy to smart contracts, but they usually rely on Trusted Execution Environments (TEEs) (e.g., [14, 24]) that are susceptible to side-channel attacks and other security concerns ([7, 13, 33], to name a few).
More recently, several works have focused on achieving confidentiality using Fully Homomorphic Encryption (FHE) (e.g., [1, 30]). While this approach is promising, these works limit scalability because they require all nodes in the network to execute FHE computations and reach consensus over the encrypted state, which is prohibitive.
Instead, in this work, inspired by the recent move toward layer-2 solutions, we present the first rollup-based FHE architecture. We argue that while rollups are a needed solution for plaintext computation, in the context of FHE, where the computational overhead is orders of magnitude higher, they are a necessity.
In our design, we take an optimistic rollup approach, allowing us to avoid the orders-of-magnitude penalty incurred by state-of-the-art verifiable FHE techniques [34]. In fact, our framework can be seen as a cryptoeconomic solution to the same problem of verifiability in FHE.
We implement a proof of concept of our solution and, in the process, show how we can build FHE rollups without making any changes to existing layer-1s like Ethereum, even if they do not support FHE operations inherently. We further implement three smart contracts that are only possible if data remains confidential, and show that their performance is practical.
BSCI '24, July 2, 2024, Singapore, Singapore
</description>
<pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158437</guid>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</item>
<item>
<title>Computer Science Behind Bars: Lessons Learned from Teaching Incarcerated Students in Prisons and Jails</title>
<link>https://hdl.handle.net/1721.1/158331</link>
<description>Computer Science Behind Bars: Lessons Learned from Teaching Incarcerated Students in Prisons and Jails
Fishberg, Andrew; Gaetz, Marisa; Nisser, Martin; Cafferty, Carole; Perlman, Lee; Soicher, Raechel; Long, Joshua
Educational programs for incarcerated individuals, often called "behind bars" initiatives, have been shown to improve participants' social and economic outcomes upon release. Since its founding in 2018, MIT's Education Justice Institute (TEJI) has offered accredited classes for incarcerated students, with an increasing focus on computer education. Our courses have been delivered both in person and remotely (e.g., via Zoom). In this poster, we share insights into the challenges present in the incarcerated education environment, and highlight how remote learning offers unique advantages to incarcerated students. We also present preliminary findings from two years of data collected across four recurring computer science courses. This poster aims to foster a dialogue with the broader computer science education community, focusing on: (i) qualitative insights gained from extensive interactions with incarcerated education systems, (ii) preliminary empirical results obtained through IRB-approved surveys, (iii) common challenges faced during data collection, and (iv) an opportunity to seek feedback and pose questions to computer science education experts.
SIGCSE TS 2025, February 26-March 1, 2025, Pittsburgh, PA, USA
</description>
<pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158331</guid>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</item>
<item>
<title>"Why is my code slow?" Efficiency Bugs in Student Code</title>
<link>https://hdl.handle.net/1721.1/158329</link>
<description>"Why is my code slow?" Efficiency Bugs in Student Code
Dargan, Hope; Gilbert-Diamond, Adam; Hartz, Adam; Miller, Robert
While prior research has categorized common errors and code quality issues of student programmers, little attention has been paid to researching student efficiency bugs. Qualitative content analysis of 250 slow student submissions across five CS2 assignments yielded over 750 efficiency bugs. Extracting general themes resulted in an efficiency bug taxonomy with three main categories: superfluous computation, suboptimal data structure design, and suboptimal algorithm design, with 12 subcategories. Analysis of specific bug frequencies across the assignments provided insights that may inform content design for programming courses.
SIGCSE TS 2025, February 26-March 1, 2025, Pittsburgh, PA, USA
</description>
<pubDate>Wed, 12 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158329</guid>
<dc:date>2025-02-12T00:00:00Z</dc:date>
</item>
<item>
<title>Picto: Crafting Remote Tangible Gestures via Recordable, Replayable, and Shareable Motions</title>
<link>https://hdl.handle.net/1721.1/158328</link>
<description>Picto: Crafting Remote Tangible Gestures via Recordable, Replayable, and Shareable Motions
Choi, Kyung Yun; Jung, Taehee; Harasha, Noble; Ishii, Hiroshi
We introduce Picto, a paired tangible interface that enables intimate dyads to co-create shared kinetic messages, fostering playful remote communication beyond temporal and physical constraints. Picto’s two modular units—a knob for rotational motion and a slider for linear motion—allow users to craft personalized motions and shapes symbolizing their significant other. Presence can be conveyed in real-time or asynchronously through record, replay, and share features. Picto empowers users to express abstract ideas through iconic gestures and non-verbal cues. Using a bistable composite tape-spring structure, we developed a novel mechanism for programming dynamic shape variations and motions. Picto’s control system records, stores, and shares motion-based interactions. A user study with intimate dyads evaluates Picto’s usability and its potential as a remote story-sharing platform and ambient presence media enhanced by metaphorical and beat gestures. The results highlight its potential to enrich and sustain intimate relationships, supporting social presence across distances.
TEI ’25, March 04–07, 2025, Bordeaux / Talence, France
</description>
<pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158328</guid>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</item>
<item>
<title>Lower bounds for learning quantum states with single-copy measurements</title>
<link>https://hdl.handle.net/1721.1/158327</link>
<description>Lower bounds for learning quantum states with single-copy measurements
Nayak, Ashwin; Lowe, Angus
We study the problems of quantum tomography and shadow tomography using measurements performed on individual, identical copies of an unknown d-dimensional state. We first revisit known lower bounds [23] on quantum tomography with accuracy ϵ in trace distance, when the measurement choices are independent of previously observed outcomes, i.e., they are nonadaptive. We give a succinct proof of these results through the χ²-divergence between suitable distributions. Unlike prior work, we do not require that the measurements be given by rank-one operators. This leads to stronger lower bounds when the learner uses measurements with a constant number of outcomes (e.g., two-outcome measurements). In particular, this rigorously establishes the optimality of the folklore “Pauli tomography” algorithm in terms of its sample complexity. We also derive novel bounds of Ω(r²d/ϵ²) and Ω(r²d²/ϵ²) for learning rank-r states using arbitrary and constant-outcome measurements, respectively, in the nonadaptive case.&#13;
In addition to the sample complexity, a resource of practical significance for learning quantum states is the number of unique measurement settings required (i.e., the number of different measurements used by an algorithm, each possibly with an arbitrary number of outcomes). Motivated by this consideration, we employ concentration of measure of the χ²-divergence of suitable distributions to extend our lower bounds to the case where the learner performs possibly adaptive measurements from a fixed set of exp(O(d)) possible measurements. This implies in particular that adaptivity gives us no advantage using single-copy measurements that are efficiently implementable. We also obtain a similar bound in the case where the goal is to predict the expectation values of a given sequence of observables, a task known as shadow tomography. Finally, in the case of adaptive, single-copy measurements implementable with polynomial-size circuits, we prove that a straightforward strategy based on computing sample means of the given observables is optimal.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158327</guid>
</item>
<item>
<title>Turing.jl: a general-purpose probabilistic programming language</title>
<link>https://hdl.handle.net/1721.1/158326</link>
<description>Turing.jl: a general-purpose probabilistic programming language
Fjelde, Tor Erlend; Xu, Kai; Widmann, David; Tarek, Mohamed; Pfiffer, Cameron; Trapp, Martin; Axen, Seth; Sun, Xianda; Hauru, Markus; Yong, Penelope; Tebbutt, Will; Ghahramani, Zoubin; Ge, Hong
Probabilistic programming languages (PPLs) are becoming increasingly important in many scientific disciplines, such as economics, epidemiology, and biology, for extracting meaning from data sources while accounting for one's uncertainty. The key idea of probabilistic programming is to decouple inference and model specification, allowing the practitioner to approach the task at hand using Bayesian inference without requiring extensive knowledge of programming or computational statistics. At the same time, the complexity of the problem settings in which PPLs are employed is steadily increasing, both in terms of project size and model complexity, calling for more flexible and efficient systems. In this work, we describe Turing.jl, a general-purpose PPL designed to be flexible, efficient, and easy to use. Turing.jl is built on top of the Julia programming language, which is known for its high performance and ease of use. We describe the design of Turing.jl, contextualizing it within different types of users and use cases, its key features, and how it can be used to solve a wide range of problems. We also provide a brief overview of the ecosystem around Turing.jl, including the different libraries and tools that can be used in conjunction with it. Finally, we provide a few examples of how Turing.jl can be used in practice.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158326</guid>
</item>
<item>
<title>Aggregating Funnels for Faster Fetch&amp;Add and Queues</title>
<link>https://hdl.handle.net/1721.1/158325</link>
<description>Aggregating Funnels for Faster Fetch&amp;Add and Queues
Roh, Younghun; Wei, Yuanhao; Ruppert, Eric; Fatourou, Panagiota; Jayanti, Siddhartha; Shun, Julian
Many concurrent algorithms require processes to perform fetch-and-add operations on a single memory location, which can be a hot spot of contention. We present a novel algorithm called Aggregating Funnels that reduces this contention by spreading the fetch-and-add operations across multiple memory locations. It aggregates fetch-and-add operations into batches so that the batch can be performed by a single hardware fetch-and-add instruction on one location and all operations in the batch can efficiently compute their results by performing a fetch-and-add instruction on a different location. We show experimentally that this approach achieves higher throughput than previous combining techniques, such as Combining Funnels, and is substantially more scalable than applying hardware fetch-and-add instructions on a single memory location. We show that replacing the fetch-and-add instructions in the fastest state-of-the-art concurrent queue by our Aggregating Funnels eliminates a bottleneck and greatly improves the queue's overall throughput.
PPoPP ’25, Las Vegas, NV, USA
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158325</guid>
</item>
<item>
<title>Electrochemical Characterization of Biomolecular Electron Transfer at Conductive Polymer Interfaces</title>
<link>https://hdl.handle.net/1721.1/158306</link>
<description>Electrochemical Characterization of Biomolecular Electron Transfer at Conductive Polymer Interfaces
Agee, Alec; Gill, Thomas Mark; Pace, Gordon; Segalman, Rachel; Furst, Ariel
Bio-electrochemical systems (BESs) are promising for renewable energy generation but remain hindered by inefficient electron transfer at electrode surfaces. As the toolbox of bio-anode materials grows, rigorous electrochemical characterization of emerging materials is needed. Here, we holistically characterize the electrochemical interaction of flavin mononucleotide (FMN), an electron shuttle in biological systems and a cofactor for oxidoreductase enzymes, with the bio-inspired mixed conducting polymer poly{3-[6'-(N-methylimidazolium)hexyl]thiophene} (P3HT-Im+). The behavior of this polymer is compared to the equivalent polymer without the histidine-like imidazolium. We find improved conductivity and charge storage in imidazolium-containing polymers beyond what is explained by differences in the electroactive area. The P3HT-Im+ further shows internal charge storage but with negligible faradaic contribution, indicating that charge storage capacity may translate to improved biocatalysis in non-intuitive ways. Finally, one-electron transfer is observed between FMN and glassy carbon, while a bio-similar two-electron transfer is observed for the P3HT-Im+. To our knowledge, this is the first example of a concerted two-electron transfer between FMN and an electrode interface, which we attribute to the bio-inspired, histidine-like imidazolium functional groups in the polymer. These studies demonstrate the importance of bio-relevant materials characterization when such materials are deployed in BESs.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158306</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>An Improved Spectrophotometric Method for Toluene‐4‐Monooxygenase Activity</title>
<link>https://hdl.handle.net/1721.1/158305</link>
<description>An Improved Spectrophotometric Method for Toluene‐4‐Monooxygenase Activity
Baskaran, Barathkumar; Gill, Thomas M; Furst, Ariel L
Monooxygenases, an important class of enzymes, have been the subject of enzyme engineering due to their high activity and versatile substrate scope. Reactions performed by these biocatalysts have long been monitored by a colorimetric method involving the coupling of a dye precursor to naphthalene hydroxylation products generated by the enzyme. Despite the popularity of this method, we found the dye product to be unstable, preventing quantitative readout. By incorporating an extraction step to solubilize the dye produced, we have improved this assay to the point where quantitation of enzyme activity is possible. Further, by incorporating spectral deconvolution, we have, for the first time, enabled independent quantification of the two possible regioisomeric products: 1-naphthol and 2-naphthol. Previously, such analysis was only possible with chromatographic separation, increasing the cost and complexity of analysis. The efficacy of our improved workflow was evaluated by monitoring the activity of a toluene-4-monooxygenase enzyme from Pseudomonas mendocina KR-1. Our colorimetric regioisomer quantification was found to be consistent with chromatographic analysis by HPLC. The development and validation of a quantitative colorimetric assay for monooxygenase activity that enables regioisomeric distinction and quantification represents a significant advance in analytical methods to monitor enzyme activity. By maintaining facile, low-cost, high-throughput readout while incorporating quantification, this assay represents an important alternative to more expensive chromatographic quantification techniques.
</description>
<pubDate>Mon, 03 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158305</guid>
<dc:date>2023-04-03T00:00:00Z</dc:date>
</item>
<item>
<title>Self-Assembled Nanocoatings Protect Microbial Fertilizers for Climate-Resilient Agriculture</title>
<link>https://hdl.handle.net/1721.1/158304</link>
<description>Self-Assembled Nanocoatings Protect Microbial Fertilizers for Climate-Resilient Agriculture
Burke, Benjamin; Fan, Gang; Wasuwanich, Pris; Moore, Evan B; Furst, Ariel L
Chemical fertilizers have been crucial for sustaining the current global population by supplementing overused farmland to support consistent food production, but their use is unsustainable. Pseudomonas chlororaphis is a nitrogen-fixing bacterium that could be used as a fertilizer replacement, but this microbe is delicate. It is sensitive to stressors, such as freeze-drying and high temperatures. Here, we demonstrate protection of P. chlororaphis from freeze-drying, high temperatures (50 °C), and high humidity using self-assembling metal-phenolic network (MPN) coatings. The composition of the MPN is found to significantly impact its protective efficacy, and with optimized compositions, no viability loss is observed for MPN-coated microbes under conditions where uncoated cells do not survive. Further, we demonstrate that MPN-coated microbes improve germination of seeds by 150% as compared to those treated with fresh P. chlororaphis. Taken together, these results demonstrate the protective capabilities of MPNs against environmental stressors and represent a critical step towards enabling the production and storage of delicate microbes under nonideal conditions.
</description>
<pubDate>Mon, 27 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158304</guid>
<dc:date>2023-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Highly Efficient Carbon Dioxide Electroreduction via DNA-Directed Catalyst Immobilization</title>
<link>https://hdl.handle.net/1721.1/158303</link>
<description>Highly Efficient Carbon Dioxide Electroreduction via DNA-Directed Catalyst Immobilization
Fan, Gang; Corbin, Nathan; Chung, Minju; Gill, Thomas M; Moore, Evan B; Karbelkar, Amruta A; Furst, Ariel L
Electrochemical reduction of carbon dioxide (CO2) is a promising route to up-convert this industrial byproduct. However, to perform this reaction with a small-molecule catalyst, the catalyst must be proximal to an electrode surface. Efforts to immobilize molecular catalysts on electrodes have been stymied by the need to optimize the immobilization chemistries on a case-by-case basis. Taking inspiration from nature, we applied DNA as a molecular-scale "Velcro" to investigate the tethering of three porphyrin-based catalysts to electrodes. This tethering strategy improved both the stability of the catalysts and their Faradaic efficiencies (FEs). DNA-catalyst conjugates were immobilized on screen-printed carbon and carbon paper electrodes via DNA hybridization with nearly 100% efficiency. Following immobilization, a higher catalyst stability at relevant potentials is observed. Additionally, lower overpotentials are required for the generation of carbon monoxide (CO). Finally, high FE for CO generation was observed with the DNA-immobilized catalysts as compared to the unmodified small-molecule systems, as high as 79.1% FE for CO at -0.95 V vs SHE using a DNA-tethered catalyst. This work demonstrates the potential of DNA "Velcro" as a powerful strategy for catalyst immobilization. Here, we demonstrated improved catalytic characteristics of molecular catalysts for CO2 valorization, but this strategy is anticipated to be generalizable to any reaction that proceeds in aqueous solutions.
</description>
<pubDate>Mon, 22 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158303</guid>
<dc:date>2024-04-22T00:00:00Z</dc:date>
</item>
<item>
<title>Secondary Structure in Enzyme‐Inspired Polymer Catalysts Impacts Water Oxidation Efficiency</title>
<link>https://hdl.handle.net/1721.1/158302</link>
<description>Secondary Structure in Enzyme‐Inspired Polymer Catalysts Impacts Water Oxidation Efficiency
Sedenho, Graziela C; Nascimento, Steffane Q; Zamani, Marjon; Crespilho, Frank N; Furst, Ariel L
Protein structure plays an essential role in protein stability, functionality, and catalytic activity. In this work, the interplay between β-sheet structure and its catalytic implications for the design of enzyme-inspired materials is investigated. Here, inspiration is drawn from the active sites and β-sheet-rich structure of the highly efficient multicopper oxidase (MCO) to engineer a bio-inspired electrocatalyst for water oxidation utilizing the abundant metal copper. Copper ions are coordinated to poly-histidine (polyCuHis), as they are in MCO active sites. The resultant polyCuHis material effectively promotes water oxidation with low overpotentials (0.15 V) in alkaline systems. This activity is due to the 3D structure of the poly-histidine backbone. By increasing the prevalence of β-sheet structure and decreasing the random-coil character of the polyCuHis secondary structure, the electrocatalytic activity of this material is modulated, shifting it toward water oxidation. These results highlight the crucial role of the local environment at catalytic sites for efficient, energy-relevant transformations. Moreover, this work highlights the importance of conformational structure in the design of scaffolds for high-performance electrocatalysts.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158302</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Systematic Simulations of Structural Stability, Phonon Dispersions, and Thermal Expansion in Zinc-Blende ZnO</title>
<link>https://hdl.handle.net/1721.1/158301</link>
<description>Systematic Simulations of Structural Stability, Phonon Dispersions, and Thermal Expansion in Zinc-Blende ZnO
Talwar, Devki N.; Becla, Piotr
Zinc oxide (ZnO) has recently gained considerable attention due to its exceptional properties, including high electron mobility, good thermal conductivity, high breakdown voltage, and a relatively large exciton-binding energy. These characteristics have helped engineers develop low-dimensional heterostructure (LDH)-based advanced flexible/transparent nanoelectronics, which were then integrated into thermal management systems. Coefficients of thermal expansion α(T), phonon dispersions ωj(q→), and Grüneisen parameters γj(q→) can play important roles in evaluating the suitability of materials in such devices. By adopting a realistic rigid-ion model in the quasi-harmonic approximation, this work reports the results of a methodical study of the structural, lattice-dynamical, and thermodynamic behavior of zinc-blende (zb) ZnO. Systematic calculations of ωj(q→), γj(q→), and α(T) indicate negative thermal expansion (NTE) at low T. Soft transverse-acoustic shear-mode Grüneisen parameters γTA at critical points make the major contributions to the NTE. Our results for ωj(q→) at ambient pressure compare reasonably well with Raman scattering spectroscopy measurements and first-principles calculations. By adjusting the layers of materials with positive and negative thermal expansion, it is possible to create LDHs with near-zero α(T). Such a nanostructure would experience minimal dimensional change with T fluctuations, making it ideal for devices where precise dimensional stability is crucial.
</description>
<pubDate>Mon, 17 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158301</guid>
<dc:date>2025-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Base-Load Nuclear Reactors for Fully Dispatchable Electricity: Nuclear Air-Brayton Combined Cycles, Firebrick Heat Storage, Hydrogen Storage, and Hydrocarbon Biofuels</title>
<link>https://hdl.handle.net/1721.1/158300</link>
<description>Base-Load Nuclear Reactors for Fully Dispatchable Electricity: Nuclear Air-Brayton Combined Cycles, Firebrick Heat Storage, Hydrogen Storage, and Hydrocarbon Biofuels
Forsberg, Charles
Three partly coupled integrated nuclear energy systems are described. These enable base-load nuclear reactors to provide fully dispatchable electricity without greenhouse-gas emissions, thus replacing gas turbines burning natural gas and batteries storing electricity. These hybrid systems link the industrial sector to the electricity sector. Firstly, electricity-to-high-temperature (1800 °C) gigawatt-hour firebrick heat storage converts low-price electricity to high-temperature stored heat to provide dispatchable heat for industry and power generation. Secondly, Nuclear Air-Brayton Combined Cycles (NACC) with thermodynamic topping cycles use high-temperature stored heat or combustible fuel to provide dispatchable electricity. Peak power output can be two to five times the base-load electricity production. The heat-to-electricity efficiency of the thermodynamic topping cycles exceeds 70%. Thirdly, nuclear hydrogen production for industrial markets enables the production of dispatchable electricity, where hydrogen is used for energy storage but not to produce heat and electricity. Base-load nuclear reactors send electricity to the grid and/or electrolyzers for hydrogen production depending upon electricity prices. Low-cost hydrogen storage makes it possible to meet steady-state industrial hydrogen demands, even though hydrogen and grid electricity production varies. Hydrogen production for industrial uses (ammonia fertilizer, direct reduction of iron ore to iron replacing coke, cellulosic liquid hydrocarbon biofuels replacing crude oil) may exceed 20% of total energy demand and may be a massive source of dispatchable electricity. The biofuels provide storable energy when heat storage is depleted.
</description>
<pubDate>Mon, 10 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158300</guid>
<dc:date>2025-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Predicting Mortality in Subarachnoid Hemorrhage Patients Using Big Data and Machine Learning: A Nationwide Study in Türkiye</title>
<link>https://hdl.handle.net/1721.1/158299</link>
<description>Predicting Mortality in Subarachnoid Hemorrhage Patients Using Big Data and Machine Learning: A Nationwide Study in Türkiye
Khaniyev, Taghi; Cekic, Efecan; Gecici, Neslihan Nisa; Can, Sinem; Ata, Naim; Ulgu, Mustafa Mahir; Birinci, Suayip; Isikay, Ahmet Ilkay; Bakir, Abdurrahman; Arat, Anil; Hanalioglu, Sahin
Background/Objective: Subarachnoid hemorrhage (SAH) is associated with high morbidity and mortality rates, necessitating prognostic algorithms to guide decisions. Our study evaluates the use of machine learning (ML) models for predicting 1-month and 1-year mortality among SAH patients using a national electronic health records (EHR) system. Methods: A retrospective cohort of 29,274 SAH patients, identified through the national EHR system from January 2017 to December 2022, was analyzed, with mortality data obtained from the central civil registration system in Türkiye. Variables (n = 102) included pre-admission (n = 65) and post-admission (n = 37) data, such as patient demographics, clinical presentation, comorbidities, laboratory results, and complications. We employed logistic regression (LR), decision trees (DTs), random forests (RFs), and artificial neural networks (ANNs). Model performance was evaluated using area under the curve (AUC), average precision, and accuracy. Feature significance analysis was conducted using LR. Results: The average age was 56.23 ± 16.45 years (47.8% female). The overall mortality rate was 22.8% at 1 month and 33.3% at 1 year. One-month mortality increased from 20.9% to 24.57% (p &lt; 0.001), and 1-year mortality rose from 30.85% to 35.55% (p &lt; 0.001) in the post-COVID period compared to the pre-COVID period. For 1-month mortality prediction, the ANN, LR, RF, and DT models achieved AUCs of 0.946, 0.942, 0.931, and 0.916, with accuracies of 0.905, 0.901, 0.893, and 0.885, respectively. For 1-year mortality, the AUCs were 0.941, 0.927, 0.926, and 0.907, with accuracies of 0.884, 0.875, 0.861, and 0.851, respectively. Key predictors of mortality included age, cardiopulmonary arrest, abnormal laboratory results (such as abnormal glucose and lactate levels) at presentation, and pre-existing comorbidities. Incorporating post-admission features (n = 37) alongside pre-admission features (n = 65) improved model performance for both 1-month and 1-year mortality predictions, with average AUC improvements of 0.093 ± 0.011 and 0.089 ± 0.012, respectively. Conclusions: Our study demonstrates the effectiveness of ML models in predicting mortality in SAH patients using big data. The LR model's robustness, interpretability, and feature significance analysis validate its importance. Including post-admission data significantly improved all models' performances. Our results demonstrate the utility of big data analytics in population-level health outcome studies.
</description>
<pubDate>Mon, 10 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158299</guid>
<dc:date>2025-02-10T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering of Genetically Encoded Bright Near-Infrared Fluorescent Voltage Indicator</title>
<link>https://hdl.handle.net/1721.1/158298</link>
<description>Engineering of Genetically Encoded Bright Near-Infrared Fluorescent Voltage Indicator
Xiao, Xian; Yang, Aimei; Zhang, Hanbin; Park, Demian; Wang, Yangdong; Szabo, Balint; Boyden, Edward S.; Piatkevich, Kiryl D.
Genetically encoded voltage indicators (GEVIs) allow for the cell-type-specific real-time imaging of neuronal membrane potential dynamics, which is essential to understanding neuronal information processing at both cellular and circuit levels. Among GEVIs, near-infrared-shifted GEVIs offer faster kinetics, better tissue penetration, and compatibility with optogenetic tools, enabling all-optical electrophysiology in complex biological contexts. In our previous work, we employed the directed molecular evolution of the microbial rhodopsin Archaerhodopsin-3 (Arch-3) in mammalian cells to develop a voltage sensor called Archon1. Archon1 demonstrated excellent membrane localization, signal-to-noise ratio (SNR), sensitivity, kinetics, and photostability, and full compatibility with optogenetic tools. However, Archon1 suffers from low brightness and requires high illumination intensities, which leads to tissue heating and phototoxicity during prolonged imaging. In this study, we aimed to improve the brightness of this voltage sensor. We performed random mutagenesis on a bright Archon derivative and identified a novel variant, monArch, which exhibits satisfactory voltage sensitivity (4~5% ΔF/FAP) and a 9-fold increase in basal brightness compared with Archon1. However, it is hindered by suboptimal membrane localization and compromised voltage sensitivity. These challenges underscore the need for continued optimization to achieve an optimal balance of brightness, stability, and functionality in rhodopsin-based voltage sensors.
</description>
<pubDate>Sat, 08 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158298</guid>
<dc:date>2025-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Data Are Power: Addressing the Power Imbalance Around Community Data with the Open-Access Data4HumanRights Curriculum</title>
<link>https://hdl.handle.net/1721.1/158297</link>
<description>Data Are Power: Addressing the Power Imbalance Around Community Data with the Open-Access Data4HumanRights Curriculum
Kuffer, Monika; Thomson, Dana R.; Wakonyo, Dianne; Kimani, Nicera Wanjiru; Kohli-Poll Jonker, Divyani; Okoko, Enyo; Toheeb, Rasak; Akinmuyiwa, Bisola; Zanna, Mohammed; Imole, Dezyno; Maki, Andrew
Data4HumanRights’ training materials have been developed as open-source resources tailored to limited-resource settings, where community data collectors often live and work. Access to training on data collection, analysis, and visualisation to support the advocacy of vulnerable groups is essential, particularly in the context of increasing human rights challenges such as land rights, adequate housing, conflicts, and climate justice. This paper provides an overview of how the training materials were co-developed with community data collectors in Nigeria and Kenya, offering insights into the fundamental principles (i.e., inclusiveness, adaptivity, suitability for limited-resource settings, and gender- and incentive-sensitivity) and the structure of the open-access training materials. The development process resulted in 28 modules, each designed to be delivered in a face-to-face format in less than one day by a local trainer. To maximize adaptivity, the training modules can be mixed and matched (e.g., as individual modules or a learning path of several modules around a specific training need). The individual modules cover a range of methods and tools that are useful to human rights work and community advocacy, e.g., documenting evictions, performing rapid needs assessments after acute crises, community profiling, and monitoring community development indicators. The modules contain instructions for the training facilitator(s) and all necessary materials. To ensure inclusivity, the training covers both basic and advanced topics, with most modules designed to address basic needs that can be followed using a mobile phone, thereby avoiding the need for computers or printed handouts. The training results in Nigeria and Kenya showcase applications, including mapping waste problems and addressing forced evictions.
Trained community groups produced maps of waste piles to prioritize community actions, such as finding space for urban agriculture, and conducted rapid needs assessments during a massive eviction. This approach helps reduce power imbalances and empowers community groups to effectively manage and utilise their own data.
</description>
<pubDate>Mon, 03 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158297</guid>
<dc:date>2025-02-03T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Holiday Effect on Elevated Traffic-Related Air Pollution with Hyperlocal Measurements in Chengdu, China</title>
<link>https://hdl.handle.net/1721.1/158296</link>
<description>Exploring the Holiday Effect on Elevated Traffic-Related Air Pollution with Hyperlocal Measurements in Chengdu, China
Xiang, Sheng; Yu, Jiaojiao; Yu, Yu Ting; Zhao, Pengbo; Zheng, Tie; Yue, Jingsong; Yang, Yuanyuan; Liu, Haobing
Traffic-related air pollutants (TRAPs) pose significant health risks in megacities, yet fixed monitoring sites often fail to capture their complexity. To characterize the TRAP concentrations which fixed sites cannot address, we employed a mobile platform to effectively capture real-time hyperlocal-scale TRAP variations in Chengdu, China. A 17-day sampling campaign covering the National Holiday of China collected ~1.2 × 10⁵ paired 1 Hz data points. We measured particle number concentration (PNC), black carbon (BC), and nitrogen oxides (NOx) across urban and rural freeway environments to assess the impact of reduced heavy-duty diesel vehicle (HDDV) traffic during the holiday (i.e., the holiday effect). No clear impact of wind direction on TRAP concentrations was found in this study. However, substantial differences (roughly twofold) were observed when comparing non-holiday to holiday campaigns. Spearman correlations (0.21–0.56) between TRAPs persistently exceeded Pearson correlations (0.14–0.41), indicating non-linear relationships and suggesting the necessity for data transformations (e.g., logarithms) in TRAP analysis. Comparison of the background-subtracted TRAP concentrations between non-holiday and holiday periods revealed approximately a 50% reduction in TRAPs across microenvironments. Among the TRAPs, NOx emerged as a reliable indicator of HDDV emissions. The study provides insights into vehicle fleet composition impacts, paving the way for enhanced exposure assessment strategies.
</description>
<pubDate>Sun, 02 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158296</guid>
<dc:date>2025-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Do What You Say—Computing Personal Values Associated with Professions Based on the Words They Use</title>
<link>https://hdl.handle.net/1721.1/158295</link>
<description>Do What You Say—Computing Personal Values Associated with Professions Based on the Words They Use
Jha, Aditya; Gloor, Peter A.
Members of a profession frequently show similar personality characteristics. In this research, we leverage recent advances in NLP to compute personal values using a moral values framework, distinguishing between four different personas that assist in categorizing different professions by personal values: “fatherlanders”—valuing tradition and authority, “nerds”—valuing scientific achievements, “spiritualists”—valuing compassion and non-monetary achievements, and “treehuggers”—valuing sustainability and the environment. We collected 200 YouTube videos and podcasts for each professional category of lawyers, academics, athletes, engineers, creatives, managers, and accountants, converting their audio to text. We also categorize these professions by team player personas into “bees”—collaborative creative team players, “ants”—competitive hard workers, and “leeches”—selfish egoists using pre-trained models. We find distinctive personal value profiles for each of our seven professions computed from the words that members of each profession use.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158295</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Review of Pnictogenides for Next-Generation Anode Materials for Sodium-Ion Batteries</title>
<link>https://hdl.handle.net/1721.1/158294</link>
<description>A Review of Pnictogenides for Next-Generation Anode Materials for Sodium-Ion Batteries
Ha, Sion; Kim, Junhee; Kim, Dong Won; Suh, Jun Min; Kim, Kyeong-Ho
With the growing market of secondary batteries for electric vehicles (EVs) and grid-scale energy storage systems (ESS), driven by environmental challenges, the commercialization of sodium-ion batteries (SIBs) has emerged to address the high price of the lithium resources used in lithium-ion batteries (LIBs). However, achieving energy densities in SIBs competitive with those of LIBs remains challenging due to the absence in SIBs of high-capacity anodes such as the group-14 elements Si and Ge, which are widely used in LIBs. This review presents potential candidates among metal pnictogenides as promising anode materials for SIBs to overcome the energy density bottleneck. The sodium-ion storage mechanisms and electrochemical performance across various compositions, together with the intrinsic physical and chemical properties of pnictogenides, are summarized. By correlating these properties, strategic frameworks for designing advanced anode materials for next-generation SIBs are suggested. The trade-off in pnictogenides between high specific capacities and failure mechanisms driven by large volume expansion is considered to address the current issues. This review covers several emerging strategies focused on achieving both high reversible capacity and cycle stability.
</description>
<pubDate>Wed, 29 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158294</guid>
<dc:date>2025-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Electrocatalytic Properties of Electrochemically‐Polymerized Metal‐Phenolic Networks</title>
<link>https://hdl.handle.net/1721.1/158293</link>
<description>Electrocatalytic Properties of Electrochemically‐Polymerized Metal‐Phenolic Networks
Zaragoza, Nadia; Widder, Sage; Huynh, Heidi; Zamani, Marjon; Furst, Ariel L
Metal‐phenolic networks (MPNs) are a promising platform for developing new heterogeneous catalytic materials for water splitting technologies. This study systematically investigates the relationship between MPN composition and catalytic properties via electropolymerization of copper and cobalt combined with lignin, tannic acid, epigallocatechin‐3‐gallate (EGCG), and gallic acid polyphenols. We find that the choice of metal, size of polyphenol, and polymerization method have the greatest impact on the propensity of MPNs for catalyzing hydrogen evolution. For example, gallic acid‐based MPNs yield smoother surfaces with ~2 nm roughness, resulting in low surface area and lower average current densities compared to all other polyphenols tested. Cobalt‐based MPNs show higher current densities than copper‐based ones, yet higher onset potentials. The results provide a map of design choices that can be used to increase the catalytic performance of new materials used in water electrolysis.
</description>
<pubDate>Fri, 17 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158293</guid>
<dc:date>2024-05-17T00:00:00Z</dc:date>
</item>
<item>
<title>Complementary Cost‐Effective Electrochemical Platforms for Point‐Of‐Use Biosensing</title>
<link>https://hdl.handle.net/1721.1/158292</link>
<description>Complementary Cost‐Effective Electrochemical Platforms for Point‐Of‐Use Biosensing
Monaco, Mason; Zamani, Marjon; Sarram, Ava; Kuo, Chao‐Chi; Abeyrathne, Chathurika; Li, Miaosi; Furst, Ariel L
The COVID‐19 pandemic has illustrated the urgent need for rapid and affordable point‐of‐use diagnostics. Electrochemical biosensors are useful for such applications because they enable quantitative readout and show drastically improved sensitivity compared to prevalent lateral flow technologies. However, to‐date, the poor quality of commercially‐available, mass‐produced electrodes has prohibited the scaled production and commercialization of such biosensors beyond glucose sensing. Low‐cost gold leaf electrodes have previously been developed that can be fabricated with no specialized equipment at the point‐of‐use. These electrodes are more effective for biosensing than prevalent commercially‐available systems. Yet, their manual fabrication can be tedious and is not scalable in its current form. Here, performance of mass‐produced gold electrodes generated using roll‐to‐roll manufacturing is evaluated, offering the potential to scale production. Upon comparison of these electrodes with the gold leaf, it is found that these electrodes are high quality, equivalent to the gold leaf electrodes, and support biosensing applications through the detection of both DNase I and BtsI‐v2 activity with comparable performance. These results demonstrate the role of complementary technologies to achieve point‐of‐use sensing by enabling flexibility between mass‐produced manufacture and on‐site production.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158292</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Innovative Approach to Sustainable Fertilizer Production: Leveraging Electrically Assisted Conversion of Sewage Sludge for Nutrient Recovery</title>
<link>https://hdl.handle.net/1721.1/158291</link>
<description>Innovative Approach to Sustainable Fertilizer Production: Leveraging Electrically Assisted Conversion of Sewage Sludge for Nutrient Recovery
Botte, Gerardine G; Donneys-Victoria, Dayana; Alvarez-Pugliese, Christian E; Adjei, Jedidian; Sahin, Selin; Wilson, Nathan W; Millerick, Kayleigh; Hardberger, Amy; Furst, Ariel L; Hu, Nicole; Medford, Andrew J
Efforts addressing sludge management, food security, and resource recovery have led to novel approaches in these areas. Electrically assisted conversion of sludge stands out as a promising technology for sewage sludge valorization, producing nitrogen and phosphorus-based fertilizers. The adoption of this technology, which could lead to a fertilizer circular economy, holds the potential to catalyze a transformative change in wastewater treatment facilities toward process intensification, innovation, and sustainability. This paper provides insights into the economic aspects of the technology, policy considerations, and challenges involved in realizing the potential of electrified processes for sludge valorization. To demonstrate the impact of the technology, a case study of its implementation in the United States, based on the municipal wastewater treatment plant market, is discussed. It was found that electrically assisted sludge conversion could enable the recovery of nitrogen and phosphorus from waste, representing up to 9% of the nitrogen and 32% of the phosphorus consumption of the U.S. for fertilizer use. This technology also enables full electrification and modularization of the process, thereby presenting significant economic and environmental opportunities.
</description>
<pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158291</guid>
<dc:date>2024-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Half-Covered ‘Glitter-Cake’ AM@SE Composite: A Novel Electrode Design for High Energy Density All-Solid-State Batteries</title>
<link>https://hdl.handle.net/1721.1/158290</link>
<description>Half-Covered ‘Glitter-Cake’ AM@SE Composite: A Novel Electrode Design for High Energy Density All-Solid-State Batteries
Kim, Min J.; Park, Jin-Sung; Lee, Jin W.; Wang, Sung E.; Yoon, Dowoong; Lee, Jong D.; Kim, Jung H.; Song, Taeseup; Li, Ju; Kang, Yun C.; Jung, Dae S.
All-solid-state batteries (ASSBs) are pursued due to their potential for better safety and high energy density. However, the energy density of cathodes for ASSBs remains unsatisfactory due to the low utilization of active materials (AMs) at high loading. With a small amount of solid electrolyte (SE) powder in the cathode, poor electrochemical performance is often observed due to contact loss and non-homogeneous distribution of AMs and SEs, leading to high tortuosity and limited lithium and electron transport pathways. Here, we propose a novel cathode design that achieves a high volumetric energy density of 1258 Wh L⁻¹ at a high AM content of 85 wt% by synergizing the merits of AM@SE core–shell composite particles, whose conformally coated thin SE shells are prepared by a mechanofusion process, with small SE particles. The core–shell structure with an intimate and thin SE shell guarantees a high ionic conduction pathway without harming electronic conduction. In addition, the small SE particles act as a filler that reduces the packing porosity within the cathode composite electrode as well as between the cathode and the SE separator layer. The systematic demonstration of the optimization process may provide understanding and guidance on the design of electrodes for ASSBs with high electrode density, capacity, and ultimately energy density.
</description>
<pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158290</guid>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Lessons from COVID-19 patient visitation restrictions: six considerations to help develop ethical patient visitor policies</title>
<link>https://hdl.handle.net/1721.1/158289</link>
<description>Lessons from COVID-19 patient visitation restrictions: six considerations to help develop ethical patient visitor policies
Høeg, Tracy B.; Knudsen, Benjamin; Prasad, Vinay
Patient visitor restrictions were implemented in unprecedented ways during the COVID-19 pandemic and included bans on any visitors to dying patients and bans separating mothers from infants. These were implemented without high-quality evidence that they would be beneficial, and the harms to patients, families, and medical personnel were often immediately clear. Evidence has also accumulated showing that strict visitor restrictions were accompanied by long-term individual and societal consequences. We highlight numerous examples of restrictions that were enacted during the COVID-19 pandemic, including some that remain in place today. We outline six specific concerns about the nature and effects of the visitor restrictions seen during the COVID-19 pandemic. These considerations may help provide an ethical and science-based framework through which healthcare workers, families, and government entities can work towards safeguarding patient and family rights and well-being.
</description>
<pubDate>Sat, 08 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158289</guid>
<dc:date>2025-02-08T00:00:00Z</dc:date>
</item>
<item>
<title>Waves dangerous, domesticated, and diagnostic</title>
<link>https://hdl.handle.net/1721.1/158288</link>
<description>Waves dangerous, domesticated, and diagnostic
Helmreich, Stefan
This paper, based on a keynote presented at the MARE People and the Sea Conference 2023 as well as on material from A Book of Waves, examines how oceanographers and coastal engineers in the United States, the Netherlands, Australia, Japan, and Bangladesh study and represent waves. Waves, seen as both chaotic and ordered, ephemeral and enduring, offer insights into how science engages with environmental, national, and planetary futures. The discussion begins in the Netherlands, where centuries-old efforts to resist waves in a nation below sea level have evolved into “building-with-nature” strategies, reframing waves as collaborators in environmental resilience. Historical contexts, from wave folklore to physical scale models, underpin this shift in Dutch wave science. Next, I explore the wave simulation laboratory at Oregon State University, where researchers model tsunami risks from the Cascadia fault line. These experiments connect the Pacific Northwest with Japan’s tsunami research, highlighting challenges in adapting wave knowledge across regions. Finally, I turn to Bangladesh’s Ganges Delta, where Dutch hydrological expertise was applied in mid-20th-century development projects, often with uneven results. This case illustrates the complexities of transposing wave science into diverse settings. I conclude by reflecting on how these scientific practices contribute to understanding the Anthropocene, particularly from the perspective of the Global South’s oceans.
</description>
<pubDate>Mon, 20 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158288</guid>
<dc:date>2025-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>Preliminary results on the long-term operation of RPCs with eco-friendly gas mixtures under irradiation at the CERN Gamma Irradiation Facility</title>
<link>https://hdl.handle.net/1721.1/158287</link>
<description>Preliminary results on the long-term operation of RPCs with eco-friendly gas mixtures under irradiation at the CERN Gamma Irradiation Facility
Quaglia, L.; Ramos, D.; Abbrescia, M.; Aielli, G.; Aly, R.; Arena, M. C.; Barroso, M.; Benussi, L.; Bianco, S.; Boscherini, D.; Bordon, F.; Bruni, A.; Buontempo, S.; Busato, M.; Camarri, P.; Cardarelli, R.; Congedo, L.; De Jesus Damiao, D.; De Serio, M.; Di Ciacco, A.
Since 2019, a collaboration between researchers from various institutes and experiments (i.e., ATLAS, CMS, ALICE, LHCb/SHiP and the CERN EP-DT group) has been operating several RPCs with diverse electronics, gas gap thicknesses and detector layouts at the CERN Gamma Irradiation Facility (GIF++). The studies aim at assessing the performance of RPCs filled with new eco-friendly gas mixtures in avalanche mode and at evaluating possible aging effects after long periods of high background irradiation, such as the high-luminosity LHC phase. This challenging research is also part of a task of the European AidaInnova project. A promising eco-friendly gas identified for RPC operation is tetrafluoropropene (C₃H₂F₄, commercially known as HFO-1234ze), which has been studied at the CERN GIF++ in combination with different percentages of CO₂. Between the end of 2021 and 2022, several beam tests were carried out to establish the performance of RPCs operated with such mixtures before starting the irradiation campaign for the aging study. Results of these tests for different RPC layouts and different gas mixtures, under increasing background rates, are presented here, together with the preliminary outcome of the detector aging tests.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158287</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Data Artifacts Glossary: a community-based repository for bias on health datasets</title>
<link>https://hdl.handle.net/1721.1/158286</link>
<description>The Data Artifacts Glossary: a community-based repository for bias on health datasets
Gameiro, Rodrigo R.; Woite, Naira L.; Sauer, Christopher M.; Hao, Sicheng; Fernandes, Chrystinne O.; Premo, Anna E.; Teixeira, Alice R.; Resli, Isabelle; Wong, An-Kwok I.; Celi, Leo A.
Background The deployment of Artificial Intelligence (AI) in healthcare has the potential to transform patient care through improved diagnostics, personalized treatment plans, and more efficient resource management. However, the effectiveness and fairness of AI are critically dependent on the data it learns from. Biased datasets can lead to AI outputs that perpetuate disparities, particularly affecting social minorities and marginalized groups. Objective This paper introduces the “Data Artifacts Glossary”, a dynamic, open-source framework designed to systematically document and update potential biases in healthcare datasets. The aim is to provide a comprehensive tool that enhances the transparency and accuracy of AI applications in healthcare and contributes to understanding and addressing health inequities. Methods Utilizing a methodology inspired by the Delphi method, a diverse team of experts conducted iterative rounds of discussions and literature reviews. The team synthesized insights to develop a comprehensive list of bias categories and designed the glossary’s structure. The Data Artifacts Glossary was piloted using the MIMIC-IV dataset to validate its utility and structure. Results The Data Artifacts Glossary adopts a collaborative approach modeled on successful open-source projects like Linux and Python. Hosted on GitHub, it utilizes robust version control and collaborative features, allowing stakeholders from diverse backgrounds to contribute. Through a rigorous peer review process managed by community members, the glossary ensures the continual refinement and accuracy of its contents. The implementation of the Data Artifacts Glossary with the MIMIC-IV dataset illustrates its utility. It categorizes biases, and facilitates their identification and understanding. 
Conclusion The Data Artifacts Glossary serves as a vital resource for enhancing the integrity of AI applications in healthcare by providing a mechanism to recognize and mitigate dataset biases before they impact AI outputs. It not only aids in avoiding bias in model development but also contributes to understanding and addressing the root causes of health disparities.
</description>
<pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158286</guid>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>X-Mapper: fast and accurate sequence alignment via gapped x-mers</title>
<link>https://hdl.handle.net/1721.1/158285</link>
<description>X-Mapper: fast and accurate sequence alignment via gapped x-mers
Gaston, Jeffry M.; Alm, Eric J.; Zhang, An-Ni
Sequence alignment is foundational to many bioinformatic analyses. Many aligners start by splitting sequences into contiguous, fixed-length seeds, called k-mers. Alignment is faster with longer, unique seeds, but more accurate with shorter seeds avoiding mutations. Here, we introduce X-Mapper, aiming to offer high speed and accuracy via dynamic-length seeds containing gaps, called gapped x-mers. We observe 11–24-fold fewer suboptimal alignments analyzing a human reference and 3–579-fold lower inconsistency across bacterial references than other aligners, improving on 53% and 30% of reads aligned to non-target strains and species, respectively. Other seed-based analysis algorithms might benefit from gapped x-mers too.
</description>
<pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158285</guid>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular and cellular characteristics of cerebrovascular cell types and their contribution to neurodegenerative diseases</title>
<link>https://hdl.handle.net/1721.1/158284</link>
<description>Molecular and cellular characteristics of cerebrovascular cell types and their contribution to neurodegenerative diseases
Garcia, Francisco J.; Heiman, Myriam
Many diseases and disorders of the nervous system suffer from a lack of adequate therapeutics to halt or slow disease progression, and to this day, no cure exists for any of the fatal neurodegenerative diseases. In part this is due to the incredible diversity of cell types that comprise the brain, knowledge gaps in understanding basic mechanisms of disease, as well as a lack of reliable strategies for delivering new therapeutic modalities to affected areas. With the advent of single cell genomics, it is now possible to interrogate the molecular characteristics of diverse cell populations and their alterations in diseased states. More recently, much attention has been devoted to cell populations that have historically been difficult to profile with bulk single cell technologies. In particular, cell types that comprise the cerebrovasculature have become increasingly better characterized in normal and neurodegenerative disease contexts. In this review, we describe the current understanding of cerebrovasculature structure, function, and cell type diversity and its role in the mechanisms underlying various neurodegenerative diseases. We focus on human and mouse cerebrovasculature studies and discuss both origins and consequences of cerebrovascular dysfunction, emphasizing known cell type-specific vulnerabilities in neuronal and cerebrovascular cell populations. Lastly, we highlight how novel insights into cerebrovascular biology have impacted the development of modern therapeutic approaches and discuss outstanding questions in the field.
</description>
<pubDate>Wed, 29 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158284</guid>
<dc:date>2025-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Stability phase diagram of active Brownian particles</title>
<link>https://hdl.handle.net/1721.1/158283</link>
<description>Stability phase diagram of active Brownian particles
Nie, Pin; Chattoraj, Joyjit; Piscitelli, Antonio; Doyle, Patrick; Ni, Ran; Ciamarra, Massimo Pica
Phase separation into a low-density gas-like phase and a high-density liquid-like one is a common trait of biological and synthetic self-propelling particle systems. The competition between motility and stochastic forces is assumed to fix the boundary between the homogeneous and the phase-separated states. Here we demonstrate that, on the contrary, motility also promotes the homogeneous phase by allowing particles to resolve their collisions. This understanding allows us to quantitatively predict the spinodal line of hard self-propelling Brownian particles, the prototypical model exhibiting motility-induced phase separation. Furthermore, we demonstrate that frictional forces control the physical process by which motility promotes the homogeneous phase. Hence, friction emerges as an experimentally variable parameter to control the motility-induced phase diagram.
</description>
<pubDate>Wed, 01 Jan 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158283</guid>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogel Microparticle‐Templated Anti‐Solvent Crystallization of Small‐Molecule Drugs</title>
<link>https://hdl.handle.net/1721.1/158282</link>
<description>Hydrogel Microparticle‐Templated Anti‐Solvent Crystallization of Small‐Molecule Drugs
Bora, Meghali; Hsu, Myat Noe; Khan, Saif A; Doyle, Patrick S
Conventional formulation strategies for hydrophobic small‐molecule drug products frequently include mechanical milling to decrease active pharmaceutical ingredient (API) crystal size and subsequent granulation processes to produce an easily handled powder. A hydrogel‐templated anti‐solvent crystallization method is presented for the facile fabrication of microparticles containing dispersed nanocrystals of poorly soluble API. Direct crystallization within a porous hydrogel particle template yields core–shell structures in which the hydrogel core containing API nanocrystals is encased by a crystalline API shell. The process of controllable loading (up to 64% w/w) is demonstrated, and tailored dissolution profiles are achieved by simply altering the template particle size. API release is well described by a shrinking core model. Overall, the approach is a simple, scalable and potentially generalizable method that enables novel means of independently controlling both API crystallization and excipient characteristics, offering a “designer” drug particle system.
</description>
<pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158282</guid>
<dc:date>2022-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Tuning the topology of a two-dimensional catenated DNA network</title>
<link>https://hdl.handle.net/1721.1/158281</link>
<description>Tuning the topology of a two-dimensional catenated DNA network
Yadav, Indresh; Al Sulaiman, Dana; Doyle, Patrick S
Molecular topology of polymers plays a key role in determining their physical properties. We studied herein the topological effects on the static and dynamic properties of a 2D catenated network of DNA rings called a kinetoplast. Restriction enzymes that cleave DNA at sequence-specific sites are used to selectively cut and remove rings from the network and hence tune the molecular topology while maintaining overall structural integrity. We find that topology has minimal effects over the spatial extension of the 2D network; however, it significantly affects the relaxation behavior. The shape fluctuations of the network are governed by two distinct characteristic time scales attributed to the thermal fluctuations and confinement of the network. The relationship between the time constant of thermal relaxation and the amplitude of anisotropy fluctuations yields a universal scaling. Interestingly, this scaling is independent of the detailed arrangements of rings and/or perforation within the catenated networks. This study provides a route to tune the elastic properties of 2D catenated DNA networks and other polymeric materials by modifying the underlying topology in a rational and highly controllable manner.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158281</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Crystalline Antibody‐Laden Alginate Particles: A Platform for Enabling High Concentration Subcutaneous Delivery of Antibodies</title>
<link>https://hdl.handle.net/1721.1/158280</link>
<description>Crystalline Antibody‐Laden Alginate Particles: A Platform for Enabling High Concentration Subcutaneous Delivery of Antibodies
Erfani, Amir; Schieferstein, Jeremy M; Reichert, Paul; Narasimhan, Chakravarthy N; Pastuskovas, Cinthia; Parab, Vaishali; Simmons, Denarra; Yang, Xiaoyu; Shanker, Apoorv; Hammond, Paula; Doyle, Patrick S
Subcutaneous (SC) administration is a desired route for monoclonal antibodies (mAbs). However, formulating mAbs for small injection volumes at high concentrations with suitable stability and injectability is a significant challenge. Here, this work presents a platform technology that combines the stability of crystalline antibodies with the injectability and tunability of soft hydrogel particles. Composite alginate hydrogel particles are generated via a gentle centrifugal encapsulation process which avoids the use of chemical reactions or an external organic phase. A crystalline suspension of anti‐programmed cell death protein 1 (PD‐1) antibody (pembrolizumab) is utilized as a model therapeutic antibody. Crystalline forms of the mAb encapsulated in the hydrogel particles lead to stable, high concentration, and injectable formulations. Formulation concentrations as high as 315 mg mL⁻¹ antibody are achieved with encapsulation efficiencies in the range of 89–97%, with no perceivable increase in the number of antibody aggregates. Bioanalytical studies confirm the superior maintained quality of the antibody in comparison with formulation approaches involving organic phases and chemical reactions. This work illustrates tuning the alginate particles’ disintegration by using partially oxidized alginates. Crystalline mAb‐laden particles are evaluated for their biocompatibility using cell‐based in vitro assays. Furthermore, the pharmacokinetics (PK) of the subcutaneously delivered human anti‐PD‐1 mAb in crystalline antibody‐laden alginate hydrogel particles in Wistar rats is evaluated.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158280</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Noisy-channel language comprehension in aphasia: A Bayesian mixture modeling approach</title>
<link>https://hdl.handle.net/1721.1/158279</link>
<description>Noisy-channel language comprehension in aphasia: A Bayesian mixture modeling approach
Ryskin, Rachel; Gibson, Edward; Kiran, Swathi
Individuals with “agrammatic” receptive aphasia have long been known to rely on semantic plausibility rather than syntactic cues when interpreting sentences. In contrast to early interpretations of this pattern as indicative of a deficit in syntactic knowledge, a recent proposal views agrammatic comprehension as a case of “noisy-channel” language processing with an increased expectation of noise in the input relative to healthy adults. Here, we investigate the nature of the noise model in aphasia and whether it is adapted to the statistics of the environment. We first replicate findings that a) healthy adults (N = 40) make inferences about the intended meaning of a sentence by weighing the prior probability of an intended sentence against the likelihood of a noise corruption and b) their estimate of the probability of noise increases when there are more errors in the input (manipulated via exposure sentences). We then extend prior findings that adults with chronic post-stroke aphasia (N = 28) and healthy age-matched adults (N = 19) similarly engage in noisy-channel inference during comprehension. We use a hierarchical latent mixture modeling approach to account for the fact that rates of guessing are likely to differ between healthy controls and individuals with aphasia and capture individual differences in the tendency to make inferences. We show that individuals with aphasia are more likely than healthy controls to draw noisy-channel inferences when interpreting semantically implausible sentences, even when group differences in the tendency to guess are accounted for. While healthy adults rapidly adapt their inference rates to an increase in noise in their input, whether individuals with aphasia do the same remains equivocal. Further investigation of comprehension through a noisy-channel lens holds promise for a parsimonious understanding of language processing in aphasia and may suggest potential avenues for treatment.
</description>
<pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158279</guid>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Validation of a High-Fidelity Left Atrial Cardiac Simulator for the Study and Advancement of Left Atrial Appendage Occlusion</title>
<link>https://hdl.handle.net/1721.1/158278</link>
<description>Design and Validation of a High-Fidelity Left Atrial Cardiac Simulator for the Study and Advancement of Left Atrial Appendage Occlusion
Mendez, Keegan; Singh, Manisha; Willoughby, Patrick; Ncho, Beatrice; Liao, Aileen; Su, Susan; Lim, Megan; Lee, Elijah; Alkhouli, Mohamad; Alarouri, Hasan; Roche, Ellen T.
Purpose Atrial fibrillation (AF) is the most common chronic cardiac arrhythmia that increases the risk of stroke, primarily due to thrombus formation in the left atrial appendage (LAA). Left atrial appendage occlusion (LAAO) devices offer an alternative to oral anticoagulation for stroke prevention. However, the complex and variable anatomy of the LAA presents significant challenges to device design and deployment. Current benchtop models fail to replicate both anatomical variability and physiological hemodynamics, limiting their utility. This study introduces a novel left atrial cardiac simulator that incorporates patient-derived LAA models within a benchtop circulatory flow loop, enabling high-fidelity LAAO device testing and development. Methods A rigid, patient-derived left atrium (LA) model was 3D printed from segmented MRI data and modified to accommodate attachment of patient-specific LAA models. A library of LAA geometries was fabricated using silicone casting techniques to replicate the mechanical properties of native tissue. The LA-LAA model was integrated into a circulatory flow loop equipped with a pulsatile pump, pressure sensors, and flow probes, allowing real-time hemodynamic analysis. System tunability was demonstrated by varying heart rate, stroke volume, resistance, and compliance to simulate physiological and pathological conditions. Results The simulator accurately replicated LA pressure and flow waveforms, closely approximating physiological conditions. Changes in heart rate, stroke volume, and compliance effectively modulated left atrial pressure (LAP) and LA inflow before and after LAAO. Distinct pressure and flow waveforms were observed with different LAA geometries. Hemodynamic analysis revealed increased left atrial pulse pressure after occlusion, with the greatest increase occurring after complete exclusion of the LAA. 
The simulator facilitated the evaluation of LAAO device performance, including metrics such as seal and peri-device leak (PDL), and served as an effective training tool for iterative device deployment and recapture with visual and imaging-guided feedback. Conclusions The left atrial cardiac simulator offers a highly tunable and realistic platform for testing and developing LAAO devices. It also serves as an effective procedural training tool, allowing for the simulation of patient-specific anatomical and hemodynamic conditions. By enabling these advanced simulations, the simulator enhances pre-procedural planning, device sizing, and placement. This innovation represents a significant step toward advancing personalized medicine in atrial fibrillation management and improving LAAO outcomes.
</description>
<pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158278</guid>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Search for long-lived heavy neutral leptons in proton-proton collision events with a lepton-jet pair associated with a secondary vertex at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/158277</link>
<description>Search for long-lived heavy neutral leptons in proton-proton collision events with a lepton-jet pair associated with a secondary vertex at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
A search for long-lived heavy neutral leptons (HNLs) using proton-proton collision data corresponding to an integrated luminosity of 138 fb−1 collected at √s = 13 TeV with the CMS detector at the CERN LHC is presented. Events are selected with a charged lepton originating from the primary vertex associated with the proton-proton interaction, as well as a second charged lepton and a hadronic jet associated with a secondary vertex that corresponds to the semileptonic decay of a long-lived HNL. No excess of events above the standard model expectation is observed. Exclusion limits at 95% confidence level are evaluated for HNLs that mix with electron and/or muon neutrinos. Limits are presented in the mass range of 1–16.5 GeV, with excluded square mixing parameter values reaching as low as 2 × 10−7. For masses above 11 GeV, the presented limits exceed all previous results in the semileptonic decay channel, and for some of the considered scenarios are the strongest to date.
</description>
<pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158277</guid>
<dc:date>2025-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Cryptographic Censorship</title>
<link>https://hdl.handle.net/1721.1/158276</link>
<description>Cryptographic Censorship
Engelhardt, Netta; Folkestad, Åsmund; Levine, Adam; Verheijden, Evita; Yang, Lisa
We formulate and take two large strides towards proving a quantum version of the weak cosmic censorship conjecture. We first prove “Cryptographic Censorship”: a theorem showing that when the time evolution operator of a holographic CFT is approximately pseudorandom (or Haar random) on some code subspace, then there must be an event horizon in the corresponding bulk dual. This result provides a general condition that guarantees (in finite time) event horizon formation, with minimal assumptions about the global spacetime structure. Our theorem relies on an extension of a recent quantum learning no-go theorem and is proved using new techniques of pseudorandom measure concentration. To apply this result to cosmic censorship, we separate singularities into classical, semi-Planckian, and Planckian types. We illustrate that classical and semi-Planckian singularities are compatible with approximately pseudorandom CFT time evolution; thus, if such singularities are indeed approximately pseudorandom, by Cryptographic Censorship, they cannot exist in the absence of event horizons. This result provides a sufficient condition guaranteeing that seminal holographic results on quantum chaos and thermalization, whose general applicability relies on typicality of horizons, will not be invalidated by the formation of naked singularities in AdS/CFT.
</description>
<pubDate>Thu, 23 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158276</guid>
<dc:date>2025-01-23T00:00:00Z</dc:date>
</item>
<item>
<title>Recycling of Tantalum Capacitors Via Sulfide Chemistry</title>
<link>https://hdl.handle.net/1721.1/158275</link>
<description>Recycling of Tantalum Capacitors Via Sulfide Chemistry
Boury, Charles; Allanore, Antoine
The fabrication of tantalum capacitors represents more than 35 pct of the total consumption of metallic tantalum, with increasing demand from the high-technology sector. Tantalum capacitors contain a large concentration of tantalum, and the absence of niobium leads to interesting economic outcomes for potential recycling processes. The article discusses such recycling using sulfur, where an AB2O6 crystal structure analogous to the orthorhombic columbite-tantalite series is sulfidized. Differences in sulfide affinity between the A (Mn, Fe) and B (Nb, Ta) elements effectively separate the ternary oxide, capitalizing on their distinct chemical properties without recourse to fluoride-based acids. To bypass the fluoride-based chemistry process entirely, a proof of concept of tantalum disulfide (TaS2) production via sulfidation of Ta2O5 and its subsequent metallic reduction via molten sulfide electrolysis are also presented.
</description>
<pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158275</guid>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</item>
<item>
<title>Convergence to Bohmian Mechanics in a de Broglie-Like Pilot-Wave System</title>
<link>https://hdl.handle.net/1721.1/158274</link>
<description>Convergence to Bohmian Mechanics in a de Broglie-Like Pilot-Wave System
Darrow, David
Bohmian mechanics supplements the quantum wavefunction with deterministic particle trajectories, offering an alternate, dynamical language for quantum theory. However, the Bohmian wavefunction evolves independently of these trajectories, and is thus unaffected by the observable properties of the system. While this property is widely assumed necessary to ensure agreement with quantum mechanics, much work has recently been dedicated to understanding classical pilot-wave systems, which feature a two-way coupling between particle and wave. These systems—including the “walking droplet” system of Couder and Fort (Couder and Fort (2006) Phys. Rev. Lett. 97:154101) and its various abstractions (Dagan and Bush (2020) CR Mecanique 348:555–571; Durey and Bush (2020) Front. Phys. 8:300; (2021) Chaos 31:033136; Darrow and Bush (2024) Symmetry 16:149)—allow us to investigate the limits of classical systems and offer a touchstone between quantum and classical dynamics. In this work, we present a general result that bridges Bohmian mechanics with this classical pilot-wave theory. Namely, Darrow and Bush ((2024) Symmetry 16:149) recently introduced a Lagrangian pilot-wave framework to study quantum-like behaviours in classical systems; with a particular choice of particle-wave coupling, they recover key dynamics hypothesised in de Broglie’s early double-solution theory (de Broglie (1970) Foundations Phys. 1:5–15). We here show that, with a different choice of coupling, their de Broglie-like system reduces exactly to single-particle Bohmian mechanics in the non-relativistic limit. Our result clarifies that, while multi-particle entanglement is impossible to replicate in general with local, classical theories, no such restriction exists for single-particle quantum mechanics. Moreover, connecting with the previous work of Darrow and Bush, our work demonstrates that de Broglie’s and Bohm’s theories can be connected naturally within a single Lagrangian framework. 
Finally, we present an application of the present work in developing a single-particle analogue for position measurement in a de Broglie-like setting.
</description>
<pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158274</guid>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</item>
<item>
<title>First and last as superlatives of before and after</title>
<link>https://hdl.handle.net/1721.1/158273</link>
<description>First and last as superlatives of before and after
Alstott, Johanna
First and last have been variously described as ordinals, superlatives, or both. These descriptions are generally not accompanied by extensive argumentation, and those who label first and last as superlatives do not present and argue for a particular decomposition. Thus, first and last’s status as ordinals vs. superlatives and their internal composition remain open issues. In this paper, I argue that first and last are superlatives, in particular the superlative forms of before and after. To argue that first and last are superlatives, I show that they pattern like superlatives and unlike ordinals (second, third, etc.) with respect to plurality, modifier choice, “modal superlatives” with possible, and the ordinal superlative construction. I next argue that the relations between before and first and between after and last show themselves overtly in many languages and in English paraphrases; furthermore, first and last semantically differ in ways that before and after have also been noted to differ. While I acknowledge one observation that prima facie counterexemplifies these claims, I argue that it constitutes a genuine counterexample only if one formalizes my decomposition of first/last using a standard Heimian (Heim in Notes on superlatives. Manuscript, MIT (1999)) entry for -est. The counterexample, which concerns the “upstairs de dicto” reading of superlatives, ceases to be an issue if one treats before and after as simplex and formalizes my decomposition using a Containment Hypothesis-inspired semantics (Bobaljik in Universals in comparative morphology: Suppletion, superlatives, and the structure of words, MIT Press, Cambridge, 2012) for -est.
</description>
<pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158273</guid>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrogel-enabled, local administration and combinatorial delivery of immunotherapies for cancer treatment</title>
<link>https://hdl.handle.net/1721.1/158266</link>
<description>Hydrogel-enabled, local administration and combinatorial delivery of immunotherapies for cancer treatment
Erfani, Amir; Diaz, Antonio E; Doyle, Patrick S
Throughout the last decade, interventions that engineer the immune system, collectively called immunotherapy, have revolutionized the fields of oncology and autoimmune disease. Researchers are developing platforms that enable new modes of immunotherapy and move beyond current limitations by incorporating non-intravenous delivery strategies. Recent advances in immunotherapy include the use of chemokines to direct immune cells into tumors, alternative combinatorial therapies, and oncolytic viruses. Similarly, there have been significant breakthroughs in the design and understanding of new biocompatible hydrogel-based materials for diverse biomedical applications, including large molecule drug delivery. In this review, we discuss how hydrogel platforms can enable modes of immunotherapy that are otherwise not feasible. Despite the many pre-clinical successes of hydrogels for the delivery of immunotherapies for treatment of cancer, hydrogels still face challenges in reaching the clinic and gaining approval. Herein we examine the application of hydrogels in high concentration subcutaneous, intratumoral, peritumoral, intraperitoneal, intracranial, and pulmonary delivery of immunotherapies. By analyzing the results of many pre-clinical hydrogel-enabled immunotherapy studies, we show that local hydrogel delivery is a promising approach to increase the efficacy and decrease systemic toxicities of immunotherapies. We also discuss the application of hydrogels for synergistic combinatorial immunotherapy. Furthermore, we summarize the advancements and obstacles in local intratumoral administration and sustained release of immunotherapy-loaded hydrogels. Finally, we discuss challenges in translational research, clinical development, and manufacturing of hydrogels which must be addressed to advance the field.
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158266</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Injectable hydrogel particles for amorphous solid formulation of biologics</title>
<link>https://hdl.handle.net/1721.1/158265</link>
<description>Injectable hydrogel particles for amorphous solid formulation of biologics
Erfani, Amir; Reichert, Paul; Narasimhan, Chakravarthy N; Doyle, Patrick S
The fast pace of breakthroughs in cancer immunotherapy, combined with the new paradigm of moving toward high-concentration dosages and combinatorial treatments, is generating new challenges in the formulation of biologics. To address these challenges, we describe a method of formulation that enables high-concentration injectable and stable formulation of biologics as amorphous solids in aqueous suspension. This technology combines the benefits of liquid formulation with the stability of solid formulation and eliminates the need for drying and reconstitution. This widely applicable formulation integrates the amorphous solid forms of antibodies with the injectability, lubricity, and tunability of soft alginate hydrogel particles using a minimal process. The platform was evaluated for anti-PD-1 antibody pembrolizumab and human immunoglobulin G at concentrations up to 300 mg/mL with confirmed quality after release. The soft nature of the hydrogel matrix allowed packing the particles to high volume fractions.
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158265</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Metrology of Individual Small Viruses</title>
<link>https://hdl.handle.net/1721.1/158264</link>
<description>Metrology of Individual Small Viruses
Li, Kun; Shah, Arjav; Sharma, Rajesh Kumar; Adkins, Raymond; Marjanovic, Tihomir; Doyle, Patrick S; Garaj, Slaven
Viruses come in various shapes and sizes, and understanding their morphology is central to understanding their activity and function. The need for fast recognition and real‐time fingerprinting methods for pathogenic viruses is a critical bottleneck in implementing many diagnostic and therapeutic techniques. In this work, nanopore tomography (NT) is implemented for fast measurements of the characteristic dimensions of viruses and the optimal operating conditions are explored. Using a small filamentous bacteriophage as a model, it is demonstrated that NT can detect geometrical features in a few‐nanometer regime, with high throughput and accuracy, in aqueous conditions. The instrumental parameters are optimized to obtain virus diameter measurements that are robust to the uncertainties of the external parameters. Furthermore, NT is critically compared to various single‐particle imaging techniques, with a particular emphasis on emerging helium ion microscopy (HIM). It is shown that, with proper operating procedures, HIM can reach a nanometer‐scale resolution in viral metrology, while retaining a high throughput second only to NT. The high throughput of both techniques can foster sufficient statistics for a precise exploration of viral heterogeneity.
</description>
<pubDate>Wed, 01 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158264</guid>
<dc:date>2023-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Deep learning initialized compressed sensing (Deli-CS) in volumetric spatio-temporal subspace reconstruction</title>
<link>https://hdl.handle.net/1721.1/158263</link>
<description>Deep learning initialized compressed sensing (Deli-CS) in volumetric spatio-temporal subspace reconstruction
Schauman, S. S.; Iyer, Siddharth S.; Sandino, Christopher M.; Yurt, Mahmut; Cao, Xiaozhi; Liao, Congyu; Ruengchaijatuporn, Natthanan; Chatnuntawech, Itthi; Tong, Elizabeth; Setsompop, Kawin
Object Spatio-temporal MRI methods offer rapid whole-brain multi-parametric mapping, yet they are often hindered by prolonged reconstruction times or prohibitively burdensome hardware requirements. The aim of this project is to reduce reconstruction time using deep learning. Materials and methods This study focuses on accelerating the reconstruction of volumetric multi-axis spiral projection MRF, aiming for whole-brain T1 and T2 mapping, while ensuring a streamlined approach compatible with clinical requirements. To optimize reconstruction time, the traditional method is first revamped with a memory-efficient GPU implementation. Deep Learning Initialized Compressed Sensing (Deli-CS) is then introduced, which initiates iterative reconstruction with a DL-generated seed point, reducing the number of iterations needed for convergence. Results The full reconstruction process for volumetric multi-axis spiral projection MRF is completed in just 20 min compared to over 2 h for the previously published implementation. Comparative analysis demonstrates Deli-CS’s efficiency in expediting iterative reconstruction while maintaining high-quality results. Discussion By offering a rapid warm start to the iterative reconstruction algorithm, this method substantially reduces processing time while preserving reconstruction quality. Its successful implementation paves the way for advanced spatio-temporal MRI techniques, addressing the challenge of extensive reconstruction times and ensuring efficient, high-quality imaging in a streamlined manner.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158263</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Impact of lesion preparation-induced calcified plaque defects in vascular intervention for atherosclerotic disease: in silico assessment</title>
<link>https://hdl.handle.net/1721.1/158262</link>
<description>Impact of lesion preparation-induced calcified plaque defects in vascular intervention for atherosclerotic disease: in silico assessment
Sogbadji, Jonas; Kadry, Karim; Poletti, Gianluca; Berti, Francesca; Edelman, Elazer R.; Nezami, Farhad R.
Percutaneous coronary interventions in highly calcified atherosclerotic lesions are challenging due to the high mechanical stiffness that significantly restricts stent expansion. Intravascular lithotripsy (IVL) is a novel vessel preparation technique with the potential to improve interventional outcomes by inducing microscopic and macroscopic cracks to enhance stent expansion. However, the exact mechanism of action for IVL is poorly understood, and it remains unclear whether the improvement in stent expansion is caused by either the macro-cracks allowing the vessel to open or the micro-cracks altering the bulk material properties. In silico models offer a robust means to examine (a) diverse lesion morphologies, (b) a range of lesion modifications to address these deficiencies, and (c) the correlation between calcium morphology alteration and improved stenting outcomes. These models also help identify which lesions would benefit the most from IVL. In this study, we develop an in silico model of stent expansion to study the effect of macro-crack morphology on interventional outcomes in clinically inspired geometries. Larger IVL-induced defects promote more post-stent lumen gain. IVL seems to induce better stenting outcomes for large calcified lesions. IVL defects that split the calcified plaque into two parts are the most beneficial for stenting angioplasty, regardless of the calcified plaque size. Location of the IVL defect does not seem to matter with respect to lumen gain. These findings underscore the potential of IVL to enhance lesion compliance and improve clinical outcomes in PCI. The macroscopic defects induced by IVL seem to have a substantial impact on post-stent outcomes.
</description>
<pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158262</guid>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>First demonstration of a TES based cryogenic Li2MoO4 detector for neutrinoless double beta decay search</title>
<link>https://hdl.handle.net/1721.1/158261</link>
<description>First demonstration of a TES based cryogenic Li2MoO4 detector for neutrinoless double beta decay search
Bratrud, G.; Chang, C. L.; Chen, R.; Cudmore, E.; Figueroa-Feliciano, E.; Hong, Z.; Kennard, K. T.; Lewis, S.; Lisovenko, M.; Mateo, L. O.; Novati, V.; Novosad, V.; Oliveri, E.; Ren, R.; Scarpaci, J. A.; Schmidt, B.; Wang, G.; Winslow, L.; Yefremenko, V. G.; Zhang, J.
Cryogenic calorimetric experiments to search for neutrinoless double-beta decay (0νββ) are highly competitive, scalable and versatile in isotope choice. The largest planned detector array, CUPID, is comprised of about 1500 individual Li2MoO4 detector modules with 100Mo-enriched molybdenum, with a further scale up envisioned for a follow-up experiment (CUPID-1T). In this article, we present a novel detector concept targeting this second stage with a low impedance TES based readout for the Li2MoO4 absorber that is easily mass-produced and lends itself to a multiplexed readout. We present the detector design and results from a first prototype detector operated at the NEXUS shallow underground facility at Fermilab. The detector is a 2-cm-side cube with 21 g mass that is strongly thermally coupled to its readout chip to allow rise-times of ∼0.5 ms. This design is more than one order of magnitude faster than present NTD based detectors and is hence expected to effectively mitigate backgrounds generated through the pile-up of two independent two-neutrino decay events coinciding close in time. Together with a baseline resolution of 1.95 keV (FWHM), these performance parameters extrapolate to a background index from pile-up as low as 5 × 10−6 counts/keV/kg/yr in CUPID-size crystals. The detector was calibrated up to the MeV region, showing sufficient dynamic range for 0νββ searches. In combination with a SuperCDMS HVeV detector, this setup also allowed us to perform a precision measurement of the scintillation time constants of Li2MoO4, which showed a primary component with a fast O(20 μs) time scale.
</description>
<pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158261</guid>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of inclusive and differential cross sections of single top quark production in association with a W boson in proton-proton collisions at √s = 13.6 TeV</title>
<link>https://hdl.handle.net/1721.1/158260</link>
<description>Measurement of inclusive and differential cross sections of single top quark production in association with a W boson in proton-proton collisions at √s = 13.6 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
The first measurement of the inclusive and normalised differential cross sections of single top quark production in association with a W boson in proton-proton collisions at a centre-of-mass energy of 13.6 TeV is presented. The data were recorded with the CMS detector at the LHC in 2022, and correspond to an integrated luminosity of 34.7 fb−1. The analysed events contain one muon and one electron in the final state. For the inclusive measurement, multivariate discriminants exploiting the kinematic properties of the events are used to separate the signal from the dominant top quark-antiquark production background. A cross section of 82.3 ± 2.1 (stat) +9.9 −9.7 (syst) ± 3.3 (lumi) pb is obtained, consistent with the predictions of the standard model. A fiducial region is defined according to the detector acceptance to perform the differential measurements. The resulting differential distributions are unfolded to particle level and show good agreement with the predictions at next-to-leading order in perturbative quantum chromodynamics.
</description>
<pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158260</guid>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Amplitude analysis of B+ → ψ(2S)K+π+π− decays</title>
<link>https://hdl.handle.net/1721.1/158259</link>
<description>Amplitude analysis of B+ → ψ(2S)K+π+π− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
The first full amplitude analysis of B+ → ψ(2S)K+π+π− decays is performed using proton-proton collision data corresponding to an integrated luminosity of 9 fb−1 recorded with the LHCb detector. The rich K+π+π− spectrum is studied and the branching fractions of the resonant substructure associated with the prominent K1(1270)+ contribution are measured. The data cannot be described by conventional strange and charmonium resonances only. An amplitude model with 53 components is developed comprising 11 hidden-charm exotic hadrons. New production mechanisms for charged charmonium-like states are observed. Significant resonant activity with spin-parity JP = 1+ in the ψ(2S)π+ system is confirmed and a multi-pole structure is demonstrated. The spectral decomposition of the ψ(2S)π+π− invariant-mass structure, dominated by X0 → ψ(2S)ρ(770)0 decays, broadly resembles the J/ψϕ spectrum observed in B+ → J/ψϕK+ decays. Exotic ψ(2S)K+π− resonances are observed for the first time.
</description>
<pubDate>Wed, 08 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158259</guid>
<dc:date>2025-01-08T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of multidifferential cross sections for dijet production in proton–proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/158258</link>
<description>Measurement of multidifferential cross sections for dijet production in proton–proton collisions at √s = 13 TeV
A measurement of the dijet production cross section is reported based on proton–proton collision data collected in 2016 at √s = 13 TeV by the CMS experiment at the CERN LHC, corresponding to an integrated luminosity of up to 36.3 fb−1. Jets are reconstructed with the anti-kT algorithm for distance parameters of R = 0.4 and 0.8. Cross sections are measured double-differentially (2D) as a function of the largest absolute rapidity |y|max of the two jets with the highest transverse momenta pT and their invariant mass m1,2, and triple-differentially (3D) as a function of the rapidity separation y*, the total boost yb, and either m1,2 or the average pT of the two jets. The cross sections are unfolded to correct for detector effects and are compared with fixed-order calculations derived at next-to-next-to-leading order in perturbative quantum chromodynamics. The impact of the measurements on the parton distribution functions and the strong coupling constant at the mass of the Z boson is investigated, yielding a value of αS(mZ) = 0.1179 ± 0.0019.
</description>
<pubDate>Fri, 24 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158258</guid>
<dc:date>2025-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Orthogonal Gelations to Synthesize Core–Shell Hydrogels Loaded with Nanoemulsion‐Templated Drug Nanoparticles for Versatile Oral Drug Delivery (Adv. Healthcare Mater. 31/2023)</title>
<link>https://hdl.handle.net/1721.1/158257</link>
<description>Orthogonal Gelations to Synthesize Core–Shell Hydrogels Loaded with Nanoemulsion‐Templated Drug Nanoparticles for Versatile Oral Drug Delivery (Adv. Healthcare Mater. 31/2023)
Attia, Lucas; Chen, Liang‐Hsun; Doyle, Patrick S
Hydrophobic active pharmaceutical ingredients (APIs) are ubiquitous in the drug development pipeline, but their poor bioavailability often prevents their translation into drug products. Industrial processes to formulate hydrophobic APIs are expensive, difficult to optimize, and not flexible enough to incorporate customizable drug release profiles into drug products. Here, a novel, dual-responsive gelation process that exploits orthogonal thermo-responsive and ion-responsive gelations is introduced. This one-step “dual gelation” synthesizes core–shell (methylcellulose-alginate) hydrogel particles and encapsulates drug-laden nanoemulsions in the hydrogel matrices. In situ crystallization templates drug nanocrystals inside the polymeric core, while a kinetically stable amorphous solid dispersion is templated in the shell. Drug release is explored as a function of particle geometry, and programmable release is demonstrated for various therapeutic applications including delayed pulsatile release and sequential release of a model fixed-dose combination drug product of ibuprofen and fenofibrate. Independent control over drug loading between the shell and the core is demonstrated. This formulation approach is shown to be a flexible process to develop drug products with biocompatible materials, facile synthesis, and precise drug release performance. This work suggests and applies a novel method to leverage orthogonal gel chemistries to generate functional core–shell hydrogel particles.
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158257</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Zwitterionic Hydrogel‐Based Heterogeneous Fenton Catalyst for Water Treatment</title>
<link>https://hdl.handle.net/1721.1/158256</link>
<description>A Zwitterionic Hydrogel‐Based Heterogeneous Fenton Catalyst for Water Treatment
Gokhale, Devashish; Chen, Ian; Wu, Wan‐Ni; Monne Gagnaire, Arthur; Doyle, Patrick S
Persistent organic pollutants (POPs), including xenoestrogens and polyfluoroalkyl substances (PFAS), demand urgent global intervention. Fenton oxidation, catalyzed by iron ions, offers a cost-effective means to degrade POPs. However, numerous challenges like acid dependency, catalyst loss, and toxic waste generation hinder practical application. Efforts to create long-lasting heterogeneous Fenton catalysts, capable of simultaneously eliminating acid requirements, sustaining rapid kinetics, and retaining iron efficiently, have been unsuccessful. This study introduces an innovative heterogeneous zwitterionic hydrogel-based Fenton catalyst, surmounting these challenges in a cost-effective and scalable manner. The hydrogel, hosting individually complexed iron ions in a porous scaffold, exhibits substantial effective surface area and kinetics akin to homogeneous Fenton reactions. Complexed ions within the hydrogel can initiate Fenton degradation at neutral pH, eliminating acid additions. Simultaneously, the zwitterionic hydrogel scaffold, chosen for its resistance to Fenton oxidation, forms strong bonds with iron ions, enabling prolonged reuse. Diverging from existing designs, the catalyst proves compatible with UV-Fenton processes and achieves rapid self-regeneration during operation, offering a promising solution for the efficient and scalable degradation of POPs. The study underscores the efficacy of the approach by demonstrating the swift degradation of three significant contaminants—xenoestrogens, pesticides, and PFAS—across multiple cycles at trace concentrations.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158256</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale topological analysis of kinetoplast DNA &lt;i&gt;via&lt;/i&gt; high-resolution AFM</title>
<link>https://hdl.handle.net/1721.1/158255</link>
<description>Multiscale topological analysis of kinetoplast DNA &lt;i&gt;via&lt;/i&gt; high-resolution AFM
Diggines, Bradley; Whittle, Sylvia; Yadav, Indresh; Holmes, Elizabeth P; Rollins, Daniel E; Catley, Thomas E; Doyle, Patrick S; Pyne, Alice LB
Kinetoplast DNA is a complex nanoscale network, naturally assembled from thousands of interconnected DNA circles within the mitochondrion of certain parasites. Despite the relevance of this molecule to parasitology and the recent discovery of tuneable mechanics, its topology remains highly contested. Here we present a multiscale analysis of the structure of kDNA using a combination of high-resolution atomic force microscopy and custom-designed image analysis protocols. By capturing a notably large set of high-resolution images, we are able to look beyond individual kDNA variations and quantify population properties across several length scales. Within the sample, geometric fluctuations of area and mean curvature are observed, corresponding with previous in vitro measurements. These translate to localised variations in density, with a sample-wide decrease in DNA density from the outer rim of the molecule to the centre and an increase in pore size. Nodes were investigated in a single molecule study, and their estimated connectivity significantly exceeded mean valence, with a high dependence on their position in the network. While node separation was approximately half the minicircle circumference, it followed a strong bimodal distribution, suggesting more complex underlying behaviour. Finally, upon selective digestion of the network, breakdown of the fibril-cap heterogeneity was observed, with molecules expanding less upon immobilisation on the mica surface. Additionally, preferential digestion was seen in localised areas of the network, increasing pore size disproportionately. Overall, the combination of high-resolution AFM and single molecule image analysis provides a promising method for the continued investigation of complex nanoscale structures. These findings support the ongoing characterisation of kDNA topology to aid understanding of its biological and mechanical phenomena.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158255</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Quantitative and spatially resolved detection of multiplexed microRNA from plant tissue via hybridization to hydrogel-bound DNA probes in nanoliter well arrays</title>
<link>https://hdl.handle.net/1721.1/158254</link>
<description>Quantitative and spatially resolved detection of multiplexed microRNA from plant tissue via hybridization to hydrogel-bound DNA probes in nanoliter well arrays
Fang, Jennifer; Doyle, Patrick S
Understanding complex regulatory networks in plant systems requires elucidating the roles of various gene regulators across a spatial landscape. MicroRNA are key regulators that impart high information value through their tissue specificity and stability when expression patterns are used to evaluate network outcomes. However, current techniques that enable spatial multiplexing and quantitation of microRNA are limited primarily to mammalian systems. Here, we present a method to spatially resolve and quantify multiple endogenous microRNA in situ using ethanol-fixed, paraffin-embedded model plant species. This method utilizes target-specific microRNA capture along with universal ligation and labelling, all within functionalized hydrogel posts containing DNA probes in nanoliter well arrays. We demonstrate the platform’s multiplexing capabilities by analyzing three endogenous microRNA in Arabidopsis thaliana rosettes, whose unique expression patterns provide useful answers to fundamental questions of plant growth and development. The spatial tissue technique is also validated using non-spatial small RNA assays to demonstrate the versatility of the well array platform. Our new platform expands the toolkit of spatial omics technologies for plants.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158254</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>From burst to controlled release: using hydrogel crosslinking chemistry to tune release of micro-crystalline active pharmaceutical ingredients</title>
<link>https://hdl.handle.net/1721.1/158253</link>
<description>From burst to controlled release: using hydrogel crosslinking chemistry to tune release of micro-crystalline active pharmaceutical ingredients
Manghnani, Purnima N; Nelson, Arif Z; Wong, Kelvin; Lee, Yi Wei; Khan, Saif A; Doyle, Patrick S
Hydrogels have been widely studied as substrates for drug delivery and tissue engineering owing to their biocompatibility and ability to swell in aqueous media. Encapsulation of lipophilic active pharmaceutical ingredients (API) as crystalline micro-/nanoparticles within hydrogel formulations has shown promise for improving their bioavailability and achieving high drug load. Despite the size reduction of the API within the hydrogel mesh, the bioavailability of these formulations is largely governed by the inherent ability of the hydrogel polymer backbone to release the API. In this work, Michael addition-based polyethylene glycol (PEG) hydrogels are developed for micro-crystalline fenofibrate (Fen) encapsulation. Using a parallelized step emulsification device, API nanoemulsion (NE) loaded micro-hydrogels are fabricated and subsequently subjected to anti-solvent extraction for API crystallization. The bimolecular nature of the Michael addition reaction provides modular incorporation of crosslinking functional groups, leading to precise temporal control over hydrogel degradation and thereby offering a sensitive handle on the release of micro-crystalline fenofibrate. By merely changing the chemical identity of the hydrogel cross-link, complete Fen release could be tuned from 4 hours to 10 days. Furthermore, the interaction of crystallizing Fen and PEG within the micro-hydrogel environment led to eutectic formation. This unique feature offered a second handle on the Fen release from the composite micro-hydrogels.
</description>
<pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158253</guid>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Injectable sustained-release hydrogel for high-concentration antibody delivery</title>
<link>https://hdl.handle.net/1721.1/158252</link>
<description>Injectable sustained-release hydrogel for high-concentration antibody delivery
Zheng, Talia; Doyle, Patrick S
There is increasing interest in subcutaneous (SC) delivery as an alternative to traditional intravenous (IV) delivery for immunotherapies and other advanced therapies. High-concentration formulations of antibodies are needed to meet the limited-volume requirements of SC delivery. Despite this need, there remain challenges in delivering stable and injectable antibodies at these high concentrations. Hydrogel encapsulation of amorphous solid antibodies has been shown to improve the stability and injectability of high-concentration antibody formulations. However, the antibody is quickly released from the hydrogel due to the material's porosity, leading to rapid, uncontrolled drug release kinetics undesirable for the drug's efficacy and safety. In this paper, we propose a dual-network composite hydrogel which leverages interactions between the two polymer networks to achieve controlled release of the antibody. We load the solid form of the antibody at high concentrations within alginate hydrogel microparticles, which are then suspended in a thermogelling methylcellulose solution to formulate the in situ gelling composite hydrogel. By facile chemical modification of the alginate to tune the microparticles’ gel properties and alginate–methylcellulose interactions, we demonstrate how the composite system can delay release of the drug in a tunable manner and achieve a near-zero-order release profile for improved therapeutic efficacy. We show acceptable injectability properties of the composite hydrogel at high antibody concentrations, highlighting the functionalities of dual-network encapsulation. We envision this composite system being applicable for the sustained delivery of various therapeutic protein forms, especially for high-loading SC formulations.
</description>
<pubDate>Tue, 21 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158252</guid>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Search for B(s)*⁰ → μ⁺μ⁻ in Bc⁺ → π⁺μ⁺μ⁻ decays</title>
<link>https://hdl.handle.net/1721.1/158251</link>
<description>Search for B(s)*⁰ → μ⁺μ⁻ in Bc⁺ → π⁺μ⁺μ⁻ decays
LHCb Collaboration
A search for the very rare B*⁰ → μ⁺μ⁻ and Bs*⁰ → μ⁺μ⁻ decays is conducted by analysing the Bc⁺ → π⁺μ⁺μ⁻ process. The analysis uses proton-proton collision data collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of 9 fb⁻¹. The signal signatures correspond to simultaneous peaks in the μ⁺μ⁻ and π⁺μ⁺μ⁻ invariant masses. No evidence for an excess of events over background is observed for either signal decay mode. Upper limits at the 90% confidence level are set on the branching fractions relative to that for Bc⁺ → J/ψπ⁺ decays: R(B*⁰(μ⁺μ⁻)π⁺/J/ψπ⁺) &lt; 3.8 × 10⁻⁵ and R(Bs*⁰(μ⁺μ⁻)π⁺/J/ψπ⁺) &lt; 5.0 × 10⁻⁵.
</description>
<pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158251</guid>
<dc:date>2025-01-13T00:00:00Z</dc:date>
</item>
<item>
<title>Iron-sulfur clusters: the road to room temperature</title>
<link>https://hdl.handle.net/1721.1/158250</link>
<description>Iron-sulfur clusters: the road to room temperature
Skeel, Brighton A.; Suess, Daniel L. M.
Iron-sulfur proteins perform a wide variety of reactions central to the metabolisms of all living organisms. Foundational to their reaction chemistry are the rich electronic structures of their constituent Fe-S clusters, which differ in important ways from the active sites of mononuclear Fe enzymes. In this perspective, we summarize the essential electronic structure features that make Fe-S clusters unique, and point to the need for studies aimed at understanding the electronic basis for their reactivity under physiological conditions. Specifically, at ambient temperature, both the ground state and a large number of excited states are thermally populated, and thus a complete understanding of Fe-S cluster reactivity must take into account the properties, energies, and reactivity patterns of these excited states. We highlight prior research toward characterizing the low-energy excited states of Fe-S clusters that has established what is now a consensus model of these excited state manifolds and the bonding interactions that give rise to them. In particular, we discuss the low-energy alternate spin states and valence electron configurations that occur in Fe-S clusters of varying nuclearities, and finally suggest that there may be unrecognized functional roles for these states.
</description>
<pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158250</guid>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Thermogelation of nanoemulsions stabilized by a commercial pea protein isolate: high-pressure homogenization defines gel strength</title>
<link>https://hdl.handle.net/1721.1/158249</link>
<description>Thermogelation of nanoemulsions stabilized by a commercial pea protein isolate: high-pressure homogenization defines gel strength
Renggli, Damian; Doyle, Patrick S
The impact of animal-based food production on climate change drives the development of plant-based alternatives. We demonstrate the use of colloidal thermogelation on a real nanoemulsion system to create structured gels that could be of interest for the thermo-mechanical processing of next-generation plant-based food applications. We use a commercial pea protein isolate (PPI) without further purification to stabilize a 20 vol% peanut oil-in-water nanoemulsion at pH = 7 by high-pressure homogenization (HPH) and demonstrate the temperature-induced gelation behavior of the nanoemulsion as a function of the HPH processing parameters. Bright-field and laser scanning confocal fluorescence microscopy reveal a diverse microstructure of the aqueous PPI dispersions, with a large amount of insoluble protein particles, cell-wall debris particles, and lipid inclusions. Sedimentation of particulates is prevented by HPH treatment, which however leads to a loss of the dispersion's thermogelation properties. The non-gelling PPI dispersion nevertheless stabilizes nanoemulsions, and the insoluble components of the PPI dispersions persist throughout HPH processing. We perform a systematic rheological investigation of the effect of HPH processing on thermogelation and demonstrate that the number of HPH passes n and the HPH pressure P control the average nanoemulsion droplet size, measured by DLS at a 90° scattering angle. We show that the droplet size defines the final gel strength, with a strong inverse dependence of the elastic modulus on droplet size. Furthermore, processing can lead to heterogeneously structured gels that yield over a large strain amplitude range.
</description>
<pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158249</guid>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Shift invariance of half space integrable models</title>
<link>https://hdl.handle.net/1721.1/158248</link>
<description>Shift invariance of half space integrable models
He, Jimmy
We formulate and establish symmetries of certain integrable half space models, analogous to recent results on symmetries for models in a full space. Our starting point is the colored stochastic six vertex model in a half space, from which we obtain results on the asymmetric simple exclusion process, as well as for the beta polymer through a fusion procedure which may be of independent interest. As an application, we establish a distributional identity between the absorption time in a type B analogue of the oriented swap process and last passage times in a half space, establishing the Baik–Ben Arous–Péché phase transition for the absorption time. The proof uses Hecke algebras and integrability of the six vertex model through the Yang–Baxter and reflection equations.
</description>
<pubDate>Thu, 16 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158248</guid>
<dc:date>2025-01-16T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying Novel Emotions and Wellbeing of Horses from Videos Through Unsupervised Learning</title>
<link>https://hdl.handle.net/1721.1/158247</link>
<description>Identifying Novel Emotions and Wellbeing of Horses from Videos Through Unsupervised Learning
Bhave, Aarya; Kieson, Emily; Hafner, Alina; Gloor, Peter A.
Sensors 2025, 25(3), 859; https://doi.org/10.3390/s25030859&#13;
Submission received: 5 January 2025 / Revised: 22 January 2025 / Accepted: 30 January 2025 / Published: 31 January 2025&#13;
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)&#13;
&#13;
Abstract&#13;
This research applies unsupervised learning on a large original dataset of horses in the wild to identify previously unidentified horse emotions. We construct a novel, high-quality, diverse dataset of 3929 images consisting of five wild horse breeds worldwide at different geographical locations. We base our analysis on the seven Panksepp emotions of mammals “Exploring”, “Sadness”, “Playing”, “Rage”, “Fear”, “Affectionate” and “Lust”, along with one additional emotion “Pain” which has been shown to be highly relevant for horses. We apply the contrastive learning framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) on our dataset to predict the seven Panksepp emotions and “Pain” using unsupervised learning. We significantly modify the MoCo framework, building a custom downstream classifier network that connects with a frozen CNN encoder that is pretrained using MoCo. Our method allows the encoder network to learn similarities and differences within image groups on its own without labels. The clusters thus formed are indicative of deeper nuances and complexities within a horse’s mood, which can possibly hint towards the existence of novel and complex equine emotions.
</description>
<pubDate>Fri, 31 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158247</guid>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</item>
<item>
<title>Energy Burden in the United States: An Analysis Using Decision Trees</title>
<link>https://hdl.handle.net/1721.1/158246</link>
<description>Energy Burden in the United States: An Analysis Using Decision Trees
Chun, Jungwoo; Ortiz, Dania; Jin, Brooke; Kulkarni, Nikita; Hart, Stephen; Knox-Hayes, Janelle
The concept of energy burden (EB) continues to gain prominence in energy and associated policy research as energy prices rise and electricity and heating options diversify. This research offers a deeper understanding of EB dynamics and how EB can be addressed more effectively by discerning the interplay between regional environmental, social, and economic factors. Using decision trees (DTs), a powerful machine learning technique, we explore the multifaceted dynamics that shape EB across the United States (U.S.) by examining how factors like housing quality, demographic variations, access to energy sources, and regional economic conditions interact, creating distinct EB profiles across communities. Following a comprehensive review of existing literature and DT analysis, we map the results to identify the most significant factors influencing EB. We find that no single variable has a determinant effect on EB levels. While there is no uniform regional pattern, regions with higher population density exhibit a stronger correlation between EB and socioeconomic and other demographic factors such as educational attainment levels and racial segregation. Our findings underscore the significance of regional ecologies in shaping EB, revealing how localized environmental and economic contexts amplify or mitigate systemic inequities. Specifically, our analysis reveals significant regional disparities, highlighting the need for localized policies and interventions. We find that a one-size-fits-all approach is insufficient and that targeted, place-based strategies are necessary to address the specific needs of different communities. Policy interventions should prioritize energy democracy, address systemic inequities, and ensure universal energy access through participatory planning, financial assistance, and targeted initiatives such as housing rehabilitation, energy efficiency improvements, and incentives for underrepresented communities.
</description>
<pubDate>Thu, 30 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158246</guid>
<dc:date>2025-01-30T00:00:00Z</dc:date>
</item>
<item>
<title>Microhardness, Young’s and Shear Modulus in Tetrahedrally Bonded Novel II-Oxides and III-Nitrides</title>
<link>https://hdl.handle.net/1721.1/158245</link>
<description>Microhardness, Young’s and Shear Modulus in Tetrahedrally Bonded Novel II-Oxides and III-Nitrides
Talwar, Devki N.; Becla, Piotr
Direct wide-bandgap III-Ns and II-Os have recently gained considerable attention due to their unique electrical and chemical properties. These novel semiconductors are being explored to design short-wavelength light-emitting diodes, sensors/biosensors, photodetectors for integration into flexible transparent nanoelectronics/photonics to achieve high-power radio-frequency modules, and heat-resistant optical switches for communication networks. Knowledge of the elastic constants and of the structural and mechanical properties plays a crucial role both in the basic understanding of these materials and in assessing their use in thermal management applications. In the absence of experimental structural, elastic, and mechanical data, many theoretical simulations have yielded inconsistent results. This work aims to investigate the basic characteristics of the tetrahedrally coordinated, partially ionic BeO, MgO, ZnO, and CdO, and partially covalent BN, AlN, GaN, and InN materials. By incorporating a bond-orbital and a valence force field model, we report comparative results of our systematic calculations for the bond length d, bond polarity αP, covalency αC, bulk modulus B, elastic stiffness C (=(c11−c12)/2), bond-stretching α and bond-bending β force constants, Kleinmann’s internal displacement ζ, and Born’s transverse effective charge e*T. Correlations between C/B, β/α, c12/c11, ζ, and αC revealed valuable trends in structural, elastic, and bonding characteristics. The study found that AlN and GaN (MgO and ZnO) show nearly comparable features, while BN (BeO) is much harder than InN (CdO), which has drastically softer bonding. Calculations of microhardness H, shear modulus G, and Young’s modulus Y predict that BN (BeO) satisfies a criterion of superhardness. III-Ns (II-Os) could be vital in the electronics, aerospace, defense, nuclear reactor, and automotive industries, providing integrity and performance at high temperature in high-power applications ranging from heat sinks to electronic substrates to insulators in high-power devices.
</description>
<pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158245</guid>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Transforming Growth Factor Beta and Epithelial to Mesenchymal Transition Alter Homologous Recombination Repair Gene Expression and Sensitize BRCA Wild-Type Ovarian Cancer Cells to Olaparib</title>
<link>https://hdl.handle.net/1721.1/158244</link>
<description>Transforming Growth Factor Beta and Epithelial to Mesenchymal Transition Alter Homologous Recombination Repair Gene Expression and Sensitize BRCA Wild-Type Ovarian Cancer Cells to Olaparib
Roberts, Cai M.; Rojas-Alexandre, Mehida; Hanna, Ruth E.; Lin, Z. Ping; Ratner, Elena S.
Simple Summary&#13;
Olaparib is a PARP inhibitor that is currently the standard treatment for ovarian cancer. However, its use is largely confined to tumors carrying a mutation in the BRCA1 or BRCA2 genes. Our study sought to identify additional ovarian cancer cell populations sensitive to olaparib. TGFβ has been well characterized as a driver of epithelial to mesenchymal transition (EMT), a process whereby epithelial cancer cells alter their adhesion molecules and gain the ability to migrate and invade. We hypothesized that the cytokine TGFβ would alter DNA repair mechanisms that render wild-type ovarian cancer cells sensitive to olaparib. We used two pairs of epithelial and mesenchymal ovarian cancer cell lines to probe DNA repair and olaparib response. Our findings suggest that some populations of metastatic cancer cells may be vulnerable to olaparib or other therapies targeting DNA repair.&#13;
&#13;
Abstract&#13;
Epithelial ovarian cancer (EOC) remains the most lethal gynecologic malignancy, largely due to metastasis and drug-resistant recurrences. Fifteen percent of ovarian tumors carry mutations in BRCA1 or BRCA2, rendering them vulnerable to treatment with PARP inhibitors such as olaparib. Recent studies have shown that TGFβ can induce “BRCAness” in BRCA wild-type cancer cells. Given that TGFβ is a known driver of epithelial to mesenchymal transition (EMT), and the connection between EMT and metastatic spread in EOC and other cancers, we asked whether TGFβ and EMT alter the susceptibility of EOC to PARP inhibition. Epithelial EOC cells were transiently treated with soluble TGFβ, and their clonogenic potential, the expression and function of EMT and DNA repair genes, and their response to PARP inhibitors were compared with untreated controls. A second epithelial cell line was compared to its mesenchymal derivative for EMT and DNA repair gene expression and drug responses. We found that TGFβ and EMT resulted in the downregulation of genes responsible for homologous recombination (HR) and sensitized cells to olaparib. HR efficiency was reduced in a dose-dependent manner. Furthermore, mesenchymal cells displayed sensitivity to olaparib, cisplatin, and the DNA-PK inhibitor Nu-7441. Therefore, the treatment of disseminated, mesenchymal tumors may represent an opportunity to expand the clinical utility of PARP inhibitors and similar agents.
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158244</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Association Between Medicaid Expansion and Insurance Status, Risk Group, Receipt, and Refusal of Treatment Among Men with Prostate Cancer</title>
<link>https://hdl.handle.net/1721.1/158243</link>
<description>Association Between Medicaid Expansion and Insurance Status, Risk Group, Receipt, and Refusal of Treatment Among Men with Prostate Cancer
Patel, Tej A.; Jain, Bhav; Dee, Edward Christopher; Kohli, Khushi; Ranganathan, Sruthi; Janopaul-Naylor, James; Mahal, Brandon A.; Yamoah, Kosj; McBride, Sean M.; Nguyen, Paul L.; Chino, Fumiko; Muralidhar, Vinayak; Lam, Miranda B.; Vapiwala, Neha
Simple Summary&#13;
We sought to quantify the impact of Medicaid expansion on insurance status, stage at diagnosis, time to treatment initiation, and refusal of locoregional treatment among patients with prostate cancer, the second leading cause of cancer death among men in the United States. We found that while Medicaid expansion was associated with increased insurance coverage and decreased refusal of radiation therapy, there was no significant association with earlier risk group at diagnosis, treatment within 180 days, or refusal of locoregional therapy. Similarly, racial minorities experienced no significant changes in time to treatment initiation following Affordable Care Act implementation compared to White patients. Ultimately, more research is needed to understand how Medicaid expansion affects cancer outcomes and whether these effects are borne equitably among different populations.&#13;
&#13;
Abstract&#13;
Background: Although the Patient Protection and Affordable Care Act (ACA) has been associated with increased Medicaid coverage among prostate cancer patients, the association between Medicaid expansion with risk group at diagnosis, time to treatment initiation (TTI), and the refusal of locoregional treatment (LT) among patients requires further exploration. Methods: Using the National Cancer Database, we performed a retrospective cohort analysis of all patients aged 40 to 64 years diagnosed with localized prostate cancer from 2011 to 2016. Difference-in-difference (DID) analysis was used to compare changes in insurance status, risk group at diagnosis, TTI, and the refusal of LT among patients residing in Medicaid expansion versus non-expansion states. In a secondary analysis, we used DID to compare changes in the above outcomes among racial minorities versus White patients living in expansion states. Results: Of the 112,434 patients with prostate cancer in our analysis, 50,958 patients lived in Medicaid expansion states, and 61,476 patients lived in non-expansion states. In the adjusted analysis, we found that the proportion of uninsured patients (adjusted DID: −0.87%; 95% confidence interval [95% CI]: −1.28 to −0.46) and patients who refused radiation therapy (adjusted DID: −0.71%; 95% CI: −0.95 to −0.47) decreased more in expansion states compared to non-expansion states. Similarly, we observed that the racial disparity of select outcomes in expansion states narrowed, as racial minorities experienced larger absolute decreases in uninsured status and the refusal of radiation therapy (RT) regimens than White patients following ACA implementation (p &lt; 0.01 for all). However, residence in a Medicaid expansion state was not associated with changes in risk group at diagnosis, TTI, nor the refusal of LT (p &gt; 0.01 for all); racial disparities in TTI were also exacerbated in expansion states following ACA implementation. 
Conclusions: The association between Medicaid expansion and prostate cancer outcomes and disparities remains unclear. While ACA implementation was associated with increased insurance coverage and decreased refusal of RT, there was no significant association with earlier risk group at diagnosis, TTI within 180 days, or refusal of LT. Similarly, racial minorities in expansion states had larger decreases in uninsured status and the refusal of RT regimens, as well as smaller increases in intermediate-/high-risk disease at presentation than White patients following ACA implementation, but experienced no significant changes in TTI. More research is needed to understand how Medicaid expansion affects cancer outcomes and whether these effects are borne equitably among different populations.
</description>
<pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158243</guid>
<dc:date>2025-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Recent Progress in Flexible Piezoelectric Tactile Sensors: Materials, Structures, Fabrication, and Application</title>
<link>https://hdl.handle.net/1721.1/158242</link>
<description>Recent Progress in Flexible Piezoelectric Tactile Sensors: Materials, Structures, Fabrication, and Application
Tang, Jingyao; Li, Yiheng; Yu, Yirong; Hu, Qing; Du, Wenya; Lin, Dabin
Flexible tactile sensors are widely used in aerospace, medical and health monitoring, electronic skin, human–computer interaction, and other fields due to their unique advantages, thus becoming a research hotspot. The goal is to develop a flexible tactile sensor characterized by outstanding sensitivity, extensive detection range and linearity, elevated spatial resolution, and commendable adaptability. Among several strategies, such as capacitive, piezoresistive, and triboelectric tactile sensors, we focus on piezoelectric tactile sensors because of their self-powered nature, high sensitivity, and quick response time. These sensors can respond to a wide range of dynamic mechanical stimuli and turn them into measurable electrical signals. This makes it possible to accurately detect objects, including their shapes and textures, and to sense touch in real time. This work encapsulates current advancements in flexible piezoelectric tactile sensors, focusing on enhanced material properties, optimized structural design, improved fabrication techniques, and broadened application domains. We outline the challenges facing piezoelectric tactile sensors to provide inspiration and guidance for their future development.
</description>
<pubDate>Wed, 05 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158242</guid>
<dc:date>2025-02-05T00:00:00Z</dc:date>
</item>
<item>
<title>Estudios feministas de seguridad desde América Latina y el Caribe</title>
<link>https://hdl.handle.net/1721.1/158241</link>
<description>Estudios feministas de seguridad desde América Latina y el Caribe
Jungs de Almeida, Alessandra; D'Ignazio, Catherine
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158241</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Antibody-directed evolution reveals a mechanism for enhanced neutralization at the HIV-1 fusion peptide site</title>
<link>https://hdl.handle.net/1721.1/158240</link>
<description>Antibody-directed evolution reveals a mechanism for enhanced neutralization at the HIV-1 fusion peptide site
Banach, Bailey B; Pletnev, Sergei; Olia, Adam S; Xu, Kai; Zhang, Baoshan; Rawi, Reda; Bylund, Tatsiana; Doria-Rose, Nicole A; Nguyen, Thuy Duong; Fahad, Ahmed S; Lee, Myungjin; Lin, Bob C; Liu, Tracy; Louder, Mark K; Madan, Bharat; McKee, Krisha; O’Dell, Sijy; Sastry, Mallika; Schön, Arne; Bui, Natalie; Shen, Chen-Hsiang; Wolfe, Jacy R; Chuang, Gwo-Yu; Mascola, John R; Kwong, Peter D; DeKosky, Brandon J
The HIV-1 fusion peptide (FP) represents a promising vaccine target, but global FP sequence diversity among circulating strains has limited anti-FP antibodies to ~60% neutralization breadth. Here we evolve the FP-targeting antibody VRC34.01 in vitro to enhance FP neutralization using site saturation mutagenesis and yeast display. Successive rounds of directed evolution by iterative selection of antibodies for binding to resistant HIV-1 strains establish a variant, VRC34.01_mm28, as a best-in-class antibody with 10-fold enhanced potency compared to the template antibody and ~80% breadth on a cross-clade 208-strain neutralization panel. Structural analyses demonstrate that the improved paratope expands the FP binding groove to accommodate diverse FP sequences of different lengths while also recognizing the HIV-1 Env backbone. These data reveal critical antibody features for enhanced neutralization breadth and potency against the FP site of vulnerability and accelerate clinical development of broad HIV-1 FP-targeting vaccines and therapeutics.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158240</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Cell activation-based screening of natively paired human T cell receptor repertoires</title>
<link>https://hdl.handle.net/1721.1/158239</link>
<description>Cell activation-based screening of natively paired human T cell receptor repertoires
Fahad, Ahmed S; Chung, Cheng Yu; López Acevedo, Sheila N; Boyle, Nicoleen; Madan, Bharat; Gutiérrez-González, Matías F; Matus-Nicodemos, Rodrigo; Laflin, Amy D; Ladi, Rukmini R; Zhou, John; Wolfe, Jacy; Llewellyn-Lacey, Sian; Koup, Richard A; Douek, Daniel C; Balfour, Henry H; Price, David A; DeKosky, Brandon J
Adoptive immune therapies based on the transfer of antigen-specific T cells have been used successfully to treat various cancers and viral infections, but improved techniques are needed to identify optimally protective human T cell receptors (TCRs). Here we present a high-throughput approach to the identification of natively paired human TCRα and TCRβ (TCRα:β) genes encoding heterodimeric TCRs that recognize specific peptide antigens bound to major histocompatibility complex molecules (pMHCs). We first captured and cloned TCRα:β genes from individual cells, ensuring fidelity using a suppression PCR. We then screened TCRα:β libraries expressed in an immortalized cell line using peptide-pulsed antigen-presenting cells and sequenced activated clones to identify the cognate TCRs. Our results validated an experimental pipeline that allows large-scale repertoire datasets to be annotated with functional specificity information, facilitating the discovery of therapeutically relevant TCRs.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158239</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Expanding the landscape of antibody discovery</title>
<link>https://hdl.handle.net/1721.1/158238</link>
<description>Expanding the landscape of antibody discovery
Johnson, Shelbe; DeKosky, Brandon J
Library:library screening technologies hold substantial promise for paired antibody:antigen discovery, but challenges have persisted. In this issue of Cell Reports Methods, Wagner et al. introduce a method that combines antibody-ribosome-mRNA complexes, antigen cell surface display, and single-cell RNA sequencing to successfully screen diverse antibody gene libraries against a library of viral receptor proteins.
</description>
<pubDate>Sun, 01 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158238</guid>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>It Is Time to Standardize Principles and Practices for Software Memory Safety</title>
<link>https://hdl.handle.net/1721.1/158237</link>
<description>It Is Time to Standardize Principles and Practices for Software Memory Safety
Watson, Robert; Baldwin, John; Chen, Tony; Chisnall, David; Clarke, Jessica; Davis, Brooks; Filardo, Nathaniel; Gutstein, Brett; Jenkinson, Graeme; Laurie, Ben; Mazzinghi, Alfredo; Moore, Simon; Neumann, Peter; Okhravi, Hamed; Rebert, Alex; Richardson, Alex; Sewell, Peter; Tratt, Laurence; Vijayaraghavan, Muralidaran; Vincent, Hugo; Witaszczyk, Konrad
In this Inside Risks column, we explore memory-safety standardization, which we argue is an essential step to promoting universal strong memory safety in government and industry, and, in turn, to ensuring access to more secure software for all. During the last two decades, a set of research technologies for strong memory safety—memory-safe languages, hardware and software protection, formal approaches, and software compartmentalization—have reached sufficient maturity to see early deployment in security-critical use cases. However, there remains no shared, technology-neutral terminology or framework with which to specify memory-safety requirements. This is needed to enable reliable specification, design, implementation, auditing, and procurement of strongly memory-safe systems. Failure to speak in a common language makes it difficult to understand the possibilities or communicate accurately with each other, limiting perceived benefits and hence actual demand. The lack of such a framework also acts as an impediment to potential future policy interventions, and to stating requirements that address observed market failures preventing adoption of these technologies. Standardization would also play a critical role in improving industrial best practice, another key aspect of adoption.
</description>
<pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158237</guid>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Inference Plans for Hybrid Particle Filtering</title>
<link>https://hdl.handle.net/1721.1/158236</link>
<description>Inference Plans for Hybrid Particle Filtering
Cheng, Ellie; Atkinson, Eric; Baudart, Guillaume; Mandel, Louis; Carbin, Michael
Advanced probabilistic programming languages (PPLs) using hybrid particle filtering combine symbolic exact inference and Monte Carlo methods to improve inference performance. These systems use heuristics to partition random variables within the program into variables that are encoded symbolically and variables that are encoded with sampled values, and the heuristics are not necessarily aligned with the developer's performance evaluation metrics. In this work, we present inference plans, a programming interface that enables developers to control the partitioning of random variables during hybrid particle filtering. We further present Siren, a new PPL that enables developers to use annotations to specify inference plans the inference system must implement. To assist developers with statically reasoning about whether an inference plan can be implemented, we present an abstract-interpretation-based static analysis for Siren for determining inference plan satisfiability. We prove the analysis is sound with respect to Siren's semantics. Our evaluation applies inference plans to three different hybrid particle filtering algorithms on a suite of benchmarks. It shows that the control provided by inference plans enables speedups of 1.76x on average and up to 206x to reach a target accuracy, compared to the inference plans implemented by default heuristics; the results also show that inference plans improve accuracy by 1.83x on average and up to 595x with less or equal runtime, compared to the default inference plans. We further show that our static analysis is precise in practice, identifying all satisfiable inference plans in 27 out of the 33 benchmark-algorithm evaluation settings.
</description>
<pubDate>Tue, 07 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158236</guid>
<dc:date>2025-01-07T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Social Sciences and AI for Understanding Child Behaviour</title>
<link>https://hdl.handle.net/1721.1/158235</link>
<description>Bridging Social Sciences and AI for Understanding Child Behaviour
Kaya, Heysem; Hessels, Roy; Najafian, Maryam; Hanekamp, Sandra; Safavi, Saeid
Child behaviour is a topic of wide scientific interest among many different disciplines, including social and behavioural sciences and artificial intelligence (AI). In this workshop, we aimed to connect researchers from these fields to address topics such as the use of AI to better understand and model child behavioural and developmental processes, challenges and opportunities for AI in large-scale child behaviour analysis, and the implementation of explainable ML/AI on sensitive child data. The workshop served as a successful first step towards this goal and attracted contributions from different research disciplines on the analysis of child behaviour. This paper provides a summary of the activities of the workshop and the accepted papers and abstracts.
ICMI ’20, October 25–29, 2020, Virtual Event, Netherlands
</description>
<pubDate>Wed, 21 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158235</guid>
<dc:date>2020-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>RoboGrammar: Graph Grammar for Terrain-Optimized Robot Design</title>
<link>https://hdl.handle.net/1721.1/158234</link>
<description>RoboGrammar: Graph Grammar for Terrain-Optimized Robot Design
Zhao, Allan; Xu, Jie; Konakovic-Lukovic, Mina; Hughes, Josephine; Spielberg, Andrew; Rus, Daniela; Matusik, Wojciech
We present RoboGrammar, a fully automated approach for generating optimized robot structures to traverse given terrains. In this framework, we represent each robot design as a graph, and use a graph grammar to express possible arrangements of physical robot assemblies. Each robot design can then be expressed as a sequence of grammar rules. Using only a small set of rules, our grammar can describe hundreds of thousands of possible robot designs. The construction of the grammar limits the design space to designs that can be fabricated. For a given input terrain, the design space is searched to find the top performing robots and their corresponding controllers. We introduce Graph Heuristic Search, a novel method for efficient search of combinatorial design spaces. In Graph Heuristic Search, we explore the design space while simultaneously learning a function that maps incomplete designs (e.g., nodes in the combinatorial search tree) to the best performance values that can be achieved by expanding these incomplete designs. Graph Heuristic Search prioritizes exploration of the most promising branches of the design space. To test our method, we optimize robots for a number of challenging and varied terrains. We demonstrate that RoboGrammar can successfully generate nontrivial robots that are optimized for a single terrain or a combination of terrains.
</description>
<pubDate>Thu, 26 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158234</guid>
<dc:date>2020-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Light Stage Super-Resolution: Continuous High-Frequency Relighting</title>
<link>https://hdl.handle.net/1721.1/158233</link>
<description>Light Stage Super-Resolution: Continuous High-Frequency Relighting
Sun, Tiancheng; Xu, Zexiang; Zhang, Xiuming; Fanello, Sean; Rhemann, Christoph; Debevec, Paul; Tsai, Yun-Ta; Barron, Jonathan; Ramamoorthi, Ravi
The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of the human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix only represents a sparse sampling on the entire sphere. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the lights in the stage requires interpolation and resampling the images corresponding to nearby lights, and this leads to ghosting shadows, aliased specularities, and other artifacts. To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage. Given an arbitrary "query" light direction, our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a "virtual" light source at the query location. This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network. Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and is able to generalize across a wide variety of subjects. 
Our super-resolution approach enables more accurate renderings of human subjects under detailed environment maps, or the construction of simpler light stages that contain fewer light sources while still yielding comparable quality renderings as light stages with more densely sampled lights.
</description>
<pubDate>Thu, 26 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158233</guid>
<dc:date>2020-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>SonicHoop: Using Interactive Sonification to Support Aerial Hoop Practices</title>
<link>https://hdl.handle.net/1721.1/158210</link>
<description>SonicHoop: Using Interactive Sonification to Support Aerial Hoop Practices
Liu, Wanyu; Dementyev, Artem; Schwarz, Diemo; Flety, Emmanuel; Mackay, Wendy; Beaudouin-Lafon, Michel; Bevilacqua, Frederic
Aerial hoops are circular, hanging devices for both acrobatic exercise and artistic performance that let us explore the role of interactive sonification in physical activity. We present SonicHoop, an augmented aerial hoop that generates auditory feedback via capacitive touch sensing, thus becoming a digital musical instrument that performers can play with their bodies. We compare three sonification strategies through a structured observation study with two professional aerial hoop performers. Results show that SonicHoop fundamentally changes their perception and choreographic processes: instead of translating music into movement, they search for bodily expressions that compose music. Different sound designs affect their movement differently, and auditory feedback, regardless of type of sound, improves movement quality. We discuss opportunities for using SonicHoop as an aerial hoop training tool, as a digital musical instrument, and as a creative object; as well as using interactive sonification in other acrobatic practices to explore full-body vertical interaction.
CHI ’21, May 8–13, 2021, Yokohama, Japan
</description>
<pubDate>Thu, 06 May 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158210</guid>
<dc:date>2021-05-06T00:00:00Z</dc:date>
</item>
<item>
<title>The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search</title>
<link>https://hdl.handle.net/1721.1/158209</link>
<description>The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search
Verwer, Sicco; Nadeem, Azqa; Hammerschmidt, Christian; Bliek, Laurens; Al-Dujaili, Abdullah; O'Reilly, Una-May
Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious behavior. We report on the results of a recently held robust malware detection challenge. There were two tracks in which teams could participate: the attack track asked for adversarially modified malware samples and the defend track asked for trained neural network classifiers that are robust to such modifications. The teams were unaware of the attacks/defenses they had to detect/evade. Although only 9 teams participated, this unique setting allowed us to make several interesting observations.&#13;
We also present the challenge winner: GRAMS, a family of novel techniques to train adversarially robust networks that preserve the intended (malicious) functionality and yield high-quality adversarial samples. These samples are used to iteratively train a robust classifier. We show that our techniques, based on discrete optimization techniques, beat purely gradient-based methods. GRAMS obtained first place in both the attack and defend tracks of the competition.
AISec’20, November 13, 2020, Virtual Event, USA
</description>
<pubDate>Fri, 13 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158209</guid>
<dc:date>2020-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection</title>
<link>https://hdl.handle.net/1721.1/158208</link>
<description>Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection
Vujic, Angela; Tong, Stephanie; Picard, Rosalind; Maes, Pattie
ICMI ’20, October 25–29, 2020, Virtual Event, Netherlands
</description>
<pubDate>Wed, 21 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158208</guid>
<dc:date>2020-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Diversity-oriented synthesis encoded by deoxyoligonucleotides</title>
<link>https://hdl.handle.net/1721.1/158201</link>
<description>Diversity-oriented synthesis encoded by deoxyoligonucleotides
Diversity-oriented synthesis (DOS) is a powerful strategy to prepare molecules with underrepresented features in commercial screening collections, resulting in the elucidation of novel biological mechanisms. In parallel to the development of DOS, DNA-encoded libraries (DELs) have emerged as an effective, efficient screening strategy to identify protein binders. Despite recent advancements in this field, most DEL syntheses are limited by the presence of sensitive DNA-based constructs. Here, we describe the design, synthesis, and validation experiments performed for a 3.7 million-member DEL, generated using diverse skeleton architectures with varying exit vectors and derived from DOS, to achieve structural diversity beyond what is possible by varying appendages alone. We also show screening results for three diverse protein targets. We will make this DEL available to the academic scientific community to increase access to novel structural features and accelerate early-phase drug discovery.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158201</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluating the roughness of structure–property relationships using pretrained molecular representations</title>
<link>https://hdl.handle.net/1721.1/158200</link>
<description>Evaluating the roughness of structure–property relationships using pretrained molecular representations
Graff, David E; Pyzer-Knapp, Edward O; Jordan, Kirk E; Shakhnovich, Eugene I; Coley, Connor W
Quantitative structure–property relationships (QSPRs) aid in understanding molecular properties as a function of molecular structure. When the correlation between structure and property weakens, a dataset is described as “rough,” but this characteristic is partly a function of the chosen representation. Among possible molecular representations are those from recently developed “foundation models” for chemistry, which learn molecular representations from unlabeled samples via self-supervision. However, the performance of these pretrained representations on property prediction benchmarks is mixed when compared to baseline approaches. We sought to understand these trends in terms of the roughness of the underlying QSPR surfaces. We introduce a reformulation of the roughness index (ROGI), ROGI-XD, to enable comparison of ROGI values across representations, and evaluate various pretrained representations alongside those constructed from simple fingerprints and descriptors. We show that pretrained representations do not produce smoother QSPR surfaces, in agreement with previous empirical results of model accuracy. Our findings suggest that imposing stronger assumptions of smoothness with respect to molecular structure during model pretraining could aid in the downstream generation of smoother QSPR surfaces.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158200</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>BodyPrinter: Fabricating Circuits Directly on the Skin at Arbitrary Locations Using a Wearable Compact Plotter</title>
<link>https://hdl.handle.net/1721.1/158199</link>
<description>BodyPrinter: Fabricating Circuits Directly on the Skin at Arbitrary Locations Using a Wearable Compact Plotter
Choi, Youngkyung; Ryu, Neung; Kim, Myung Jin; Dementyev, Artem; Bianchi, Andrea
On-body electronics and sensors offer the opportunity to seamlessly augment the human with computing power. Accordingly, much previous work has investigated methods that exploit conductive materials and flexible substrates to fabricate circuits in the form of wearable devices, stretchable patches, and stickers that can be attached to the skin. For all these methods, the fabrication process involves several manual steps, such as designing the circuit in software, constructing conductive patches, and manually placing these physical patches on the body. In contrast, in this work, we propose to fabricate electronics directly on the skin. We present BodyPrinter, a wearable conductive-ink deposition machine, that prints flexible electronics directly on the body using skin-safe conductive ink. The paper describes our system in detail and, through a series of examples and a technical evaluation, we show how direct on-body fabrication of electronic circuits and sensors can further enhance the human body.
UIST ’20, October 20–23, 2020, Virtual Event, USA
</description>
<pubDate>Tue, 20 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158199</guid>
<dc:date>2020-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Surface Touch Typing from Hand-Tracking</title>
<link>https://hdl.handle.net/1721.1/158198</link>
<description>Decoding Surface Touch Typing from Hand-Tracking
Richardson, Mark; Durasoff, Matt; Wang, Robert
We propose a novel text decoding method that enables touch typing on an uninstrumented flat surface. Rather than relying on physical keyboards or capacitive touch, our method takes as input hand motion of the typist, obtained through hand-tracking, and decodes this motion directly into text. We use a temporal convolutional network to represent a motion model that maps the hand motion, represented as a sequence of hand pose features, into text characters. To enable touch typing without the haptic feedback of a physical keyboard, we had to address more erratic typing motion due to drift of the fingers. Thus, we incorporate a language model as a text prior and use beam search to efficiently combine our motion and language models to decode text from erratic or ambiguous hand motion. We collected a dataset of 20 touch typists and evaluated our model on several baselines, including contact-based text decoding and typing on a physical keyboard. Our proposed method is able to leverage continuous hand pose information to decode text more accurately than contact-based methods and an offline study shows parity (73 WPM, 2.38% UER) with typing on a physical keyboard. Our results show that hand-tracking has the potential to enable rapid text entry in mobile environments.
UIST ’20, October 20–23, 2020, Virtual Event, USA
</description>
<pubDate>Tue, 20 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158198</guid>
<dc:date>2020-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Neural Light Transport for Relighting and View Synthesis</title>
<link>https://hdl.handle.net/1721.1/158197</link>
<description>Neural Light Transport for Relighting and View Synthesis
Zhang, Xiuming; Fanello, Sean; Tsai, Yun-Ta; Sun, Tiancheng; Xue, Tianfan; Pandey, Rohit; Orts-Escolano, Sergio; Davidson, Philip; Rhemann, Christoph; Debevec, Paul; Barron, Jonathan T.; Ramamoorthi, Ravi; Freeman, William
The light transport (LT) of a scene describes how it appears under different lighting conditions from different viewing directions, and complete knowledge of a scene's LT enables the synthesis of novel views under arbitrary lighting. In this paper, we focus on image-based LT acquisition, primarily for human bodies within a light stage setup. We propose a semi-parametric approach for learning a neural representation of the LT that is embedded in a texture atlas of known but possibly rough geometry. We model all non-diffuse and global LT as residuals added to a physically-based diffuse base rendering. In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint. This strategy allows the network to learn complex material effects (such as subsurface scattering) and global illumination (such as diffuse interreflection), while guaranteeing the physical correctness of the diffuse LT (such as hard shadows). With this learned LT, one can relight the scene photorealistically with a directional light or an HDRI map, synthesize novel views with view-dependent effects, or do both simultaneously, all in a unified framework using a set of sparse observations.
</description>
<pubDate>Mon, 18 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158197</guid>
<dc:date>2021-01-18T00:00:00Z</dc:date>
</item>
<item>
<title>Active Learning for Inference and Regeneration of Applications that Access Databases</title>
<link>https://hdl.handle.net/1721.1/158196</link>
<description>Active Learning for Inference and Regeneration of Applications that Access Databases
Shen, Jiasi; Rinard, Martin
We present Konure, a new system that uses active learning to infer models of applications that retrieve data from relational databases. Konure comprises a domain-specific language (each model is a program in this language) and an associated inference algorithm that infers models of applications whose behavior can be expressed in this language. The inference algorithm generates inputs and database contents, runs the application, then observes the resulting database traffic and outputs to progressively refine its current model hypothesis. Because the technique works with only externally observable inputs, outputs, and database contents, it can infer the behavior of applications written in arbitrary languages using arbitrary coding styles (as long as the behavior of the application is expressible in the domain-specific language). Konure also implements a regenerator that produces a translated Python implementation of the application that systematically includes relevant security and error checks.
</description>
<pubDate>Fri, 22 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158196</guid>
<dc:date>2021-01-22T00:00:00Z</dc:date>
</item>
<item>
<title>Neural scaling of deep chemical models</title>
<link>https://hdl.handle.net/1721.1/158195</link>
<description>Neural scaling of deep chemical models
Frey, Nathan C; Soklaski, Ryan; Axelrod, Simon; Samsi, Siddharth; Gómez-Bombarelli, Rafael; Coley, Connor W; Gadepally, Vijay
Massive scale, in terms of both data availability and computation, enables important breakthroughs in key application areas of deep learning such as natural language processing and computer vision. There is emerging evidence that scale may be a key ingredient in scientific deep learning, but the importance of physical priors in scientific domains makes the strategies and benefits of scaling uncertain. Here we investigate neural-scaling behaviour in large chemical models by varying model and dataset sizes over many orders of magnitude, studying models with over one billion parameters, pre-trained on datasets of up to ten million datapoints. We consider large language models for generative chemistry and graph neural networks for machine-learned interatomic potentials. We investigate the interplay between physical priors and scale and discover empirical neural-scaling relations for language models in chemistry with a scaling exponent of 0.17 for the largest dataset size considered, and a scaling exponent of 0.26 for equivariant graph neural network interatomic potentials.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158195</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Reaction profiles for quantum chemistry-computed [3 + 2] cycloaddition reactions</title>
<link>https://hdl.handle.net/1721.1/158194</link>
<description>Reaction profiles for quantum chemistry-computed [3 + 2] cycloaddition reactions
Stuyver, Thijs; Jorner, Kjell; Coley, Connor W
Bio-orthogonal click chemistry based on [3 + 2] dipolar cycloadditions has had a profound impact on the field of biochemistry and significant effort has been devoted to identify promising new candidate reactions for this purpose. To gauge whether a prospective reaction could be a suitable bio-orthogonal click reaction, information about both on- and off-target activation and reaction energies is highly valuable. Here, we use an automated workflow, based on the autodE program, to compute over 5000 reaction profiles for [3 + 2] cycloadditions involving both synthetic dipolarophiles and a set of biologically-inspired structural motifs. Based on a succinct benchmarking study, the B3LYP-D3(BJ)/def2-TZVP//B3LYP-D3(BJ)/def2-SVP level of theory was selected for the DFT calculations, and standard conditions and an (aqueous) SMD model were imposed to mimic physiological conditions. We believe that this data, as well as the presented workflow for high-throughput reaction profile computation, will be useful to screen for new bio-orthogonal reactions, as well as for the development of novel machine learning models for the prediction of chemical reactivity more broadly.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158194</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Computer-aided multi-objective optimization in small molecule discovery</title>
<link>https://hdl.handle.net/1721.1/158193</link>
<description>Computer-aided multi-objective optimization in small molecule discovery
Fromer, Jenna C; Coley, Connor W
Molecular discovery is a multi-objective optimization problem that requires identifying a molecule or set of molecules that balance multiple, often competing, properties. Multi-objective molecular design is commonly addressed by combining properties of interest into a single objective function using scalarization, which imposes assumptions about relative importance and uncovers little about the trade-offs between objectives. In contrast to scalarization, Pareto optimization does not require knowledge of relative importance and reveals the trade-offs between objectives. However, it introduces additional considerations in algorithm design. In this review, we describe pool-based and de novo generative approaches to multi-objective molecular discovery with a focus on Pareto optimization algorithms. We show how pool-based molecular discovery is a relatively direct extension of multi-objective Bayesian optimization and how the plethora of different generative models extend from single-objective to multi-objective optimization in similar ways using non-dominated sorting in the reward function (reinforcement learning) or to select molecules for retraining (distribution learning) or propagation (genetic algorithms). Finally, we discuss some remaining challenges and opportunities in the field, emphasizing the opportunity to adopt Bayesian optimization techniques into multi-objective de novo design.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158193</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>MolScribe: Robust Molecular Structure Recognition with Image-to-Graph Generation</title>
<link>https://hdl.handle.net/1721.1/158192</link>
<description>MolScribe: Robust Molecular Structure Recognition with Image-to-Graph Generation
Qian, Yujie; Guo, Jiang; Tu, Zhengkai; Li, Zhening; Coley, Connor W; Barzilay, Regina
Molecular structure recognition is the task of translating a molecular image into its graph structure. Significant variation in drawing styles and conventions exhibited in chemical literature poses a significant challenge for automating this task. In this paper, we propose MolScribe, a novel image-to-graph generation model that explicitly predicts atoms and bonds, along with their geometric layouts, to construct the molecular structure. Our model flexibly incorporates symbolic chemistry constraints to recognize chirality and expand abbreviated structures. We further develop data augmentation strategies to enhance the model robustness against domain shifts. In experiments on both synthetic and realistic molecular images, MolScribe significantly outperforms previous models, achieving 76-93% accuracy on public benchmarks. Chemists can also easily verify MolScribe's prediction, informed by its confidence estimation and atom-level alignment with the input image. MolScribe is publicly available through Python and web interfaces: https://github.com/thomas0809/MolScribe.
</description>
<pubDate>Mon, 10 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158192</guid>
<dc:date>2023-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning‐Guided Computational Screening of New Candidate Reactions with High Bioorthogonal Click Potential</title>
<link>https://hdl.handle.net/1721.1/158191</link>
<description>Machine Learning‐Guided Computational Screening of New Candidate Reactions with High Bioorthogonal Click Potential
Stuyver, Thijs; Coley, Connor W
Bioorthogonal click chemistry has become an indispensable part of the biochemist's toolbox. Despite the wide variety of applications that have been developed in recent years, only a limited number of bioorthogonal click reactions have been discovered so far, most of them based on (substituted) azides. In this work, we present a computational workflow to discover new candidate reactions with promising kinetic and thermodynamic properties for bioorthogonal click applications. Sampling only around 0.05 % of an overall search space of over 10,000,000 dipolar cycloadditions, we develop a machine learning model able to predict DFT‐computed activation and reaction energies within ∼2–3 kcal/mol across the entire space. Applying this model to screen the full search space through iterative rounds of learning, we identify a broad pool of candidate reactions with rich structural diversity, which can be used as a starting point or source of inspiration for future experimental development of both azide‐based and non‐azide‐based bioorthogonal click reactions.
</description>
<pubDate>Tue, 16 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158191</guid>
<dc:date>2023-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>"My Very Subjective Human Interpretation": Domain Expert Perspectives on Navigating the Text Analysis Loop for Topic Models</title>
<link>https://hdl.handle.net/1721.1/158190</link>
<description>"My Very Subjective Human Interpretation": Domain Expert Perspectives on Navigating the Text Analysis Loop for Topic Models
Schofield, Alexandra; Wu, Siqi; Bayard de Volo, Theo; Kuze, Tatsuki; Gomez, Alfredo; Sultana, Sharifa
Practitioners dealing with large text collections frequently use topic models such as Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF) in their projects to explore trends. Despite twenty years of accrued advancement in natural language processing tools, these models are found to be slow and challenging to apply to text exploration projects. In our work, we engaged with practitioners (n=15) who use topic modeling to explore trends in large text collections to understand their project workflows and investigate which factors often slow down the processes and how they deal with such errors and interruptions in automated topic modeling. Our findings show that practitioners are required to diagnose and resolve context-specific problems with preparing data and models and need control for these steps, especially for data cleaning and parameter selection. Our major findings resonate with existing work across CSCW, computational social science, machine learning, data science, and digital humanities. They also leave us questioning whether automation is actually a useful goal for tools designed for topic models and text exploration.
</description>
<pubDate>Fri, 10 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158190</guid>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Affordances of Sequence Mining in Educational Games</title>
<link>https://hdl.handle.net/1721.1/158189</link>
<description>Exploring the Affordances of Sequence Mining in Educational Games
Gomez, Manuel J.; Ruipérez-Valiente, José A.; Martinez, Pedro A.; Kim, Yoon Jeon
Games have become one of the most popular mediums across cultures and ages, and the use of educational games is growing. There is ample evidence that supports the benefits of using games for learning and assessment. However, we do not usually find games incorporated into educational environments. One of the main problems that teachers face is actually knowing how students are interacting with the game, as they cannot properly analyze the effect of the activity on the students. To address this issue, we can use the data generated by the interaction of students with such educational games to analyze the sequences and errors, transforming raw data into meaningful sequences that are interpretable and actionable for teachers. In this study we use a data collection from our game Shadowspect and implement learning analytics with process and sequence mining techniques to generate two metrics that aim to help teachers make proper assessments and better understand the learning process.
TEEM’20, October 21–23, 2020, Salamanca, Spain
</description>
<pubDate>Wed, 21 Oct 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158189</guid>
<dc:date>2020-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Artificial Intelligence for Retrosynthesis Prediction</title>
<link>https://hdl.handle.net/1721.1/158188</link>
<description>Artificial Intelligence for Retrosynthesis Prediction
Jiang, Yinjie; Yu, Yemin; Kong, Ming; Mei, Yu; Yuan, Luotian; Huang, Zhengxing; Kuang, Kun; Wang, Zhihua; Yao, Huaxiu; Zou, James; Coley, Connor W; Wei, Ying
In recent years, there has been a dramatic rise in interest in retrosynthesis prediction with artificial intelligence (AI) techniques. Unlike conventional retrosynthesis prediction performed by chemists and by rule-based expert systems, AI-driven retrosynthesis prediction automatically learns chemistry knowledge from off-the-shelf experimental datasets to predict reactions and retrosynthesis routes. This provides an opportunity to address many conventional challenges, including heavy reliance on extensive expertise, the sub-optimality of routes, and prohibitive computational cost. This review describes the current landscape of AI-driven retrosynthesis prediction. We first discuss formal definitions of the retrosynthesis problem and review the outstanding research challenges therein. We then review the related AI techniques and recent progress that enable retrosynthesis prediction. Moreover, we propose a novel landscape that provides a comprehensive categorization of different retrosynthesis prediction components and survey how AI reshapes each component. We conclude by discussing promising areas for future research.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158188</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>RxnScribe: A Sequence Generation Model for Reaction Diagram Parsing</title>
<link>https://hdl.handle.net/1721.1/158184</link>
<description>RxnScribe: A Sequence Generation Model for Reaction Diagram Parsing
Qian, Yujie; Guo, Jiang; Tu, Zhengkai; Coley, Connor W; Barzilay, Regina
Reaction diagram parsing is the task of extracting reaction schemes from a diagram in the chemistry literature. The reaction diagrams can be arbitrarily complex; thus, robustly parsing them into structured data is an open challenge. In this paper, we present RxnScribe, a machine learning model for parsing reaction diagrams of varying styles. We formulate this structured prediction task with a sequence generation approach, which condenses the traditional pipeline into an end-to-end model. We train RxnScribe on a dataset of 1378 diagrams and evaluate it with cross validation, achieving an 80.0% soft match F1 score, with significant improvements over previous models. Our code and data are publicly available at https://github.com/thomas0809/RxnScribe.
</description>
<pubDate>Mon, 10 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158184</guid>
<dc:date>2023-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Data Sharing in Chemistry: Lessons Learned and a Case for Mandating Structured Reaction Data</title>
<link>https://hdl.handle.net/1721.1/158183</link>
<description>Data Sharing in Chemistry: Lessons Learned and a Case for Mandating Structured Reaction Data
Mercado, Rocío; Kearnes, Steven M; Coley, Connor W
The past decade has seen a number of impressive developments in predictive chemistry and reaction informatics driven by machine learning applications to computer-aided synthesis planning. While many of these developments have been made even with relatively small, bespoke data sets, in order to advance the role of AI in the field at scale, there must be significant improvements in the reporting of reaction data. Currently, the majority of publicly available data is reported in an unstructured format and heavily imbalanced toward high-yielding reactions, which influences the types of models that can be successfully trained. In this Perspective, we analyze several data curation and sharing initiatives that have seen success in chemistry and molecular biology. We discuss several factors that have contributed to their success and how we can take lessons from these case studies and apply them to reaction data. Finally, we spotlight the Open Reaction Database and summarize key actions the community can take toward making reaction data more findable, accessible, interoperable, and reusable (FAIR), including the use of mandates from funding agencies and publishers.
</description>
<pubDate>Mon, 24 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158183</guid>
<dc:date>2023-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Computer‐aided evaluation and exploration of chemical spaces constrained by reaction pathways</title>
<link>https://hdl.handle.net/1721.1/158182</link>
<description>Computer‐aided evaluation and exploration of chemical spaces constrained by reaction pathways
Levin, Itai; Fortunato, Michael E; Tan, Kian L; Coley, Connor W
The processes of molecular design and synthetic route selection are necessarily intertwined during discovery. Computational tools have been developed to facilitate synthesis planning, but in a discovery setting, finding a single route to a single molecule of interest may be less important than finding a route that enables rapid access to a library of analogs. Here, we demonstrate how we can estimate route “diversifiability” and use it as a criterion during route selection. We illustrate how the chemical space of synthetically accessible analogs is influenced by properties of alternative starting materials or constraints on their cost. Finally, we integrate these analyses with a synthesizability‐constrained hit expansion workflow in a virtual screening pipeline for focused library expansion around putative hits to support molecular optimization. As medicinal chemistry and adjacent fields shift toward more autonomous design and synthesis of new molecules, it will be increasingly important to embed considerations of synthesizability into molecular design to ensure that computational recommendations are actionable.
</description>
<pubDate>Fri, 01 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158182</guid>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Protein codes promote selective subcellular compartmentalization</title>
<link>https://hdl.handle.net/1721.1/158180</link>
<description>Protein codes promote selective subcellular compartmentalization
Kilgore, Henry R.; Chinn, Itamar; Mikhael, Peter G.; Mitnikov, Ilan; Van Dongen, Catherine; Zylberberg, Guy; Afeyan, Lena; Banani, Salman F.; Wilson-Hawken, Susana; Ihn Lee, Tong; Barzilay, Regina; Young, Richard A.
Cells have evolved mechanisms to distribute ~10 billion protein molecules to subcellular compartments where diverse proteins involved in shared functions must assemble. Here, we demonstrate that proteins with shared functions share amino acid sequence codes that guide them to compartment destinations. A protein language model, ProtGPS, was developed that predicts with high performance the compartment localization of human proteins excluded from the training set. ProtGPS successfully guided generation of novel protein sequences that selectively assemble in the nucleolus. ProtGPS identified pathological mutations that change this code and lead to altered subcellular localization of proteins. Our results indicate that protein sequences contain not only a folding code, but also a previously unrecognized code governing their distribution to diverse subcellular compartments.
</description>
<pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158180</guid>
<dc:date>2025-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Opportunities for Machine Learning and Artificial Intelligence to Advance Synthetic Drug Substance Process Development</title>
<link>https://hdl.handle.net/1721.1/158179</link>
<description>Opportunities for Machine Learning and Artificial Intelligence to Advance Synthetic Drug Substance Process Development
Griffin, Daniel J; Coley, Connor W; Frank, Scott A; Hawkins, Joel M; Jensen, Klavs F
The goals of this Perspective are threefold: (1) to inform a broad audience, including machine learning (ML) and artificial intelligence (AI) academics and professionals, about synthetic drug substance process development, (2) to break down the general synthetic drug substance process development task into more tractable subtasks, and (3) to highlight areas in which machine learning and artificial intelligence might be beneficially developed and applied. Application of machine learning and artificial intelligence to chemical synthesis of medicinal compounds has long been discussed and has resulted in the development of a number of computer-aided synthesis planning tools by both academic groups and commercial enterprises. The focus of these efforts has primarily centered on the task of retrosynthetic analysis, as seen from the perspective of a medicinal chemist. This has left significant unrealized opportunities in the application of machine learning and artificial intelligence to aid the process chemist or engineer in commercial drug substance process development.
</description>
<pubDate>Fri, 17 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158179</guid>
<dc:date>2023-11-17T00:00:00Z</dc:date>
</item>
<item>
<title>Dataset Design for Building Models of Chemical Reactivity</title>
<link>https://hdl.handle.net/1721.1/158178</link>
<description>Dataset Design for Building Models of Chemical Reactivity
Raghavan, Priyanka; Haas, Brittany C; Ruos, Madeline E; Schleinitz, Jules; Doyle, Abigail G; Reisman, Sarah E; Sigman, Matthew S; Coley, Connor W
Models can codify our understanding of chemical reactivity and serve a useful purpose in the development of new synthetic processes via, for example, evaluating hypothetical reaction conditions or in silico substrate tolerance. Perhaps the most determining factor is the composition of the training data and whether it is sufficient to train a model that can make accurate predictions over the full domain of interest. Here, we discuss the design of reaction datasets in ways that are conducive to data-driven modeling, emphasizing the idea that training set diversity and model generalizability rely on the choice of molecular or reaction representation. We additionally discuss the experimental constraints associated with generating common types of chemistry datasets and how these considerations should influence dataset design and model building.
</description>
<pubDate>Wed, 27 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158178</guid>
<dc:date>2023-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>A physics-inspired approach to the understanding of molecular representations and models</title>
<link>https://hdl.handle.net/1721.1/158177</link>
<description>A physics-inspired approach to the understanding of molecular representations and models
Dicks, Luke; Graff, David E; Jordan, Kirk E; Coley, Connor W; Pyzer-Knapp, Edward O
The story of machine learning in general, and its application to molecular design in particular, has been a tale of evolving representations of data. Understanding the implications of the use of a particular representation – including the existence of so-called ‘activity cliffs’ for cheminformatics models – is the key to their successful use for molecular discovery. In this work we present a physics-inspired methodology which exploits analogies between model response surfaces and energy landscapes to richly describe the relationship between the representation and the model. From these similarities, a metric emerges which is analogous to the commonly used frustration metric from the chemical physics community. This new property shows state-of-the-art prediction of model error, whilst belonging to a novel class of roughness measure that extends beyond the known data allowing the trivial identification of activity cliffs even in the absence of related training or evaluation data.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158177</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Uniform volumetric single-cell processing for organ-scale molecular phenotyping</title>
<link>https://hdl.handle.net/1721.1/158176</link>
<description>Uniform volumetric single-cell processing for organ-scale molecular phenotyping
Yun, Dae Hee; Park, Young-Gyun; Cho, Jae Hun; Kamentsky, Lee; Evans, Nicholas B; DiNapoli, Nicholas; Xie, Katherine; Choi, Seo Woo; Albanese, Alexandre; Tian, Yuxuan; Sohn, Chang Ho; Zhang, Qiangge; Kim, Minyoung E; Swaney, Justin; Guan, Webster; Park, Juhyuk; Drummond, Gabi; Choi, Heejin; Ruelas, Luzdary; Feng, Guoping; Chung, Kwanghun
Extending single-cell analysis to intact tissues while maintaining organ-scale spatial information poses a major challenge due to unequal chemical processing of densely packed cells. Here we introduce Continuous Redispersion of Volumetric Equilibrium (CuRVE) in nanoporous matrices, a framework to address this challenge. CuRVE ensures uniform processing of all cells in organ-scale tissues by perpetually maintaining dynamic equilibrium of the tissue's gradually shifting chemical environment. The tissue chemical reaction environment changes at a continuous, slow rate, allowing redispersion of unevenly distributed chemicals and preserving chemical equilibrium tissue wide at any given moment. We implemented CuRVE to immunologically label whole mouse and rat brains and marmoset and human tissue blocks within 1 day. We discovered highly variable regionalized reduction of parvalbumin immunoreactive cells in wild-type adult mice, a phenotype missed by the commonly used genetic labeling. We envision that our platform will advance volumetric single-cell processing and analysis, facilitating comprehensive single-cell level investigations within their spatial context in organ-scale tissues.
</description>
<pubDate>Fri, 24 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158176</guid>
<dc:date>2025-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions</title>
<link>https://hdl.handle.net/1721.1/158175</link>
<description>Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions
DePalma, Nicholas; Smith, H; Chernova, Sonia; Hodgins, Jessica
While recent work on gesture synthesis in the agent and robot literature has treated gesture as co-speech and thus dependent on verbal utterances, we present evidence that gesture may leverage model context (i.e., the navigational task) and is not solely dependent on verbal utterance. This effect is particularly evident within ambiguous verbal utterances. Decoupling this dependency may allow future systems to synthesize clarifying gestures that resolve the ambiguous verbal utterance while enabling research into better understanding the semantics of gesture. We bring together evidence from our own experiences in this domain that allows us to see, for the first time, what kinds of end-to-end concerns must be addressed as models are developed to synthesize gesture for one-shot interactions while still preserving user outcomes and allowing for ambiguous utterances by the robot. We discuss these issues within the context of "cardinal direction gesture plans", which represent instructions that refer to the actions the human must follow in the future.
HRI ’21 Companion, March 8–11, 2021, Boulder, CO, USA
</description>
<pubDate>Mon, 08 Mar 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158175</guid>
<dc:date>2021-03-08T00:00:00Z</dc:date>
</item>
<item>
<title>A System for Interleaving Discussion and Summarization in Online Collaboration</title>
<link>https://hdl.handle.net/1721.1/158174</link>
<description>A System for Interleaving Discussion and Summarization in Online Collaboration
Tian, Sunny; Zhang, Amy; Karger, David
In many instances of online collaboration, ideation and deliberation about what to write happen separately from the synthesis of the deliberation into a cohesive document. However, this may result in a final document that has little connection to the discussion that came before. In this work, we present interleaved discussion and summarization, a process where discussion and summarization are woven together in a single space, and collaborators can switch back and forth between discussing ideas and summarizing discussion until it results in a final document that incorporates and references all discussion points. We implement this process into a tool called Wikum+ that allows groups working together on a project to create living summaries: artifacts that can grow as new collaborators, ideas, and feedback arise and shrink as collaborators come to consensus. We conducted studies where groups of six people each collaboratively wrote a proposal using Wikum+ and a proposal using a messaging platform along with Google Docs. We found that Wikum+'s integration of discussion and summarization helped users be more organized, allowing for lightweight coordination and iterative improvements throughout the collaboration process. A second study demonstrated that in larger groups, Wikum+ is more inclusive of all participants and more comprehensive in the final document compared to traditional tools.
</description>
<pubDate>Tue, 05 Jan 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158174</guid>
<dc:date>2021-01-05T00:00:00Z</dc:date>
</item>
<item>
<title>The Decline of Computers as a General Purpose Technology</title>
<link>https://hdl.handle.net/1721.1/158173</link>
<description>The Decline of Computers as a General Purpose Technology
Thompson, Neil; Spanuth, Svenja
The general-purposeness of today's computers comes from the technical breakthroughs of computer scientists like von Neumann and Turing, but also from a mutually reinforcing economic cycle, where product improvement and market growth fuel each other.
This article argues that technological and economic forces are now pushing computing away from being general purpose and towards specialization. This process, driven by the breakdown in Moore's Law, has already begun and threatens to fragment computing into 'fast lane' applications that get powerful specialized processors and 'slow lane' applications that get stuck using general purpose processors whose progress fades.
</description>
<pubDate>Mon, 22 Feb 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158173</guid>
<dc:date>2021-02-22T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from and about scientists: Consensus messaging shapes perceptions of climate change and climate scientists</title>
<link>https://hdl.handle.net/1721.1/158172</link>
<description>Learning from and about scientists: Consensus messaging shapes perceptions of climate change and climate scientists
Orchinik, Reed; Dubey, Rachit; Gershman, Samuel J; Powell, Derek M; Bhui, Rahul
Despite overwhelming scientific consensus on the existence of human-caused climate change, public opinion among Americans remains split. Directly informing people of scientific consensus is among the most prominent strategies for climate communication, yet the reasons for its effectiveness and its limitations are not fully understood. Here, we propose that consensus messaging provides information not only about the existence of climate change but also traits of climate scientists themselves. In a large (n=2,545) nationally representative survey experiment, we examine how consensus information affects belief in human-caused climate change by shaping perceptions of climate scientist credibility. In the control group (n=847), we first show that people learn both from and about climate scientists when presented with consensus and that perceived scientist credibility (especially skill) mediates up to about 40% of the total effect of consensus information on climate belief. We demonstrate that perceptions of climate scientists are malleable with two novel interventions that increase belief in climate change above and beyond consensus information.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158172</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Learning reaction-transport coupling from thermal waves</title>
<link>https://hdl.handle.net/1721.1/158171</link>
<description>Learning reaction-transport coupling from thermal waves
Kim, Suyong; Deng, Sili
Although thermal waves are ubiquitous in nature and engineering, the development of diagnostic tools capable of elucidating the roles of reaction and transport remains an unmet need. This limits our comprehension of the physics and our ability to predict wave dynamics. Here we demonstrate that thermal properties and chemical kinetics can be learned directly from observing thermal wave dynamics, using partial differential equation-constrained optimization. This enables the determination of unobserved reaction rates without the need for a comprehensive measurement of all state variables, given the model space constrained by governing equations. Examples include steady planar waves and unsteady pulsating waves, whose dynamics are commonly observed in nature. We show successful learning of thermal properties and chemical kinetics and reconstruction of wave dynamics with the inferred properties, enabling comprehension of the intricate reaction-transport coupling from thermal data.
</description>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158171</guid>
<dc:date>2024-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Hierarchically conductive electrodes unlock stable and scalable CO2 electrolysis</title>
<link>https://hdl.handle.net/1721.1/158170</link>
<description>Hierarchically conductive electrodes unlock stable and scalable CO2 electrolysis
Rufer, Simon; Nitzsche, Michael P; Garimella, Sanjay; Lake, Jack R; Varanasi, Kripa K
Electrochemical CO2 reduction has emerged as a promising CO2 utilization technology, with Gas Diffusion Electrodes becoming the predominant architecture to maximize performance. Such electrodes must maintain robust hydrophobicity to prevent flooding, while also ensuring high conductivity to minimize ohmic losses. Intrinsic material tradeoffs have led to two main architectures: carbon paper is highly conductive but floods easily, while expanded Polytetrafluoroethylene is flooding-resistant but non-conductive, limiting electrode sizes to just 5 cm2. Here we demonstrate a hierarchically conductive electrode architecture which overcomes these scaling limitations by employing inter-woven microscale conductors within a hydrophobic expanded Polytetrafluoroethylene membrane. We develop a model which captures the spatial variability in voltage and product distribution on electrodes due to ohmic losses and use it to rationally design the hierarchical architecture, which can be applied independent of catalyst chemistry or morphology. We demonstrate C2+ Faradaic efficiencies of ~75% and reduce cell voltage by as much as 0.9 V for electrodes as large as 50 cm2 by employing our hierarchically conductive electrode architecture.
</description>
<pubDate>Wed, 13 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158170</guid>
<dc:date>2024-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Untangling Mechanized Proofs</title>
<link>https://hdl.handle.net/1721.1/158169</link>
<description>Untangling Mechanized Proofs
Pit-Claudel, Clément
Proof assistants like Coq, Lean, or HOL4 rely heavily on stateful meta-programs called scripts to assemble proofs. Unlike pen-and-paper proofs, proof scripts only describe the steps to take (induct on x, apply a theorem, …), not the states that these steps lead to; as a result, plain proof scripts are essentially incomprehensible without the assistance of an interactive user interface able to run the script and show the corresponding proof states.&#13;
Until now, the standard process to communicate a proof without forcing readers to execute its script was to manually copy-paste intermediate proof states into the script, as source code comments — a tedious and error-prone exercise. Additional prose (such as for a book or tutorial) was likewise embedded in comments, preserving executability at the cost of a mediocre text-editing experience.&#13;
This paper describes a new approach to the development and dissemination of literate proof scripts, with a focus on the Coq proof assistant. Specifically, we describe two contributions: a compiler that interleaves Coq’s output with the original proof script to produce interactive webpages that are complete, self-contained presentations of Coq proofs; and a new literate programming toolkit that allows authors to switch seamlessly between prose- and code-oriented views of the same sources, by translating back and forth between reStructuredText documents and literate Coq source files. In combination, these tools offer a new way to write, communicate, and preserve proofs, combining the flexibility of procedural proof scripts and the intelligibility of declarative proofs.
SLE ’20, November 16–17, 2020, Virtual, USA
</description>
<pubDate>Mon, 16 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158169</guid>
<dc:date>2020-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Acquisition of a new language: an enriched case study documents language growth without external input in a young Korean child’s acquisition of English</title>
<link>https://hdl.handle.net/1721.1/158168</link>
<description>Acquisition of a new language: an enriched case study documents language growth without external input in a young Korean child’s acquisition of English
Lust, Barbara; Flynn, Suzanne; Kim, Ahyoung Alicia
This paper explores a case of suspension of data input during the acquisition of a second language by a young Korean child acquiring English in an English-only nursery school in the United States. Data suspension occurred naturally when the child returned to Korea for a summer where only Korean was spoken. Systematic investigations using an enriched case study methodology which assessed the nature of the child’s English target language acquisition both before and after the Korean Summer revealed significant advances in his English after the Korean Summer despite the absence of English input during this time. Several hypotheses regarding the nature and explanation of this advance are tested. It is argued that significant internal linguistic integration leading to systematization of linguistic knowledge occurred in the absence of synchronous language data input, demonstrating the significance of internal computational processes over and above language data input in the language acquisition process. Results have implications for understanding the fundamental nature of language acquisition.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158168</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Marsh restoration in front of seawalls is an economically justified nature-based solution for coastal protection</title>
<link>https://hdl.handle.net/1721.1/158167</link>
<description>Marsh restoration in front of seawalls is an economically justified nature-based solution for coastal protection
Lee, Ernie IH; Nepf, Heidi
A marsh-fronted seawall is a hybrid nature-based coastal protection solution because it attenuates wave energy, reduces erosion, and provides ecosystem services. However, we still have a limited understanding of how to quantify the marsh wave attenuation benefits for economic analysis. Here, we incorporate a prediction of wave attenuation that accounts for species-specific morphology and structural stiffness into a 1-D wave model and validate it with field measurements. Our results show that the wave attenuation varies by a factor of two across different vegetation species. Further, we performed a benefit-cost analysis, in which the economic benefits represent the environmental services value and avoided seawall heightening cost that would otherwise be required to deliver the same overtopping rate without vegetation. We applied the model to a real-world, marsh-fronted seawall design at Juniper Cove, Massachusetts. Although the benefit of marsh-fronted seawalls is sensitive to discount rate, they have benefit-cost ratios greater than one, indicating that marsh restoration in front of seawalls is an economically justified nature-based solution. Further, we found that wave attenuation and benefit-cost ratio are more sensitive to water depth than wave height. Our study demonstrates the importance of considering the coastal protection of marshes and economic benefits in one framework.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158167</guid>
</item>
<item>
<title>Sitetack: a deep learning model that improves PTM prediction by using known PTMs</title>
<link>https://hdl.handle.net/1721.1/158166</link>
<description>Sitetack: a deep learning model that improves PTM prediction by using known PTMs
Gutierrez, Clair S; Kassim, Alia A; Gutierrez, Benjamin D; Raines, Ronald T
Motivation&#13;
Post-translational modifications (PTMs) increase the diversity of the proteome and are vital to organismal life and therapeutic strategies. Deep learning has been used to predict PTM locations. Still, limitations in datasets and their analyses compromise success.&#13;
&#13;
Results&#13;
We evaluated the use of known PTM sites in prediction via sequence-based deep learning algorithms. For each PTM, known locations of that PTM were encoded as a separate amino acid before sequences were encoded via word embedding and passed into a convolutional neural network that predicts the probability of that PTM at a given site. Without labeling known PTMs, our models are on par with others. With labeling, however, we improved significantly upon extant models. Moreover, knowing PTM locations can increase the predictability of a different PTM. Our findings highlight the importance of PTMs for the installation of additional PTMs. We anticipate that including known PTM locations will enhance the performance of other proteomic machine learning algorithms.&#13;
&#13;
Availability and implementation&#13;
Sitetack is available as a web tool at https://sitetack.net; the source code, representative datasets, instructions for local use, and select models are available at https://github.com/clair-gutierrez/sitetack.
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158166</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks</title>
<link>https://hdl.handle.net/1721.1/158165</link>
<description>Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks
Goldman, Samuel; Li, Janet; Coley, Connor W
The accurate prediction of tandem mass spectra from molecular structures has the potential to unlock new metabolomic discoveries by augmenting the community's libraries of experimental reference standards. Cheminformatic spectrum prediction strategies use a "bond-breaking" framework to iteratively simulate mass spectrum fragmentations, but these methods are (a) slow due to the need to exhaustively and combinatorially break molecules and (b) inaccurate as they often rely upon heuristics to predict the intensity of each resulting fragment; neural network alternatives mitigate computational cost but are black-box and not inherently more accurate. We introduce a physically grounded neural approach that learns to predict each breakage event and score the most relevant subset of molecular fragments quickly and accurately. We evaluate our model by predicting spectra from both public and private standard libraries, demonstrating that our hybrid approach offers state-of-the-art prediction accuracy, improved metabolite identification from a database of candidates, and higher interpretability when compared to previous breakage methods and black-box neural networks. The grounding of our approach in physical fragmentation events shows especially great promise for elucidating natural product molecules with more complex scaffolds.
</description>
<pubDate>Tue, 27 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158165</guid>
<dc:date>2024-02-27T00:00:00Z</dc:date>
</item>
<item>
<title>Incorporating Synthetic Accessibility in Drug Design: Predicting Reaction Yields of Suzuki Cross-Couplings by Leveraging AbbVie’s 15-Year Parallel Library Data Set</title>
<link>https://hdl.handle.net/1721.1/158164</link>
<description>Incorporating Synthetic Accessibility in Drug Design: Predicting Reaction Yields of Suzuki Cross-Couplings by Leveraging AbbVie’s 15-Year Parallel Library Data Set
Raghavan, Priyanka; Rago, Alexander J; Verma, Pritha; Hassan, Majdi M; Goshu, Gashaw M; Dombrowski, Amanda W; Pandey, Abhishek; Coley, Connor W; Wang, Ying
Despite the increased use of computational tools to supplement medicinal chemists' expertise and intuition in drug design, predicting synthetic yields in medicinal chemistry endeavors remains an unsolved challenge. Existing design workflows could profoundly benefit from reaction yield prediction, as precious material waste could be reduced, and a greater number of relevant compounds could be delivered to advance the design, make, test, analyze (DMTA) cycle. In this work, we detail the evaluation of AbbVie's medicinal chemistry library data set to build machine learning models for the prediction of Suzuki coupling reaction yields. The combination of density functional theory (DFT)-derived features and Morgan fingerprints was identified to perform better than one-hot encoded baseline modeling, furnishing encouraging results. Overall, we observe modest generalization to unseen reactant structures within the 15-year retrospective library data set. Additionally, we compare predictions made by the model to those made by expert medicinal chemists, finding that the model can often predict both reaction success and reaction yields with greater accuracy. Finally, we demonstrate the application of this approach to suggest structurally and electronically similar building blocks to replace those predicted or observed to be unsuccessful prior to or after synthesis, respectively. The yield prediction model was used to select similar monomers predicted to have higher yields, resulting in greater synthesis efficiency of relevant drug-like molecules.
</description>
<pubDate>Wed, 05 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158164</guid>
<dc:date>2024-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering natural product science with AI: leveraging multimodal data and knowledge graphs</title>
<link>https://hdl.handle.net/1721.1/158163</link>
<description>Empowering natural product science with AI: leveraging multimodal data and knowledge graphs
Meijer, David; Beniddir, Mehdi A; Coley, Connor W; Mejri, Yassine M; Öztürk, Meltem; van der Hooft, Justin JJ; Medema, Marnix H; Skiredj, Adam
Artificial intelligence (AI) is accelerating how we conduct science, from folding proteins with AlphaFold and summarizing literature findings with large language models, to annotating genomes and prioritizing newly generated molecules for screening using specialized software. However, the application of AI to emulate human cognition in natural product research and its subsequent impact has so far been limited. One reason for this limited impact is that available natural product data is multimodal, unbalanced, unstandardized, and scattered across many data repositories. This makes natural product data challenging to use with existing deep learning architectures that consume fairly standardized, often non-relational, data. It also prevents models from learning overarching patterns in natural product science. In this Viewpoint, we address this challenge and support ongoing initiatives aimed at democratizing natural product data by collating our collective knowledge into a knowledge graph. By doing so, we believe there will be an opportunity to use such a knowledge graph to develop AI models that can truly mimic natural product scientists' decision-making.
</description>
<pubDate>Fri, 16 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158163</guid>
<dc:date>2024-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Reproducing Reaction Mechanisms with Machine‐Learning Models Trained on a Large‐Scale Mechanistic Dataset</title>
<link>https://hdl.handle.net/1721.1/158162</link>
<description>Reproducing Reaction Mechanisms with Machine‐Learning Models Trained on a Large‐Scale Mechanistic Dataset
Joung, Joonyoung F; Fong, Mun Hong; Roh, Jihye; Tu, Zhengkai; Bradshaw, John; Coley, Connor W
Mechanistic understanding of organic reactions can facilitate reaction development, impurity prediction, and in principle, reaction discovery. While several machine learning models have sought to address the task of predicting reaction products, their extension to predicting reaction mechanisms has been impeded by the lack of a corresponding mechanistic dataset. In this study, we construct such a dataset by imputing intermediates between experimentally reported reactants and products using expert reaction templates and train several machine learning models on the resulting dataset of 5,184,184 elementary steps. We explore the performance and capabilities of these models, focusing on their ability to predict reaction pathways and recapitulate the roles of catalysts and reagents. Additionally, we demonstrate the potential of mechanistic models in predicting impurities, often overlooked by conventional models. We conclude by evaluating the generalizability of mechanistic models to new reaction types, revealing challenges related to dataset diversity, consecutive predictions, and violations of atom conservation.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158162</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Bertha: Tunneling through the Network API</title>
<link>https://hdl.handle.net/1721.1/158161</link>
<description>Bertha: Tunneling through the Network API
Narayan, Akshay; Panda, Aurojit; Alizadeh, Mohammad; Balakrishnan, Hari; Krishnamurthy, Arvind; Shenker, Scott
Network APIs such as UNIX sockets, DPDK, Netmap, etc. assume that networks provide only end-to-end connectivity. However, networks increasingly include smart NICs and programmable switches that can implement both network and application functions. Several recent works have shown the benefit of offloading application functionality to the network, but using these approaches requires changing not just the applications, but also network and system configuration. In this paper we propose Bertha, a network API that provides a uniform abstraction for offloads, aiming to simplify their use.
HotNets ’20, November 4–6, 2020, Virtual Event, USA
</description>
<pubDate>Wed, 04 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158161</guid>
<dc:date>2020-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Differentiable Vector Graphics Rasterization for Editing and Learning</title>
<link>https://hdl.handle.net/1721.1/158158</link>
<description>Differentiable Vector Graphics Rasterization for Editing and Learning
Li, Tzu-Mao; Lukac, Mike; Gharbi, Michael; Ragan-Kelley, Jonathan
We introduce a differentiable rasterizer that bridges the vector graphics and raster image domains, enabling powerful raster-based loss functions, optimization procedures, and machine learning techniques to edit and generate vector content. We observe that vector graphics rasterization is differentiable after pixel prefiltering. Our differentiable rasterizer offers two prefiltering options: an analytical prefiltering technique and a multisampling anti-aliasing technique. The analytical variant is faster but can suffer from artifacts such as conflation. The multisampling variant is still efficient, and can render high-quality images while computing unbiased gradients for each pixel with respect to curve parameters.&#13;
We demonstrate that our rasterizer enables new applications, including a vector graphics editor guided by image metrics, a painterly rendering algorithm that fits vector primitives to an image by minimizing a deep perceptual loss function, new vector graphics editing algorithms that exploit well-known image processing methods such as seam carving, and deep generative models that generate vector content from raster-only supervision under a VAE or GAN training objective.
</description>
<pubDate>Thu, 26 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158158</guid>
<dc:date>2020-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Spatially Varying Gloss Reproduction for 3D Printing</title>
<link>https://hdl.handle.net/1721.1/158157</link>
<description>Towards Spatially Varying Gloss Reproduction for 3D Printing
Piovarci, Michal; Foshey, Michael; Babaei, Vahid; Rusinkiewicz, Szymon; Matusik, Wojciech; Didyk, Piotr
3D printing technology is a powerful tool for manufacturing complex shapes with high-quality textures. Gloss, next to color and shape, is one of the most salient visual aspects of an object. Unfortunately, printing a wide range of spatially-varying gloss properties using state-of-the-art 3D printers is challenging as it relies on geometrical modifications to achieve the desired appearance. A common post-processing step is to apply off-the-shelf varnishes that modify the final gloss. The main difficulty in automating this process lies in the physical properties of the varnishes which owe their appearance to a high concentration of large particles and as such, they cannot be easily deposited with current 3D color printers. As a result, fine-grained control of gloss properties using today's 3D printing technologies is limited in terms of both spatial resolution and the range of achievable gloss. We address the above limitations and propose new printing hardware based on piezo-actuated needle valves capable of jetting highly viscous varnishes. Based on the new hardware setup, we present the complete pipeline for controlling the gloss of a given 2.5D object, from printer calibration, through material selection, to the manufacturing of models with spatially-varying reflectance. Furthermore, we discuss the potential integration with current 3D printing technology. Apart from being a viable solution for 3D printing, our method offers an additional and essential benefit of separating color and gloss fabrication which makes the process more flexible and enables high-quality color and gloss reproduction.
</description>
<pubDate>Thu, 26 Nov 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158157</guid>
<dc:date>2020-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>Maximizing Free Energy Gain</title>
<link>https://hdl.handle.net/1721.1/158156</link>
<description>Maximizing Free Energy Gain
Kolchinsky, Artemy; Marvian, Iman; Gokler, Can; Liu, Zi-Wen; Shor, Peter; Shtanko, Oles; Thompson, Kevin; Wolpert, David; Lloyd, Seth
Maximizing the amount of work harvested from an environment is important for a wide variety of biological and technological processes, from energy-harvesting processes such as photosynthesis to energy storage systems such as fuels and batteries. Here, we consider the maximization of free energy — and by extension, the maximum extractable work — that can be gained by a classical or quantum system that undergoes driving by its environment. We consider how the free energy gain depends on the initial state of the system while also accounting for the cost of preparing the system. We provide simple necessary and sufficient conditions for increasing the gain of free energy by varying the initial state. We also derive simple formulae that relate the free energy gained using the optimal initial state to that gained using a suboptimal initial state. Finally, we demonstrate that the problem of finding the optimal initial state may have two distinct regimes, one easy and one difficult, depending on the temperatures used for preparation and work extraction. We illustrate our results on a simple model of an information engine.
</description>
<pubDate>Mon, 20 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158156</guid>
<dc:date>2025-01-20T00:00:00Z</dc:date>
</item>
<item>
<title>Decoding Codon Bias: The Role of tRNA Modifications in Tissue-Specific Translation</title>
<link>https://hdl.handle.net/1721.1/158155</link>
<description>Decoding Codon Bias: The Role of tRNA Modifications in Tissue-Specific Translation
Ando, Daisuke; Rashad, Sherif; Begley, Thomas J.; Endo, Hidenori; Aoki, Masashi; Dedon, Peter C.; Niizuma, Kuniyasu
The tRNA epitranscriptome has been recognized as an important player in mRNA translation regulation. Our knowledge of the role of the tRNA epitranscriptome in fine-tuning translation via codon decoding at tissue or cell levels remains incomplete. We analyzed tRNA expression and modifications as well as codon optimality across seven mouse tissues. Our analysis revealed distinct enrichment patterns of tRNA modifications in different tissues. Queuosine (Q) tRNA modification was most enriched in the brain compared to other tissues, while mitochondrial tRNA modifications and tRNA expression were highest in the heart. Using this observation, we synthesized, and delivered in vivo, codon-mutated EGFP for Q-codons, where the C-ending Q-codons were replaced with U-ending codons. The protein levels of mutant EGFP were downregulated in liver, which is poor in Q, while in brain, EGFP levels did not change. These data show that understanding tRNA modification enrichments across tissues is not only essential for understanding codon decoding and bias but can also be utilized for optimizing gene and mRNA therapeutics to be more tissue-, cell-, or condition-specific.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158155</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>The Response Regulator OmpR Negatively Controls the Expression of Genes Implicated in Tilimycin and Tilivalline Cytotoxin Production in Klebsiella oxytoca</title>
<link>https://hdl.handle.net/1721.1/158154</link>
<description>The Response Regulator OmpR Negatively Controls the Expression of Genes Implicated in Tilimycin and Tilivalline Cytotoxin Production in Klebsiella oxytoca
Varela-Nájera, Ramón G.; De la Cruz, Miguel A.; Soria-Bustos, Jorge; González-Horta, Carmen; Delgado-Gardea, Ma Carmen E.; Yáñez-Santos, Jorge A.; Cedillo, María L.; Hirakawa, Hidetada; Fox, James G.; Sánchez-Ramírez, Blanca; Ares, Miguel A.
Klebsiella oxytoca toxigenic strains represent a critical health threat, mainly due to their link to antibiotic-associated hemorrhagic colitis. This serious condition results from the bacteria’s ability to produce tilimycin and tilivalline cytotoxins. Our research highlights the pivotal role of OmpR, a key regulator within the EnvZ/OmpR two-component system, in controlling the virulence factors associated with K. oxytoca. Our findings strongly indicate that OmpR is a repressor of the aroX and npsA genes, the first genes of aroX and NRPS operons, respectively, which are indispensable for producing these enterotoxins. Notably, in the absence of OmpR, we observe a significant increase in cytotoxic effects on Caco-2 cells. These observations identify OmpR as a crucial negative transcription regulator for both operons, effectively managing the release of these cytotoxins. This research deepens our understanding of the mechanisms of toxigenic K. oxytoca and opens promising avenues for targeting OmpR for new therapeutic interventions. By focusing on this innovative approach, we can develop more effective solutions to combat this pressing health challenge, ultimately improving patient outcomes against this pathogen.
</description>
<pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158154</guid>
<dc:date>2025-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Contactless Multi-Modal Sensing Approach for Material Assessment and Recovery in Building Deconstruction</title>
<link>https://hdl.handle.net/1721.1/158153</link>
<description>A Contactless Multi-Modal Sensing Approach for Material Assessment and Recovery in Building Deconstruction
Cabral, Sophia; Klimenka, Mikita; Bademosi, Fopefoluwa; Lau, Damon; Pender, Stefanie; Villaggi, Lorenzo; Stoddart, James; Donnelly, James; Storey, Peter; Benjamin, David
As material scarcity and environmental concerns grow, material reuse and waste reduction are gaining attention based on their potential to reduce carbon emissions and promote net-zero buildings. This study develops an innovative approach that combines multi-modal sensing technologies with machine learning to enable contactless assessment of in situ building materials for reuse potential. By integrating thermal imaging, red, green, and blue (RGB) cameras, as well as depth sensors, the system analyzes material conditions and reveals hidden geometries within existing buildings. This approach enhances material understanding by analyzing existing materials, including their compositions, histories, and assemblies. A case study on drywall deconstruction demonstrates that these technologies can effectively guide the deconstruction process, potentially reducing material costs and carbon emissions significantly. The findings highlight feasible scenarios for drywall reuse and offer insights into improving existing deconstruction techniques through automated feedback and visualization of cut lines and fastener positions. This research indicates that contactless assessment and automated deconstruction methods are technically viable, economically advantageous, and environmentally beneficial. Serving as an initial step toward novel methods to view and classify existing building materials, this study lays a foundation for future research, promoting sustainable construction practices that optimize material reuse and reduce negative environmental impact.
</description>
<pubDate>Tue, 14 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158153</guid>
<dc:date>2025-01-14T00:00:00Z</dc:date>
</item>
<item>
<title>Visual Acuity Outcomes and Influencing Factors in a Cohort of UK Real-World Diabetic Macular Oedema Patients During the First Two Years of Anti-VEGF Treatment</title>
<link>https://hdl.handle.net/1721.1/158152</link>
<description>Visual Acuity Outcomes and Influencing Factors in a Cohort of UK Real-World Diabetic Macular Oedema Patients During the First Two Years of Anti-VEGF Treatment
Wen, Qing; Karcher, Helene; Wright, David M.; Sinha, Samriddhi Buxy; Chakravarthy, Usha; Santos, Catarina; Igwe, Franklin; Salongcay, Recivall; Curran, Katie; Peto, Tunde
Background/Objectives: The visual acuity (VA) outcomes after the first and second years of anti-vascular endothelial growth factor (anti-VEGF) treatment in patients with diabetic macular oedema (DMO) were evaluated, and the factors associated with treatment success were investigated. Methods: Using Medisoft electronic medical records (UK), this retrospective cohort study analysed VA outcomes, changes, and determinants in DMO patients at year 1 and year 2 after initial anti-VEGF injection. Descriptive analysis examined baseline demographics and clinical characteristics, while regression models were used to assess associations between these factors and changes in VA. Results: 728 DMO patients (1035 eyes) treated with anti-VEGFs (ranibizumab, aflibercept, or bevacizumab) at the Northern Ireland Mater Macular Clinic from 2008 to 2021 were evaluated. The mean age was 64.5 (SD 12.8) years, and 59.6% were male. In the first year, the median annual injection number and interval were 6.0 (IQR 5.0–8.0) and 6.1 weeks (IQR 5.4–7.8), respectively, and in the second year, they were 3.0 (IQR 2.0–5.0) and 10.0 weeks (IQR 6.5–20.1). In the first two treatment years, 83.4% and 79.8% of eyes had improved/stable VA (ISVA), respectively. The injection number, interval, baseline VA, age, and proliferative diabetic retinopathy (PDR) significantly impacted VA outcomes. Conclusions: Our study confirms the effectiveness of anti-VEGF treatments in improving or maintaining vision for DMO patients, consistent with previous real-world clinical data. An older age, a better baseline VA, low annual injection numbers (&lt;5), and less frequent injection intervals (≥12 weeks) were negatively associated with ISVA success in the first two years. These findings have implications for managing patient expectations, allocating resources, and understanding DMO clinical management.
</description>
<pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158152</guid>
<dc:date>2025-01-13T00:00:00Z</dc:date>
</item>
<item>
<title>Stakeholders’ perceptions of and willingness to pay for circular economy in the construction sector</title>
<link>https://hdl.handle.net/1721.1/158151</link>
<description>Stakeholders’ perceptions of and willingness to pay for circular economy in the construction sector
Berglund-Brown, Juliana; Pandey, Akrisht; Duarte, Fabio; Ganitsky, Raquel; Kirchain, Randy; Zheng, Siqi
Adopting Circular Economy practices in the construction industry can help reduce greenhouse gas emissions. However, many barriers exist to adoption, and current perceptions of and willingness to pay for circularity have yet to be quantified. This study seeks to understand the various perceptions of circularity in the construction industry, characterize uncertainties and risks, and identify economic incentives and opportunities that could accelerate circular adoption via an industry survey of three stakeholder groups. 58 stakeholders filled out part of the survey, and 42 stakeholders completed the majority of questions. Real estate developers are willing to pay an average premium of 10% for construction costs if there’s a minimum embodied carbon reduction of 53%. Design and construction professionals and material suppliers were also surveyed. Reasons for adopting circular practices were primarily driven by client, design team, and net zero goals. The results of this survey begin to characterize the economic landscape of what is needed for a circular transition in the built environment.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158151</guid>
</item>
<item>
<title>A hypergraph model shows the carbon reduction potential of effective space use in housing</title>
<link>https://hdl.handle.net/1721.1/158149</link>
<description>A hypergraph model shows the carbon reduction potential of effective space use in housing
Weber, Ramon Elias; Mueller, Caitlin; Reinhart, Christoph
Humans spend over 90% of their time in buildings, which account for 40% of anthropogenic greenhouse gas emissions and are a leading driver of climate change. Incentivizing more sustainable construction, building codes are used to enforce indoor comfort standards and minimum energy efficiency requirements. However, they currently only reward measures such as equipment or envelope upgrades and disregard the actual spatial configuration and usage. Using a new hypergraph model that encodes building floorplan organization and facilitates automatic geometry creation, we demonstrate that space efficiency outperforms envelope upgrades in terms of operational carbon emissions in 72%, 61% and 33% of surveyed buildings in Zurich, New York, and Singapore. Using automatically generated floorplans in a case study in Zurich further increased access to daylight by up to 24%, revealing that auto-generated floorplans have the potential to improve the quality of residential spaces in terms of environmental performance and access to daylight.
</description>
<pubDate>Fri, 27 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158149</guid>
<dc:date>2024-09-27T00:00:00Z</dc:date>
</item>
<item>
<title>An implantable piezoelectric ultrasound stimulator (ImPULS) for deep brain activation</title>
<link>https://hdl.handle.net/1721.1/158148</link>
<description>An implantable piezoelectric ultrasound stimulator (ImPULS) for deep brain activation
Hou, Jason F; Nayeem, Md Osman Goni; Caplan, Kian A; Ruesch, Evan A; Caban-Murillo, Albit; Criado-Hidalgo, Ernesto; Ornellas, Sarah B; Williams, Brandon; Pearce, Ayeilla A; Dagdeviren, Huseyin E; Surets, Michelle; White, John A; Shapiro, Mikhail G; Wang, Fan; Ramirez, Steve; Dagdeviren, Canan
Precise neurostimulation can revolutionize therapies for neurological disorders. Electrode-based stimulation devices face challenges in achieving precise and consistent targeting due to the immune response and the limited penetration of electrical fields. Ultrasound can aid in energy propagation, but transcranial ultrasound stimulation in the deep brain has limited spatial resolution caused by bone and tissue scattering. Here, we report an implantable piezoelectric ultrasound stimulator (ImPULS) that generates an ultrasonic focal pressure of 100 kPa to modulate the activity of neurons. ImPULS is a fully-encapsulated, flexible piezoelectric micromachined ultrasound transducer that incorporates a biocompatible piezoceramic, potassium sodium niobate [(K,Na)NbO3]. The absence of electrochemically active elements poses a new strategy for achieving long-term stability. We demonstrated that ImPULS can i) excite neurons in a mouse hippocampal slice ex vivo, ii) activate cells in the hippocampus of an anesthetized mouse to induce expression of activity-dependent gene c-Fos, and iii) stimulate dopaminergic neurons in the substantia nigra pars compacta to elicit time-locked modulation of nigrostriatal dopamine release. This work introduces a non-genetic ultrasound platform for spatially-localized neural stimulation and exploration of basic functions in the deep brain.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158148</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Wearable bio-adhesive metal detector array (BioMDA) for spinal implants</title>
<link>https://hdl.handle.net/1721.1/158147</link>
<description>Wearable bio-adhesive metal detector array (BioMDA) for spinal implants
Dynamic tracking of spinal instrumentation could facilitate real-time evaluation of hardware integrity and in so doing alert patients/clinicians of potential failure(s). Critically, no method yet exists to continually monitor the integrity of spinal hardware and by proxy the process of spinal arthrodesis; as such, hardware failures are often not appreciated until clinical symptoms manifest. Accordingly, herein, we report on the development and engineering of a bio-adhesive metal detector array (BioMDA), a potential wearable solution for real-time, non-invasive positional analyses of osseous implants within the spine. The electromagnetic coupling mechanism and intimate interfacial adhesion enable the precise sensing of the metallic implants' position without the use of radiation. The customized decoupling models developed facilitate the precise determination of the horizontal and vertical positions of the implants with high accuracy (e.g., &lt;0.5 mm). These data support the potential use of BioMDA in real-time/dynamic postoperative monitoring of spinal implants.
</description>
<pubDate>Fri, 06 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158147</guid>
<dc:date>2024-09-06T00:00:00Z</dc:date>
</item>
<item>
<title>Place identity: a generative AI’s perspective</title>
<link>https://hdl.handle.net/1721.1/158146</link>
<description>Place identity: a generative AI’s perspective
Jang, Kee Moon; Chen, Junda; Kang, Yuhao; Kim, Junghwan; Lee, Jinhyung; Duarte, Fabio; Ratti, Carlo
Do cities have a collective identity? The latest advancements in generative artificial intelligence (AI) models have enabled the creation of realistic representations learned from vast amounts of data. In this study, we test the potential of generative AI as the source of textual and visual information in capturing the place identity of cities assessed by filtered descriptions and images. We asked questions on the place identity of 64 global cities to two generative AI models, ChatGPT and DALL·E2. Furthermore, given the ethical concerns surrounding the trustworthiness of generative AI, we examined whether the results were consistent with real urban settings. In particular, we measured similarity between text and image outputs with Wikipedia data and images searched from Google, respectively, and compared across cases to identify how unique the generated outputs were for each city. Our results indicate that generative models have the potential to capture the salient characteristics of cities that make them distinguishable. This study is among the first attempts to explore the capabilities of generative AI in simulating the built environment in regard to place-specific meanings. It contributes to urban design and geography literature by fostering research opportunities with generative AI and discussing potential limitations for future studies.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158146</guid>
</item>
<item>
<title>Passive Monitoring of Parkinson Tremor in Daily Life: A Prototypical Network Approach</title>
<link>https://hdl.handle.net/1721.1/158145</link>
<description>Passive Monitoring of Parkinson Tremor in Daily Life: A Prototypical Network Approach
Evers, Luc J. W.; Raykov, Yordan P.; Heskes, Tom M.; Krijthe, Jesse H.; Bloem, Bastiaan R.; Little, Max A.
Objective and continuous monitoring of Parkinson’s disease (PD) tremor in free-living conditions could benefit both individual patient care and clinical trials, by overcoming the snapshot nature of clinical assessments. To enable robust detection of tremor in the context of limited amounts of labeled training data, we propose to use prototypical networks, which can embed domain expertise about the heterogeneous tremor and non-tremor sub-classes. We evaluated our approach using data from the Parkinson@Home Validation study, including 8 PD patients with tremor, 16 PD patients without tremor, and 24 age-matched controls. We used wrist accelerometer data and synchronous expert video annotations for the presence of tremor, captured during unscripted daily life activities in and around the participants’ own homes. Based on leave-one-subject-out cross-validation, we demonstrate the ability of prototypical networks to capture free-living tremor episodes. Specifically, we demonstrate that prototypical networks can be used to enforce robust performance across domain-informed sub-classes, including different tremor phenotypes and daily life activities.
</description>
<pubDate>Thu, 09 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158145</guid>
<dc:date>2025-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Magnetoelectric Extracellular Vesicle Latency-Targeting (MELT) Nanotherapeutic for the Block-Lock-and-Kill HIV Eradication Strategy</title>
<link>https://hdl.handle.net/1721.1/158143</link>
<description>Magnetoelectric Extracellular Vesicle Latency-Targeting (MELT) Nanotherapeutic for the Block-Lock-and-Kill HIV Eradication Strategy
Andre, Mickensone; Kolishetti, Nagesh; Yndart, Adriana; Vashist, Arti; Nair, Madhavan; Raymond, Andrea D.
Background: Human immunodeficiency virus (HIV) establishes latent infections in cellular reservoirs, including microglia. HC69 cells, a microglial model of HIV latency, contain an HIV promoter long terminal repeat (LTR)-GFP reporter and were used for testing the efficacy of a two-step magnetoelectric nanoparticle (MENP) and extracellular vesicle (xEV) latency-targeting (MELT) nanotherapeutic. GFP expression in HC69 at rest is low (GFPLo), and upon exposure to LTR transcription-activating agents (i.e., TNF-α), cells are induced to be high-expressing (GFPHi). Methods: The first step of MELT utilized ZL0580, an HIV Tat inhibitor loaded into EVs (80%) via incubation. ZL0580-EVs were taken up by GFPLo cells and blocked LTR transcriptional reactivation by 50% and were 90% less toxic than ZL0580 alone. The second step in MELT involved conjugation of monomethyl auristatin E (MMAE) to MENPs. HPLC measurements showed 80% MMAE attachment to MENPs. Flow cytometry-based measurements of the membrane potential indicated that the membranes of GFPHi HC69 cells were 60% more polarized than GFPLo HC69 cells. More MMAE–MENPs were internalized by GFPLo HC69 cells. Results: Using a mixed-cell blood–brain barrier (BBB) Transwell model, we demonstrated that 20% of MELT crossed the BBB, was taken up by HC69 cells, and reduced LTR reactivation by 10%. Conclusions: Overall, this study demonstrated that MELT can potentially be utilized as a nanotherapeutic to target HIV latency in microglia.
</description>
<pubDate>Thu, 09 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158143</guid>
<dc:date>2025-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour</title>
<link>https://hdl.handle.net/1721.1/158142</link>
<description>The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
Alghowinem, Sharifa; Caldwell, Sabrina; Radwan, Ibrahim; Wagner, Michael; Gedeon, Tom
Detecting deceptive behaviour for surveillance and border protection is critical for a country’s security. With the advancement of technology in relation to sensors and artificial intelligence, recognising deceptive behaviour could be performed automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept for deception detection. Recognising deceptive behaviour has been attempted; however, only a few studies have analysed this behaviour from gait and body movement. This research involves a multimodal approach for deception detection from gait, where we fuse features extracted from body movement behaviours from a video signal, acoustic features from walking steps from an audio signal, and the dynamics of walking movement using an accelerometer sensor. Using the video recording of walking from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification results obtained reached an accuracy of up to 88% through feature fusion, with an average of 60% from both single and multimodal signals. Analysing body movement using single modality showed that the visual signal had the highest performance followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion, where hybrid fusion not only achieved the highest classification results, but also increased the confidence of the results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. 
From these baseline results, we can conclude that pattern recognition techniques could help in characterising deceptive behaviour, where future work will focus on exploring the tuning and enhancement of the results and techniques.
</description>
<pubDate>Thu, 26 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158142</guid>
<dc:date>2024-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Investigating the Applicability of the Peak Density Thickness Parameter over the Equatorial Region</title>
<link>https://hdl.handle.net/1721.1/158141</link>
<description>Investigating the Applicability of the Peak Density Thickness Parameter over the Equatorial Region
Shammat, Mohamed O.; Reinisch, Bodo W.; Galkin, Ivan; Erickson, Philip J.; Weitzen, Jay A.; Rideout, William C.
The Peak Density Thickness (PDT) refers to a vertical region in the ionosphere encompassing the F2 peak, where electron density is at its maximum, and extending upward—maintaining a constant density—for a fixed altitude beyond this peak. This study builds on the previously established PDT concept, initially explored at midlatitudes using data from Millstone Hill, by evaluating its applicability and effectiveness over equatorial latitudes using data from the Jicamarca Incoherent Scatter Radar (ISR) in Lima, Peru. A comprehensive analysis of electron density profiles measured by the Jicamarca ISR, spanning 1997 to 2020, was conducted using the Madrigal database to extract the PDT parameter for the F2 layer. Findings from the Jicamarca ISR indicate that the PDT parameter peaks around solar noon, aligning with observations from Millstone Hill. For selected case studies, the Vary-Chap topside model was employed to reconstruct the ionospheric profile above the F2 peak and PDT, demonstrating the model’s enhanced effectiveness when incorporating the PDT parameter over equatorial regions. This research confirms the presence of PDT in equatorial regions, consistent with its behavior at midlatitudes, and underscores the importance of PDT in refining predictive ionospheric models across different latitudes.
</description>
<pubDate>Thu, 26 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158141</guid>
<dc:date>2024-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Smart City Products and Their Materials Assessment Using the Pentagon Framework</title>
<link>https://hdl.handle.net/1721.1/158140</link>
<description>Smart City Products and Their Materials Assessment Using the Pentagon Framework
Ponce, Pedro; Rojas, Mario; Mendez, Juana Isabel; Anthony, Brian; Bradley, Russel; Fayek, Aminah Robinson
Smart cities are complex urban environments that rely on advanced technology and data analytics to enhance city services&amp;rsquo; quality of life, sustainability, and efficiency. As these cities continue to evolve, there is a growing need for a structured framework to evaluate and integrate products that align with smart city objectives. This paper introduces the Pentagon Framework, a comprehensive evaluation method designed to ensure that products and their materials meet the specific needs of smart cities. The framework focuses on five key features&amp;mdash;smart, sustainable, sensing, social, and safe&amp;mdash;collectively called the Penta-S concept. These features provide a structured approach to categorizing and assessing products, ensuring alignment with the city&amp;rsquo;s goals for efficiency, sustainability, and user experience. The &lt;i&gt;Smart City Pentagon Framework Analyzer&lt;/i&gt; is also presented, a dedicated web application that facilitates interaction with the framework. It allows product data input, provides feedback on alignment with the Penta-S features, and suggests personality traits based on the OCEAN model. Complementing the web application, the &lt;i&gt;Smart City Penta-S Compliance Assistant&lt;/i&gt; API, developed through ChatGPT, offers a more profound, personalized evaluation of products, including the life cycle phase recommendations using the IPPMD model. This paper contributes to the development of smart city solutions by providing a flexible framework that can be applied to any product type, optimizing its life cycle, and ensuring compliance with the Pentagon Framework. This approach improves product integration and fosters user satisfaction by tailoring products and their materials to meet specific user preferences and needs within the smart city environment. 
The proposed framework emphasizes citizen-centric design and highlights its advantages over conventional evaluation methods, ultimately enhancing urban planning and smart city development.
</description>
<pubDate>Wed, 25 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158140</guid>
<dc:date>2024-12-25T00:00:00Z</dc:date>
</item>
<item>
<title>US federal resource allocations are inconsistent with concentrations of energy poverty</title>
<link>https://hdl.handle.net/1721.1/158135</link>
<description>US federal resource allocations are inconsistent with concentrations of energy poverty
Batlle, Carlos; Heller, Peter; Knittel, Christopher; Schittekatte, Tim
Recent data from the US Energy Information Administration reveals that nearly one in three households in the United States report experiencing energy poverty, and this number is only expected to rise. Federal assistance programs exist, but allocations across states have been nearly static since 1984, while the distribution of energy poverty is dynamic in location and time. We implement a LASSO-based machine learning approach using sociodemographic and geographical information to estimate energy burden in each US census tract for 2015 and 2020. We then compare the allocation to states from the Low Income Home Energy Assistance Program to an optimized allocation. We allocate funds to the most burdened households, providing them with enough assistance to reduce their energy expenditures so that their household energy burden is equal to a new maximum allowable energy burden. This markedly shifts funds from the northern cold-weather states to the southern warm-weather states.
</description>
<pubDate>Fri, 11 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158135</guid>
<dc:date>2024-10-11T00:00:00Z</dc:date>
</item>
<item>
<title>Acceleration by Stepsize Hedging: Multi-Step Descent and the Silver Stepsize Schedule</title>
<link>https://hdl.handle.net/1721.1/158132</link>
<description>Acceleration by Stepsize Hedging: Multi-Step Descent and the Silver Stepsize Schedule
Altschuler, Jason; Parrilo, Pablo
Can we accelerate the convergence of gradient descent without changing the algorithm, just by judiciously choosing stepsizes? Surprisingly, we show that the answer is yes. Our proposed Silver Stepsize Schedule optimizes strongly convex functions in κ^(log_ρ 2) ≈ κ^(0.7864) iterations, where ρ = 1 + √2 is the silver ratio and κ is the condition number. This is intermediate between the textbook unaccelerated rate κ and the accelerated rate κ^(1/2) due to Nesterov in 1983. The non-strongly convex setting is conceptually identical, and standard black-box reductions imply an analogous partially accelerated rate ε^(−log_ρ 2) ≈ ε^(−0.7864). We conjecture and provide partial evidence that these rates are optimal among all stepsize schedules.
The Silver Stepsize Schedule is constructed recursively in a fully explicit way. It is non-monotonic, fractal-like, and approximately periodic of period κ^(log_ρ 2). This leads to a phase transition in the convergence rate: initially super-exponential (acceleration regime), then exponential (saturation regime).
The core algorithmic intuition is hedging between individually suboptimal strategies (short steps and long steps), since bad cases for the former are good cases for the latter, and vice versa. Properly combining these stepsizes yields faster convergence due to the misalignment of worst-case functions. The key challenge in proving this speedup is enforcing long-range consistency conditions along the algorithm's trajectory. We do this by developing a technique that recursively glues constraints from different portions of the trajectory, thus removing a key stumbling block in previous analyses of optimization algorithms. More broadly, we believe that the concepts of hedging and multi-step descent have the potential to be powerful algorithmic paradigms in a variety of contexts in optimization and beyond.
This paper publishes and extends the first author's 2018 Master's Thesis (advised by the second author), which established for the first time that judiciously choosing stepsizes can enable acceleration in convex optimization. Prior to this thesis, the only such result was for the special case of quadratic optimization, due to Young in 1953.
</description>
<pubDate>Fri, 13 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158132</guid>
<dc:date>2024-12-13T00:00:00Z</dc:date>
</item>
<item>
<title>Intel’s Fall from Grace</title>
<link>https://hdl.handle.net/1721.1/158131</link>
<description>Intel’s Fall from Grace
Cusumano, Michael
Intel continues to dominate the microprocessor market for personal computers and datacenter servers for cloud services, but it has fallen sharply behind Nvidia, the new platform leader for AI applications and GPU servers. Intel's decline has two main causes, apart from the innovations at Nvidia. One involves the inability to adapt to new technologies and customers (i.e., mobile and AI) as they emerged. A second involves the commitment to manufacture its own microprocessors even though advanced semiconductor manufacturing, led by Taiwan Semiconductor Manufacturing Corp. (TSMC), has evolved into a highly specialized capability, separable from design.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158131</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>From My Vantage Point: Exploring The Effect of First-Person and Third-Person Perspectives on Social Acceptance in VR Roleplaying Games</title>
<link>https://hdl.handle.net/1721.1/158130</link>
<description>From My Vantage Point: Exploring The Effect of First-Person and Third-Person Perspectives on Social Acceptance in VR Roleplaying Games
Yildirim, Caglar; Sengun, Sercan; Kucuk, Eyup; Akhoroz, Mehmet; Harrell, D. Fox
Virtual reality (VR) roleplaying games designed to promote perspective taking typically involve players assuming the perspective of others from different backgrounds and experiencing a simulated scenario from their everyday life, with the goal of facilitating and enhancing empathy and social acceptance toward marginalized groups. One key question pertains to the extent to which players’ perspective during VR roleplaying games affects their social acceptance of the other. To address this question, we examined the effect of first-person vs. third-person perspective on presence, co-presence, and social acceptance during a VR roleplaying game. Two groups of participants played the same VR roleplaying game from either a first-person perspective or a third-person perspective. Results showed that compared to third-person perspective, first-person perspective led to greater co-presence during the game and engendered higher levels of social acceptance toward the character whose role participants played. These results highlight the importance of using first-person perspective in VR roleplaying games focusing on facilitating and enhancing social acceptance.
MUM ’24, December 01–04, 2024, Stockholm, Sweden
</description>
<pubDate>Sun, 01 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158130</guid>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks</title>
<link>https://hdl.handle.net/1721.1/158129</link>
<description>Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks
Dai, Yan; Huang, Longbo
Stochastic Network Optimization (SNO) concerns scheduling in stochastic queueing systems and has been widely studied in network theory. Classical SNO algorithms require network conditions to be stationary w.r.t. time, which fails to capture the non-stationary components in increasingly many real-world scenarios. Moreover, most existing algorithms in network optimization assume perfect knowledge of network conditions before decision-making, which again rules out applications where unpredictability in network conditions is present.
Motivated by these issues, this paper considers Adversarial Network Optimization (ANO) under bandit feedback. Specifically, we consider the task of i) maximizing some unknown and time-varying utility function associated with the scheduler's actions, where ii) the underlying network topology is a non-stationary multi-hop network whose conditions change arbitrarily with time, and iii) only bandit feedback (the effect of actually deployed actions) is revealed after decision-making. We propose the UMO2 algorithm, which does not require any pre-decision knowledge or counterfactual feedback, ensures network stability, and also matches the utility maximization performance of any "mildly varying" reference policy up to a polynomially decaying gap. To our knowledge, no previous algorithm can handle multi-hop networks or achieve utility maximization guarantees in ANO problems with bandit feedback, whereas ours is able to do both.
Technically, our method builds upon a novel integration of online learning techniques into the Lyapunov drift-plus-penalty method. Specifically, we propose meticulous analytical techniques to jointly balance online learning and Lyapunov arguments, which is used to handle the complex inter-dependency among queues in multi-hop networks. To tackle the learning obstacles due to potentially unbounded queue sizes and negative queue differences, we design a new online linear optimization algorithm that automatically adapts to the unknown (potentially negative) loss magnitudes. Finally, we also propose a bandit convex optimization algorithm with novel queue-dependent learning rate scheduling that suits drastically varying queue lengths in utility maximization. Our new insights and techniques in online learning can also be of independent interest.
</description>
<pubDate>Fri, 13 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158129</guid>
<dc:date>2024-12-13T00:00:00Z</dc:date>
</item>
<item>
<title>Sketching With Your Voice: "Non-Phonorealistic" Rendering of Sounds via Vocal Imitation</title>
<link>https://hdl.handle.net/1721.1/158128</link>
<description>Sketching With Your Voice: "Non-Phonorealistic" Rendering of Sounds via Vocal Imitation
Caren, Matthew; Chandra, Kartik; Tenenbaum, Joshua; Ragan-Kelley, Jonathan; Ma, Karima
We present a method for automatically producing human-like vocal imitations of sounds: the equivalent of “sketching,” but for auditory rather than visual representation. Starting with a simulated model of the human vocal tract, we first try generating vocal imitations by tuning the model’s control parameters to make the synthesized vocalization match the target sound in terms of perceptually-salient auditory features. Then, to better match human intuitions, we apply a cognitive theory of communication to take into account how human speakers reason strategically about their listeners. Finally, we show through several experiments and user studies that when we add this type of communicative reasoning to our method, it aligns with human intuitions better than matching auditory features alone does. This observation has broad implications for the study of depiction in computer graphics.
SA Conference Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158128</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Manifold Sampling for Differentiable Uncertainty in Radiance Fields</title>
<link>https://hdl.handle.net/1721.1/158127</link>
<description>Manifold Sampling for Differentiable Uncertainty in Radiance Fields
Lyu, Linjie; Tewari, Ayush; Habermann, Marc; Saito, Shunsuke; Zollhöfer, Michael; Leimkühler, Thomas; Theobalt, Christian
SA Conference Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158127</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Markov-Chain Monte Carlo Sampling of Visibility Boundaries for Differentiable Rendering</title>
<link>https://hdl.handle.net/1721.1/158126</link>
<description>Markov-Chain Monte Carlo Sampling of Visibility Boundaries for Differentiable Rendering
Xu, Peiyu; Bangaru, Sai; Li, Tzu-Mao; Zhao, Shuang
Physics-based differentiable rendering requires estimating boundary path integrals emerging from the shift of discontinuities (e.g., visibility boundaries). Previously, although the mathematical formulation of boundary path integrals has been established, efficient and robust estimation of these integrals has remained challenging. Specifically, state-of-the-art boundary sampling methods all rely on primary-sample-space guiding precomputed using sophisticated data structures—whose performance tends to degrade for finely tessellated geometries.&#13;
In this paper, we address this problem by introducing a new Markov-Chain-Monte-Carlo (MCMC) method. At the core of our technique is a local perturbation step capable of efficiently exploring highly fragmented primary sample spaces via specifically designed jumping rules. We compare the performance of our technique with several state-of-the-art baselines using synthetic differentiable-rendering and inverse-rendering experiments.
SA Conference Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158126</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>An abundant bacterial phylum with nitrite-oxidizing potential in oligotrophic marine sediments</title>
<link>https://hdl.handle.net/1721.1/158125</link>
<description>An abundant bacterial phylum with nitrite-oxidizing potential in oligotrophic marine sediments
Zhao, Rui; Jørgensen, Steffen L; Babbin, Andrew R
Nitrite-oxidizing bacteria (NOB) are important nitrifiers whose activity regulates the availability of nitrite and dictates the magnitude of nitrogen loss in ecosystems. In oxic marine sediments, ammonia-oxidizing archaea (AOA) and NOB together catalyze the oxidation of ammonium to nitrate, but the abundance ratios of AOA to canonical NOB in some cores are significantly higher than the theoretical ratio range predicted from physiological traits of AOA and NOB characterized under realistic ocean conditions, indicating that some NOB are yet to be discovered. Here we report a bacterial phylum Candidatus Nitrosediminicolota, members of which are more abundant than canonical NOB and are widespread across global oligotrophic sediments. Ca. Nitrosediminicolota members have the functional potential to oxidize nitrite, in addition to other accessory functions such as urea hydrolysis and thiosulfate reduction. While one recovered species (Ca. Nitrosediminicola aerophilus) is generally confined within the oxic zone, another (Ca. Nitrosediminicola anaerotolerans) additionally appears in anoxic sediments. Counting Ca. Nitrosediminicolota as a nitrite-oxidizer helps to resolve the apparent abundance imbalance between AOA and NOB in oxic marine sediments, and thus its activity may exert controls on the nitrite budget.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158125</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study</title>
<link>https://hdl.handle.net/1721.1/158124</link>
<description>Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study
Shen, Jocelyn; DiPaola, Daniella; Ali, Safinah; Sap, Maarten; Park, Hae Won; Breazeal, Cynthia
Background:&#13;
Empathy is a driving force in our connection to others, our mental well-being, and resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions.&#13;
&#13;
Objective:&#13;
We aim to understand how empathy shifts across human-written versus AI-written stories, and how these findings inform ethical implications and human-centered design of using mental health chatbots as objects of empathy.&#13;
&#13;
Methods:&#13;
We conducted crowd-sourced studies with 985 participants who each wrote a personal story and then rated empathy toward 2 retrieved stories, where one was written by a language model, and another was written by a human. Our studies varied disclosing whether a story was written by a human or an AI system to see how transparent author information affects empathy toward the narrator. We conducted mixed methods analyses: through statistical tests, we compared user’s self-reported state empathy toward the stories across different conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers.&#13;
&#13;
Results:&#13;
We found that participants empathized significantly more with human-written than AI-written stories in almost all conditions, regardless of whether they were aware (t196=7.07, P&lt;.001, Cohen d=0.60) or not aware (t298=3.46, P&lt;.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when there was transparency about the story author (t494=–5.49, P&lt;.001, Cohen d=0.36).&#13;
&#13;
Conclusions:&#13;
Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations of empathetic artificial social support or mental health chatbots.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158124</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Echocardiogram Vector Embeddings Via R3D Transformer for the Advancement of Automated Echocardiography</title>
<link>https://hdl.handle.net/1721.1/158123</link>
<description>Echocardiogram Vector Embeddings Via R3D Transformer for the Advancement of Automated Echocardiography
Chung, Daniel J; Lee, Somin Mindy; Kaker, Vasu; Zhao, Yongyi; Bin, Irbaz; Perera, Sudheesha; Sasankan, Prabhu; Tang, George; Kazzi, Brigitte; Kuo, Po-Chih; Celi, Leo A; Kpodonu, Jacques
BACKGROUND: Ejection fraction (EF) estimation informs patient plans in the ICU, and low EF can indicate ventricular systolic dysfunction, which increases the risk of adverse events including heart failure. Automated echocardiography models are an attractive solution for high-variance human EF estimation, and key to this goal are echocardiogram vector embeddings, which are a critical resource for computational researchers. OBJECTIVES: The authors aimed to extract the vector embeddings from each echocardiogram in the EchoNet dataset using a classifier trained to classify EF as healthy (&gt;50%) or unhealthy (&lt;= 50%) to create an embeddings dataset for computational researchers. METHODS: We repurposed an R3D transformer to classify whether patient EF is below or above 50%. Training, validation, and testing were done on the EchoNet dataset of 10,030 echocardiograms, and the resulting model generated embeddings for each of these videos. RESULTS: We extracted 400-dimensional vector embeddings for each of the 10,030 EchoNet echocardiograms using the trained R3D model, which achieved a test AUC of 0.916 and 87.5% accuracy, approaching the performance of comparable studies. CONCLUSIONS: We present 10,030 vector embeddings learned by this model as a resource to the cardiology research community, as well as the trained model itself. These vectors enable algorithmic improvements and multimodal applications within automated echocardiography, benefitting the research community and those with ventricular systolic dysfunction (https://github.com/Team-Echo-MIT/r3d-v0-embeddings).
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158123</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Can we achieve atmospheric chemical environments in the laboratory? An integrated model-measurement approach to chamber SOA studies</title>
<link>https://hdl.handle.net/1721.1/158122</link>
<description>Can we achieve atmospheric chemical environments in the laboratory? An integrated model-measurement approach to chamber SOA studies
Kenagy, Hannah S; Heald, Colette L; Tahsini, Nadia; Goss, Matthew B; Kroll, Jesse H
Secondary organic aerosol (SOA), atmospheric particulate matter formed from low-volatility products of volatile organic compound (VOC) oxidation, affects both air quality and climate. Current 3D models, however, cannot reproduce the observed variability in atmospheric organic aerosol. Because many SOA model descriptions are derived from environmental chamber experiments, our ability to represent atmospheric conditions in chambers directly affects our ability to assess the air quality and climate impacts of SOA. Here, we develop an approach that leverages global modeling and detailed mechanisms to design chamber experiments that mimic the atmospheric chemistry of organic peroxy radicals (RO2), a key intermediate in VOC oxidation. Drawing on decades of laboratory experiments, we develop a framework for quantitatively describing RO2 chemistry and show that no previous experimental approaches to studying SOA formation have accessed the relevant atmospheric RO2 fate distribution. We show proof-of-concept experiments that demonstrate how SOA experiments can access a range of atmospheric chemical environments and propose several directions for future studies.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158122</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Crack densification in drying colloidal suspensions</title>
<link>https://hdl.handle.net/1721.1/158098</link>
<description>Crack densification in drying colloidal suspensions
Lilin, Paul; Ibrahim, Mario; Bischofberger, Irmgard
As sessile drops of aqueous colloidal suspensions dry, a close-packed particle deposit forms that grows from the edge of the drop toward the center. To compensate for evaporation over the solid’s surface, water flows radially through the deposit, generating a negative pore pressure in the deposit associated with tensile drying stresses that induce the formation of cracks. As these stresses increase during drying, existing cracks propagate and additional cracks form, until the crack density eventually saturates. We rationalize the dynamics of crack propagation and crack densification with a local energy balance between the elastic energy released by the crack, the energetic cost of fracture, and the elastic energy released by previously formed cracks. We show that the final spacing between radial cracks is proportional to the local thickness of the deposit, while the aspect ratio of the crack segments depends on the shape of the deposit.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158098</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Sequence‐Sensitivity in Functional Synthetic Polymer Properties</title>
<link>https://hdl.handle.net/1721.1/158097</link>
<description>Sequence‐Sensitivity in Functional Synthetic Polymer Properties
Jin, Tianyi; Coley, Connor W; Alexander‐Katz, Alfredo
Recently, a new class of synthetic methyl methacrylate‐based random heteropolymers (MMA‐based RHPs) has displayed protein‐like properties. Their function appears to be insensitive to the precise sequence. Here, through atomistic molecular dynamics simulation, we show that there are universal protein‐like features of MMA‐based RHPs that are insensitive to the sequence, and mostly depend on the overall composition. In particular, we find that MMA‐based RHPs “fold” into globules with heterogeneous hydration patterns. However, the insensitivity to sequence identity observed in MMA‐based RHPs dramatically changes when we substitute the backbone architecture with acrylate or replace the oxygen atom in the side chain with a nitrogen atom (methacrylamide or acrylamide). In such scenarios, the sequence contributes significantly to the compactness and the hydration of monomers. Using principal component analysis and an intersection‐over‐union based index, we demonstrate that different sequences may not overlap in the property space, meaning that their properties are controlled by the sequence rather than fixed composition. We further investigate the sequence‐insensitive capability of the MMA‐based RHPs as previously reported on bacterial phospholipase OmpLA stabilization through heterodimerization. As experimentally observed, such polymers enhance the stability of OmpLA as reliably as its native bilayer environment. The design of such MMA‐based RHPs provides a sequence‐insensitive alternative to protein‐mimetic biomaterials that is orthogonal to the sequence‐structure‐function paradigm of proteins.
</description>
<pubDate>Fri, 10 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158097</guid>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid prediction of conformationally-dependent DFT-level descriptors using graph neural networks for carboxylic acids and alkyl amines</title>
<link>https://hdl.handle.net/1721.1/158096</link>
<description>Rapid prediction of conformationally-dependent DFT-level descriptors using graph neural networks for carboxylic acids and alkyl amines
Haas, Brittany C; Hardy, Melissa A; Sowndarya S. V., Shree; Adams, Keir; Coley, Connor W; Paton, Robert S; Sigman, Matthew S
Data-driven reaction discovery and development is a growing field that relies on the use of molecular descriptors to capture key information about substrates, ligands, and targets. Broad adaptation of this strategy is hindered by the associated computational cost of descriptor calculation, especially when considering conformational flexibility. Descriptor libraries can be precomputed agnostic of application to reduce the computational burden of data-driven reaction development. However, as one often applies these models to evaluate novel hypothetical structures, it would be ideal to predict the descriptors of compounds on-the-fly. Herein, we report DFT-level descriptor libraries for conformational ensembles of 8528 carboxylic acids and 8172 alkyl amines towards this goal. Employing 2D and 3D graph neural network architectures trained on these libraries culminated in the development of predictive models for molecule-level descriptors, as well as the bond- and atom-level descriptors for the conserved reactive site (carboxylic acid or amine). The predictions were confirmed to be robust for an external validation set of medicinally-relevant carboxylic acids and alkyl amines. Additionally, a retrospective study correlating the rate of amide coupling reactions demonstrated the suitability of the predicted DFT-level descriptors for downstream applications. Ultimately, these models enable high-fidelity predictions for a vast number of potential substrates, greatly increasing accessibility to the field of data-driven reaction development.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158096</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>A Synthesis Algorithm for Modular Design of Pipelined Circuits</title>
<link>https://hdl.handle.net/1721.1/158095</link>
<description>A Synthesis Algorithm for Modular Design of Pipelined Circuits
Marinescu, Maria-Cristina; Rinard, Martin
</description>
<pubDate>Sat, 01 Jan 2000 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158095</guid>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Schedule optimization for chemical library synthesis</title>
<link>https://hdl.handle.net/1721.1/158094</link>
<description>Schedule optimization for chemical library synthesis
Ai, Qianxiang; Meng, Fanwang; Wang, Runzhong; Klein, J Cullen; Godfrey, Alexander G; Coley, Connor W
Automated chemistry platforms hold the potential to enable large-scale organic synthesis campaigns, such as producing a library of compounds for biological evaluation. The efficiency of such platforms will depend on the schedule according to which the synthesis operations are executed. In this work, we study the scheduling problem for chemical library synthesis, where operations from interdependent synthetic routes are scheduled to minimize the makespan—the total duration of the synthesis campaign. We formalize this problem as a flexible job-shop scheduling problem with chemistry-relevant constraints in the form of a mixed integer linear program (MILP), which we then solve in order to design an optimized schedule. The scheduler's ability to produce valid, optimal schedules is demonstrated by 720 simulated scheduling instances for realistically accessible chemical libraries. Reductions in makespan up to 58%, with an average reduction of 20%, are observed compared to the baseline scheduling approach.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158094</guid>
</item>
<item>
<title>High-level specification and efficient implementation of pipelined circuits</title>
<link>https://hdl.handle.net/1721.1/158093</link>
<description>High-level specification and efficient implementation of pipelined circuits
Marinescu, M-C; Rinard, M
© 2001 IEEE. This paper describes a novel approach to high-level synthesis of complex pipelined circuits, including pipelined circuits with feedback. This approach combines a high-level, modular specification language with an efficient implementation. In our system, the designer specifies the circuit as a set of independent modules connected by conceptually unbounded queues. Our synthesis algorithm automatically transforms this modular, asynchronous specification into a tightly coupled, fully synchronous implementation in synthesizable Verilog.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158093</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-level synthesis of pipelined circuits from modular queue-based specifications</title>
<link>https://hdl.handle.net/1721.1/158092</link>
<description>High-level synthesis of pipelined circuits from modular queue-based specifications
Marinescu, MC; Rinard, M
This paper describes a novel approach to high-level synthesis of complex pipelined circuits, including pipelined circuits with feedback. This approach combines a high-level, modular specification language with an efficient implementation. In our system, the designer specifies the circuit as a set of independent modules connected by conceptually unbounded queues. Our synthesis algorithm automatically transforms this modular, asynchronous specification into a tightly coupled, fully synchronous implementation in synthesizable Verilog.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158092</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Formal Framework for Modular Synchronous System Design</title>
<link>https://hdl.handle.net/1721.1/158091</link>
<description>A Formal Framework for Modular Synchronous System Design
Marinescu, Maria-Cristina V; Rinard, Martin C
We present the formal framework for a novel approach for specifying and automatically implementing systems such as digital circuits and network protocols. The goal is to reduce the design time and effort required to build correct, efficient, complex systems and to eliminate the need for the designer to deal directly with global synchronization and concurrency issues. Our compiler automatically transforms modular and asynchronous specifications of circuits written in our specification language, into tightly coupled, fully synchronous implementations in synthesizable Verilog. We formally state the correctness theorems and give an outline of the correctness proofs for two of the three main techniques that our compiler implements. © Springer-Verlag Berlin Heidelberg 2003.
</description>
<pubDate>Wed, 01 Jan 2003 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158091</guid>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>High-level automatic pipelining for sequential circuits</title>
<link>https://hdl.handle.net/1721.1/158090</link>
<description>High-level automatic pipelining for sequential circuits
Marinescu, Maria-Cristina V; Rinard, Martin
This paper presents a new approach for automatically pipelining sequential circuits. The approach repeatedly extracts a computation from the critical path, moves it into a new stage, then uses speculation to generate a stream of values that keep the pipeline full. The newly generated circuit retains enough state to recover from incorrect speculations by flushing the incorrect values from the pipeline, restoring the correct state, then restarting the computation. We also implement two extensions to this basic approach: stalling, which minimizes circuit area by eliminating speculation, and forwarding, which increases the throughput of the generated circuit by forwarding correct values to preceding pipeline stages. We have implemented a prototype synthesizer based on this approach. Our experimental results show that, starting with a non-pipelined or insufficiently pipelined specification, this synthesizer can effectively reduce the clock cycle time and improve the throughput of the generated circuit.
</description>
<pubDate>Mon, 01 Jan 2001 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158090</guid>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation</title>
<link>https://hdl.handle.net/1721.1/158089</link>
<description>Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation
Chao, Brian; Gopakumar, Manu; Choi, Suyeon; Kim, Jonghyun; Shi, Liang; Wetzstein, Gordon
Emerging holographic display technology offers unique capabilities for next-generation virtual reality systems. Current holographic near-eye displays, however, only support a small étendue, which results in a direct tradeoff between achievable field of view and eyebox size. Étendue expansion has recently been explored, but existing approaches are either fundamentally limited in the image quality that can be achieved or they require extremely high-speed spatial light modulators. We describe a new étendue expansion approach that combines multiple coherent sources with content-adaptive amplitude modulation of the hologram spectrum in the Fourier plane. To generate time-multiplexed phase and amplitude patterns for our spatial light modulators, we devise a pupil-aware gradient-descent-based computer-generated holography algorithm that is supervised by a large-baseline target light field. Compared with relevant baseline approaches, ours demonstrates significant improvements in image quality and étendue in simulation and with an experimental holographic display prototype.
SA Conference Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158089</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Equilibria, Efficiency, and Inequality in Network Formation for Hiring and Opportunity</title>
<link>https://hdl.handle.net/1721.1/158088</link>
<description>Equilibria, Efficiency, and Inequality in Network Formation for Hiring and Opportunity
Dwork, Cynthia; Hays, Chris; Kleinberg, Jon; Raghavan, Manish
Professional networks --- the social networks among people in a given line of work --- can serve as a conduit for job prospects and other opportunities. Here we propose a model for the formation of such networks and the transfer of opportunities within them. In our theoretical model, individuals strategically connect with others to maximize the probability that they receive opportunities from them. We explore how professional networks balance connectivity, where connections facilitate opportunity transfers to those who did not get them from outside sources, and congestion, where some individuals receive too many opportunities from their connections and waste some of them.&#13;
We show that strategic individuals are over-connected at equilibrium relative to a social optimum, leading to a price of anarchy for which we derive nearly tight asymptotic bounds. We also show that, at equilibrium, individuals form connections to those who provide similar benefit to them as they provide to others. Thus, our model provides a microfoundation in professional networking contexts for the fundamental sociological principle of homophily, that "similarity breeds connection" [McPherson et al., 2001], which in our setting is realized as a form of status homophily based on alignment in individual benefit. We further explore how, even if individuals are a priori equally likely to receive opportunities from outside sources, equilibria can be unequal, and we provide nearly tight bounds on how unequal they can be. Finally, we explore the ability for online platforms to intervene to improve social welfare and show that natural heuristics may result in adverse effects at equilibrium. Our simple model allows for a surprisingly rich analysis of coordination problems in professional networks and suggests many directions for further exploration.
EC ’24, July 8–11, 2024, New Haven, CT, USA
</description>
<pubDate>Mon, 08 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158088</guid>
<dc:date>2024-07-08T00:00:00Z</dc:date>
</item>
<item>
<title>Secure Sorting and Selection via Function Secret Sharing</title>
<link>https://hdl.handle.net/1721.1/158087</link>
<description>Secure Sorting and Selection via Function Secret Sharing
Agarwal, Amit; Boyle, Elette; Chandran, Nishanth; Gilboa, Niv; Gupta, Divya; Ishai, Yuval; Kelkar, Mahimna; Ma, Yiping
We revisit the problem of concretely efficient secure computation of sorting and selection (e.g., maximum, median, or top-k) on secret-shared data, focusing on the case of security against a single semi-honest party. Previous solutions either have a high communication overhead or many rounds of interaction, even when allowing input-independent preprocessing.&#13;
We propose a suite of 2-party and 3-party offline-online protocols that exploit the efficient aggregation feature of function secret sharing to minimize the online communication and rounds. In particular, most of our protocols are optimal in terms of both online communication and online rounds up to small constant factors.&#13;
We compare the performance of our protocols with prior works for different input parameters (number of items, bit length of items, batch size) and system parameters (CPU cores, network) and obtain up to 14x improvement in online run time for sorting and selection under some settings.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158087</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Manipulative Interference Attacks</title>
<link>https://hdl.handle.net/1721.1/158086</link>
<description>Manipulative Interference Attacks
Mergendahl, Samuel; Fickas, Stephen; Norris, Boyana; Skowyra, Richard
A μ-kernel is an operating system (OS) paradigm that facilitates a strong cybersecurity posture for embedded systems. Unlike a monolithic OS such as Linux, a μ-kernel reduces overall system privilege by deploying most OS functionality within isolated, userspace protection domains. Moreover, a μ-kernel ensures confidentiality and integrity between protection domains (i.e., spatial isolation), and offers timing predictability for real-time tasks in mixed-criticality systems (i.e., temporal isolation). One popular μ-kernel is seL4 which offers extensive formal guarantees of implementation correctness and flexible temporal budgeting mechanisms.&#13;
However, we show that an untrusted protection domain on a μ-kernel can abuse service requests to other protection domains in order to corrode system availability. We generalize this denial-of-service (DoS) attack strategy as Manipulative Interference Attacks (MIAs) and introduce techniques to efficiently identify instances of MIAs within a configured system. Specifically, we propose a novel hybrid approach that first leverages static analysis to identify software components with influenceable execution times, and second, uses an automatically generated model-based analysis to determine which compromised protection domains can manipulate the influenceable components and trigger MIAs. We investigate the risk of MIAs in several representative system examples including the seL4 Microkit, as well as a case study of seL4 software artifacts from the DARPA Cyber Assured Systems Engineering (CASE) program. In particular, we demonstrate that our analysis is efficient enough to discover practical instances of MIAs in real-world systems.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158086</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Graphical vs. Deep Generative Models: Measuring the Impact of Differentially Private Mechanisms and Budgets on Utility</title>
<link>https://hdl.handle.net/1721.1/158085</link>
<description>Graphical vs. Deep Generative Models: Measuring the Impact of Differentially Private Mechanisms and Budgets on Utility
Ganev, Georgi; Xu, Kai; De Cristofaro, Emiliano
Generative models trained with Differential Privacy (DP) can produce synthetic data while reducing privacy risks. However, navigating their privacy-utility tradeoffs makes finding the best models for specific settings/tasks challenging. This paper bridges this gap by profiling how DP generative models for tabular data distribute privacy budgets across rows and columns, which is one of the primary sources of utility degradation. We compare graphical and deep generative models, focusing on the key factors contributing to how privacy budgets are spent, i.e., underlying modeling techniques, DP mechanisms, and data dimensionality.&#13;
Through our measurement study, we shed light on the characteristics that make different models suitable for various settings and tasks. For instance, we find that graphical models distribute privacy budgets horizontally and thus cannot handle relatively wide datasets for a fixed training time; also, the performance on the task they were optimized for monotonically increases with more data but could also overfit. Deep generative models spend their budgets per iteration, so their behavior is less predictable with varying dataset dimensions, but are more flexible as they could perform better if trained on more features. Moreover, low levels of privacy (ε≥100) could help some models generalize, achieving better results than without applying DP. We believe our work will aid the deployment of DP synthetic data techniques by navigating through the best candidate models vis-à-vis the dataset features, desired privacy levels, and downstream tasks.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA.
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158085</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Specification and Verification of Strong Timing Isolation of Hardware Enclaves</title>
<link>https://hdl.handle.net/1721.1/158084</link>
<description>Specification and Verification of Strong Timing Isolation of Hardware Enclaves
Lau, Stella; Bourgeat, Thomas; Pit-Claudel, Clément; Chlipala, Adam
The process isolation enforceable by commodity hardware and operating systems is too weak to protect secrets from malicious code running on the same machine: attacks exploit timing side channels derived from contention on shared microarchitectural resources to extract secrets. With appropriate hardware support, however, we can construct isolated enclaves and safeguard independent processes from interference through timing side channels, a step towards confidentiality and integrity guarantees.&#13;
In this paper, we describe our work on formally specifying and verifying that a synthesizable hardware architecture implements strong timing isolation for enclaves. We reason about the cycle-accurate semantics of circuits with respect to a trustworthy formulation of strong isolation based on "air-gapped machines" and develop a modular proof strategy that sidesteps the need to prove functional correctness of processors. We apply our method on a synthesizable, multicore, pipelined RISC-V design formalized in Coq.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA.
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158084</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Exploiting Temporal Vulnerabilities for Unauthorized Access in Intent-based Networking</title>
<link>https://hdl.handle.net/1721.1/158083</link>
<description>Exploiting Temporal Vulnerabilities for Unauthorized Access in Intent-based Networking
Weintraub, Ben; Kim, Jiwon; Tao, Ran; Nita-Rotaru, Cristina; Okhravi, Hamed; Tian, Dave (Jing); Ujcich, Benjamin
Intent-based networking (IBN) enables network administrators to express high-level goals and network policies without needing to specify low-level forwarding configurations, topologies, or protocols. Administrators can define intents that capture the overall behavior they want from the network, and an IBN controller compiles such intents into low-level configurations that get installed in the network and implement the desired behavior.&#13;
We discovered that current IBN specifications and implementations do not specify that flow rule installation orderings should be enforced, which leads to temporal vulnerabilities where, for a limited time, attackers can exploit indeterminate connectivity behavior to gain unauthorized network access.&#13;
In this paper, we analyze the causes of such temporal vulnerabilities and their security impacts with a representative case study via the ONOS IBN implementation. We devise the Phantom Link attack and demonstrate a working exploit to highlight the security impacts. To defend against such attacks, we propose Spotlight, a detection method that can alert a system administrator of risky intent updates prone to exploitable temporal vulnerabilities. Spotlight is effective in identifying risky updates using realistic network topologies and policies. We show that Spotlight can detect risky updates in a mean time of 0.65 seconds for topologies of over 1,300 nodes.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158083</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>High-Throughput Three-Party DPFs with Applications to ORAM and Digital Currencies</title>
<link>https://hdl.handle.net/1721.1/158082</link>
<description>High-Throughput Three-Party DPFs with Applications to ORAM and Digital Currencies
Zyskind, Guy; Yanai, Avishay; Pentland, Alex
specific and general secure computation. While two-party DPF constructions are readily available for those applications with satisfactory performance, three-party constructions lag behind in both security and efficiency. In this paper we close this gap and propose the first three-party DPF construction that matches the state-of-the-art two-party DPF on all metrics. Namely, it is secure against a malicious adversary corrupting both the dealer and one out of the three evaluators, its function shares are of the same size, and evaluation takes the same time as in the best two-party DPF. Compared to the state-of-the-art three-party DPF, our construction enjoys a 40–120× smaller function share size and shorter evaluation time, for function domains of sizes 2^16–2^40, respectively.&#13;
Apart from DPFs as a stand-alone tool, our construction finds immediate applications to private information retrieval (PIR), writing (PIW) and oblivious RAM (ORAM). To further showcase its applicability, we design and implement an ORAM with access policy, an extension to ORAMs in which a policy is checked before accessing the underlying database. The policy we plug in is one suited to account-based digital currencies, and in particular to central bank digital currencies (CBDCs). Our protocol offers the first design and implementation of a large-scale privacy-preserving account-based digital currency. While previous works supported anonymity sets of 64–256 clients and fewer than 10 transactions per second (tps), our protocol supports anonymity sets in the millions, performing {500, 200, 58} tps for anonymity sets of {2^16, 2^18, 2^20}, respectively.&#13;
Toward that application, we introduce a new primitive called updatable DPF, which enables a direct computation of a dot product between a DPF and a vector; we believe that updatable DPF and the new dot-product protocol will find interest in other applications.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158082</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Formal Privacy Proof of Data Encoding: The Possibility and Impossibility of Learnable Encryption</title>
<link>https://hdl.handle.net/1721.1/158081</link>
<description>Formal Privacy Proof of Data Encoding: The Possibility and Impossibility of Learnable Encryption
Xiao, Hanshen; Suh, G. Edward; Devadas, Srinivas
We initiate a formal study on the concept of learnable obfuscation and aim to answer the following question: is there a type of data encoding that maintains the "learnability" of encoded samples, thereby enabling direct model training on transformed data, while ensuring the privacy of both plaintext and the secret encoding function? This long-standing open problem has prompted many efforts to design such an encryption function, for example, NeuraCrypt and TransNet. Nonetheless, all existing constructions are heuristic without formal privacy guarantees, and many successful reconstruction attacks are known on these constructions assuming an adversary with substantial prior knowledge.&#13;
We present both generic possibility and impossibility results pertaining to learnable obfuscation. On one hand, we demonstrate that any non-trivial, property-preserving transformation which enables effectively learning over encoded samples cannot offer cryptographic computational security in the worst case. On the other hand, from the lens of information-theoretical security, we devise a series of new tools to produce provable and useful privacy guarantees from a set of heuristic obfuscation methods, including matrix masking, data mixing and permutation, through noise perturbation. Under the framework of PAC Privacy, we show how to quantify the leakage from the learnable obfuscation built upon obfuscation and perturbation methods against adversarial inference. Significantly sharpened utility-privacy tradeoffs are achieved compared to state-of-the-art accounting methods when measuring privacy against data reconstruction and membership inference attacks.
CCS ’24, October 14–18, 2024, Salt Lake City, UT, USA
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158081</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>"Data comes from the real world": A Constructionist Approach to Mainstreaming K12 Data Science Education</title>
<link>https://hdl.handle.net/1721.1/158080</link>
<description>"Data comes from the real world": A Constructionist Approach to Mainstreaming K12 Data Science Education
Ravi, Prerna; Parks, Robert; Masla, John; Abelson, Harold; Breazeal, Cynthia
Data science is emerging as a crucial 21st-century competence, influencing professional practices from citing evidence when advocating for social change to developing artificial intelligence (AI) models. For middle and high school students, data science can put formerly decontextualized subjects into real-world scenarios. Many existing curricula, however, lack authenticity and personal relevance for students. A critique of data science courseware cites the lack of "author proximity," in which students do not contribute to the data's production or see their personal experiences reflected in the data. This paper introduces a novel data science curriculum to scaffold middle and high school students in undertaking real-world data science practices. Through project-based learning modules, the curriculum engages students in investigating solutions to community-based problems through visualization and analysis of live sensor data and public data sets. Materials include formative assessments to help educators (especially those from non-math and computing backgrounds) measure their students' abilities to identify statistical patterns, critically evaluate data biases, and make predictions. As we pilot and co-design with teachers, we will look closely at whether the curriculum's resources can successfully support non-technical practitioners engaging in an integrated curriculum.
SIGCSE Virtual 2024, December 5–8, 2024, Virtual Event, NC, USA
</description>
<pubDate>Thu, 05 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158080</guid>
<dc:date>2024-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>AI Mastery May Not Be For Everyone, But AI Literacy Should Be</title>
<link>https://hdl.handle.net/1721.1/158079</link>
<description>AI Mastery May Not Be For Everyone, But AI Literacy Should Be
Hollands, Fiona; DiPaola, Daniella; Breazeal, Cynthia; Ali, Safinah
Despite the abundance of advice from policy bodies, professional associations, advocacy groups, and scholars on how K-12 schools should assimilate AI and provide AI education, practical plans are lacking from K-12 education leaders themselves. Education leaders must make strategic decisions about how to prepare teachers and students for an AI-infused future. Simultaneously, educators need immediate support and guidance on how to manage the arrival of tools that render some existing educational practices obsolete and prompt the need to teach new skills and awareness. In the near term, it may be unrealistic to expect all students to master the ability to develop AI applications; universal AI literacy is a more feasible goal. We introduce a set of short-format, modular AI literacy courses and report how they were implemented and how they affected teachers' and students' knowledge and perceptions of AI. Using an online questionnaire, we collected data from 265 individuals worldwide who accessed the courses, including 190 teachers who implemented them with over 11,800 students. We conducted 17 teacher interviews to gather feedback and to better understand how courses were adapted for local contexts. Teachers reported an increase in their own and their students' knowledge of AI concepts, as well as increased optimism about the potential benefits of AI to society and their ability to influence the future of AI. Key takeaways are that AI literacy instruction should be designed for adaptability to local contexts and cultures, and that steps should be taken to institutionalize the integration of AI literacy into the regular school curriculum.
SIGCSE Virtual 2024, December 5–8, 2024, Virtual Event, NC, USA
</description>
<pubDate>Thu, 05 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158079</guid>
<dc:date>2024-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Automated and Blind Detection of Low Probability of Intercept RF Anomaly Signals</title>
<link>https://hdl.handle.net/1721.1/158078</link>
<description>Automated and Blind Detection of Low Probability of Intercept RF Anomaly Signals
Gusain, Kunal; Hassan, Zoheb; Couto, David; Malek, Mai Abdel; Shah, Vijay K.; Zheng, Lizhong; Reed, Jeffrey H.
Automated spectrum monitoring necessitates the accurate detection of low probability of intercept (LPI) radio frequency (RF) anomaly signals to identify unwanted interference in wireless networks. However, detecting these unforeseen low-power RF signals is fundamentally challenging due to the scarcity of labeled RF anomaly data. In this paper, we introduce WANDA (Wireless ANomaly Detection Algorithm), an automated framework designed to detect LPI RF anomaly signals in low signal-to-interference ratio (SIR) environments without relying on labeled data. WANDA operates through a two-step process: (i) Information extraction, where a convolutional neural network (CNN) utilizing soft Hirschfeld-Gebelein-Rényi correlation (HGR) as the loss function extracts informative features from RF spectrograms; and (ii) Anomaly detection, where the extracted features are applied to a one-class support vector machine (SVM) classifier to infer RF anomalies. To validate the effectiveness of WANDA, we present a case study focused on detecting unknown Bluetooth signals within the WiFi spectrum using a practical dataset. Experimental results demonstrate that WANDA outperforms other methods in detecting anomaly signals across a range of SIR values (-10 dB to 20 dB).
ACM MobiCom ’24, November 18–22, 2024, Washington D.C., DC, USA
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158078</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Around the Corner mmWave Imaging in Practical Environments</title>
<link>https://hdl.handle.net/1721.1/158077</link>
<description>Around the Corner mmWave Imaging in Practical Environments
Dodds, Laura; Shanbhag, Hailan; Guan, Junfeng; Gupta, Saurabh; Hassanieh, Haitham
We present the design, implementation, and evaluation of RFlect, a mmWave imaging system capable of producing around-the-corner high-resolution images in practical environments. RFlect leverages signals reflected off complex surfaces (e.g., poles, concave surfaces, or compositions of multiple surfaces) to image objects that are not in the RF line-of-sight. RFlect models the reflections and introduces reconstruction algorithms for different types of surfaces. It also leverages a novel method for precisely mapping the location and geometry of the reflecting surface. We also derive the theoretical resolution and coverage for different reflecting surface geometries. We built a prototype of RFlect and performed extensive evaluations to demonstrate its ability to reconstruct the shape of objects around the corner, with an average Chamfer Distance of 2 cm and 3D F-Score of 88.6%.
ACM MobiCom ’24, November 18–22, 2024, Washington D.C., DC, USA
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158077</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Memory Checking Requires Logarithmic Overhead</title>
<link>https://hdl.handle.net/1721.1/158076</link>
<description>Memory Checking Requires Logarithmic Overhead
Boyle, Elette; Komargodski, Ilan; Vafa, Neekon
We study the complexity of memory checkers with computational security and prove the first general tight lower bound. Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmell, Kannan, and Naor (FOCS '91, Algorithmica '94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measures of interest are the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity $O(\log n/\log\log n)$ and local space proportional to a computational security parameter, assuming one-way functions, where $n$ is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC '09) showed that for a restricted class of ``deterministic and non-adaptive'' memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem. In this work, we fully resolve the complexity of memory checkers by showing that \emph{any} construction with local space $p$ and query complexity $q$ must satisfy $$ p \ge \frac{n}{(\log n)^{O(q)}} \;. $$ This implies, as a special case, that $q\ge \Omega(\log n/\log\log n)$ in any scheme, assuming that $p\le n^{1-\varepsilon}$ for $\varepsilon&gt;0$. The bound applies to any scheme with computational security, completeness $2/3$, and inverse-polynomial-in-$n$ soundness (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity $q_r$ and write complexity $q_w$ differ.
For instance, we show the tight bound that if $q_r=O(1)$ and $p\le n^{1-\varepsilon}$ for $\varepsilon&gt;0$, then $q_w\ge n^{\Omega(1)}$. This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity trade-off. Our proof is via a delicate compression argument showing that a ``too good to be true'' memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between the settings.
</description>
<pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158076</guid>
<dc:date>2024-12-17T00:00:00Z</dc:date>
</item>
<item>
<title>Principles and Guidelines for Evaluating Social Robot Navigation Algorithms</title>
<link>https://hdl.handle.net/1721.1/158075</link>
<description>Principles and Guidelines for Evaluating Social Robot Navigation Algorithms
Francis, Anthony; Pérez-D'Arpino, Claudia; Li, Chengshu; Xia, Fei; Alahi, Alexandre; Alami, Rachid; Bera, Aniket; Biswas, Abhijat; Biswas, Joydeep; Chandra, Rohan; Chiang, Hao-Tien; Everett, Michael; Ha, Sehoon; Hart, Justin; How, Jonathan; Karnan, Haresh; Lee, Tsang-Wei; Manso, Luis; Mirsky, Reuth; Pirk, Sören
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains hard because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing, and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions and illuminating promising new directions. We believe the same approach can benefit social navigation. In this paper, we pave the way toward common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots and datasets.
</description>
<pubDate>Fri, 27 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158075</guid>
<dc:date>2024-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>Snooping Underwater Communications via Low-Cost mmWave Radars</title>
<link>https://hdl.handle.net/1721.1/158074</link>
<description>Snooping Underwater Communications via Low-Cost mmWave Radars
Mollahosseini, Poorya; Afzal, Sayed Saad; Adib, Fadel; Ghasempour, Yasaman
This study examines how an airborne device can intercept underwater acoustic signals exchanged between submerged nodes. It challenges the conventional belief that acoustic communications under the water are safe against eavesdropping since acoustics do not cross the water-air boundary. We show that an airborne mmWave radar can detect and decode underwater acoustic signals by picking up minute surface vibrations induced by these signals. The proof-of-concept was tested in controlled (pool) and uncontrolled (lake) environments, proving that an airborne adversary can identify modulation type, bitrate, and decode symbols from an uncooperative underwater transmitter using its radar sensing capabilities. We demonstrate that the secrecy of underwater links depends on modulation type, providing insights into countermeasures to enhance the security of underwater acoustic communications.
ACM MobiCom ’24, November 18–22, 2024, Washington D.C., DC, USA
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158074</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Effects of maze appearance on maze solving</title>
<link>https://hdl.handle.net/1721.1/158073</link>
<description>Effects of maze appearance on maze solving
Semizer, Yelda; Yu, Dian; Wan, Qianqian; Balas, Benjamin; Rosenholtz, Ruth
As mazes are typically complex, cluttered stimuli, solving them is likely limited by visual crowding. Thus, several aspects of the appearance of the maze – the thickness, spacing, and curvature of the paths, as well as the texture of both paths and walls – likely influence the performance. In the current study, we investigate the effects of perceptual aspects of maze design on maze-solving performance to understand the role of crowding and visual complexity. We conducted two experiments using a set of controlled stimuli to examine the effects of path and wall thickness, as well as the style of rendering used for both paths and walls. Experiment 1 finds that maze-solving time increases with thicker paths (thus thinner walls). Experiment 2 replicates this finding while also showing that maze-solving time increases when mazes have wavy walls, which are likely more crowded, rather than straight walls. Our findings imply a role of both crowding and figure/ground segmentation in mental maze solving and suggest reformulating the growth cone models.
</description>
<pubDate>Fri, 10 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158073</guid>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control</title>
<link>https://hdl.handle.net/1721.1/158072</link>
<description>GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control
Zhang, Songyuan; So, Oswin; Garg, Kunal; Fan, Chuchu
Distributed, scalable, and safe control of large-scale multi-agent systems (MAS) is a challenging problem. In this paper, we design a distributed framework for safe multi-agent control in large-scale environments with obstacles, where a large number of agents are required to maintain safety using only local information and reach their goal locations. We introduce a new class of certificates, termed graph control barrier function (GCBF), which are based on the well-established control barrier function theory for safety guarantees and utilize a graph structure for scalable and generalizable distributed control of MAS. We develop a novel theoretical framework to prove the safety of an arbitrary-sized MAS with a single GCBF. We propose a new training framework, GCBF+, that uses graph neural networks to parameterize a candidate GCBF and a distributed control policy. The proposed framework is distributed and is capable of taking point clouds from LiDAR, instead of actual state information, for real-world robotic applications. We illustrate the efficacy of the proposed method through various hardware experiments on a swarm of drones with objectives ranging from exchanging positions to docking on a moving target without collision. Additionally, we perform extensive numerical experiments, where the number and density of agents, as well as the number of obstacles, increase. Empirical results show that in complex environments with agents with nonlinear dynamics (e.g., Crazyflie drones), GCBF+ outperforms the best-performing hand-crafted CBF-based method by up to 20% for relatively small-scale MAS with up to 256 agents, and leading reinforcement learning (RL) methods by up to 40% for MAS with 1024 agents. Furthermore, the proposed method does not compromise goal-reaching performance to achieve high safety rates, a common trade-off in RL-based methods.
</description>
<pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158072</guid>
<dc:date>2025-01-15T00:00:00Z</dc:date>
</item>
<item>
<title>SeaScan: An Energy-Efficient Underwater Camera for Wireless 3D Color Imaging</title>
<link>https://hdl.handle.net/1721.1/158065</link>
<description>SeaScan: An Energy-Efficient Underwater Camera for Wireless 3D Color Imaging
Naeem, Nazish; Rademacher, Jack; Patnaik, Ritik; Boroushaki, Tara; Adib, Fadel
We present the design, implementation, and evaluation of SeaScan, an energy-efficient camera for 3D imaging of underwater environments. At the core of SeaScan's design is a trinocular lensing system, which employs three ultra-low-power monochromatic image sensors to reconstruct color images. Each of the sensors is equipped with a different filter (red, green, and blue) for color capture. The design introduces multiple innovations to enable reconstructing 3D color images from the captured monochromatic ones. This includes an ML-based cross-color alignment architecture to combine the monochromatic images. It also includes a cross-refractive compensation technique that overcomes the distortion of the wide-angle imaging of the low-power CMOS sensors in underwater environments. We built an end-to-end prototype of SeaScan, including color filter integration, 3D reconstruction, compression, and underwater backscatter communication. Our evaluation in real-world underwater environments demonstrates that SeaScan can capture underwater color images with as little as 23.6 mJ, a 37× reduction in energy consumption in comparison to the lowest-energy state-of-the-art underwater imaging system. We also report qualitative and quantitative evaluation of SeaScan's color reconstruction and demonstrate its success in comparison to multiple potential alternative techniques (both geometric and ML-based) in the literature. SeaScan's ability to image underwater environments at such low energy opens up important applications in long-term monitoring for ocean climate change, seafood production, and scientific discovery.
ACM MobiCom ’24, November 18–22, 2024, Washington D.C., DC, USA
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158065</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>SURF: Eavesdropping on Underwater Communications from the Air</title>
<link>https://hdl.handle.net/1721.1/158064</link>
<description>SURF: Eavesdropping on Underwater Communications from the Air
Mollahosseini, Poorya; Afzal, Sayed Saad; Adib, Fadel; Ghasempour, Yasaman
This paper investigates how an airborne node can eavesdrop on the underwater acoustic communication between submerged nodes. Conventionally, such eavesdropping has been assumed impossible as acoustic signals do not cross the water-air boundary. Here, we demonstrate that underwater acoustic communications signals can be picked up and (under certain conditions) decoded using an airborne mmWave radar due to the minute vibrations induced by the communication signals on the water surface. We implemented and evaluated a proof-of-concept prototype of our method and tested it in controlled (pool) and uncontrolled (lake) environments. Our results demonstrate that an airborne device can identify the modulation and bitrate of acoustic transmissions from an uncooperative underwater transmitter (victim), and even decode the transmitted symbols. Unlike conventional over-the-air communications, our results indicate that the secrecy of underwater links varies depending on the modulation type, and we provide insights into the underlying reasons behind these differences. We also highlight the theoretical limitations of such a threat model, and how these results may have a significant impact on the stealthiness of underwater communications, with particular concern to submarine warfare, underwater operations (e.g., oil &amp; gas, search &amp; rescue, mining), and conservation of endangered species. Finally, our investigation uncovers countermeasures that can be used to improve or restore the stealthiness of underwater acoustic communications against such threats.
ACM MobiCom ’24, November 18–22, 2024, Washington D.C., DC, USA
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158064</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy</title>
<link>https://hdl.handle.net/1721.1/158063</link>
<description>AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy
Schoenegger, Philipp; Park, Peter; Karger, Ezra; Trott, Sean; Tetlock, Philip
Large language models (LLMs) match and sometimes exceed human performance in many domains. This study explores the potential of LLMs to augment human judgment in a forecasting task. We evaluate the effect on human forecasters of two LLM assistants: one designed to provide high-quality ("superforecasting") advice, and the other designed to be overconfident and base-rate neglecting, thus providing noisy forecasting advice. We compare participants using these assistants to a control group that received a less advanced model that did not provide numerical predictions or engage in explicit discussion of predictions. Participants (N = 991) answered a set of six forecasting questions and had the option to consult their assigned LLM assistant throughout. Our preregistered analyses show that interacting with each of our frontier LLM assistants significantly enhances prediction accuracy by between 24% and 28% compared to the control group. Exploratory analyses showed a pronounced outlier effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 41%, compared with 29% for the noisy assistant. We further examine whether LLM forecasting augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our data do not consistently support these hypotheses. Our results suggest that access to a frontier LLM assistant, even a noisy one, can be a helpful decision aid in cognitively demanding tasks compared to a less powerful model that does not provide specific forecasting advice. However, the effects of outliers suggest that further research into the robustness of this pattern is needed.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158063</guid>
</item>
<item>
<title>Finite- and infinite-volume study of DDπ scattering</title>
<link>https://hdl.handle.net/1721.1/158062</link>
<description>Finite- and infinite-volume study of DDπ scattering
Dawid, Sebastian M.; Romero-López, Fernando; Sharpe, Stephen R.
We develop a comprehensive framework for extracting the pole position and properties of the doubly-charmed tetraquark Tcc+(3875) from lattice QCD data using the relativistic three-particle formalism. This approach incorporates the effect of the one-pion exchange diagram in DDπ and DD∗ scattering, making it applicable at energies coinciding with the left-hand cut in the partial-wave projected DD∗ amplitude. We present an example application of this framework to existing lattice QCD data at mπ = 280 MeV. We solve the integral equations describing the DDπ reaction, use LSZ reduction to determine the corresponding DD∗ amplitude, and find the values of the infinite-volume two- and three-body K matrices that lead to agreement with lattice DD∗ phase shifts within their uncertainties. Using these K matrices in the three-particle quantization condition, we describe the finite-volume DD∗ spectrum and find good agreement with the lattice QCD energies. Our results suggest that, at this pion mass, the tetraquark appears as a pair of subthreshold complex poles whose precise location strongly depends on the value of the DDπ three-particle K matrix.
</description>
<pubDate>Thu, 09 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158062</guid>
<dc:date>2025-01-09T00:00:00Z</dc:date>
</item>
<item>
<title>Global surgery and climate change: how global surgery can prioritise both the health of the planet and its people</title>
<link>https://hdl.handle.net/1721.1/158061</link>
<description>Global surgery and climate change: how global surgery can prioritise both the health of the planet and its people
Chen, Sophia; Zolo, Yvan; Ngulube, Lumbani; Isiagi, Moses; Maswime, Salome
Climate change is an emerging global health crisis, disproportionately affecting low- and middle-income countries (LMICs) where health outcomes are increasingly compromised by environmental stressors such as pollution, natural disasters, and human migration. With a focus on promoting health equity, Global Surgery advocates for expanding access to surgical care and enhancing health outcomes, particularly in resource-limited and disaster-affected areas like LMICs. The healthcare industry—and more specifically, surgical care—significantly contributes to the global carbon footprint, primarily through resource-intensive settings, i.e., operating rooms that generate greenhouse gases and substantial medical waste. Therefore, Global Surgery efforts aimed at improving surgical access through an increase in surgical volumes may inadvertently exacerbate health challenges for vulnerable populations by further contributing to environmental degradation. This predicament is particularly pronounced in LMICs, which already suffer a disproportionate share of the global burden of disease, and where the demand for surgery is rising without corresponding resilient infrastructure. LMICs face a double jeopardy of health inequity coupled with climate vulnerability. As a movement positioned to improve health around the world, Global Surgery has an increasingly significant role in envisioning and ensuring a sustainable future. Global Surgery initiatives must prioritise sustainable infrastructure in both high-income countries (HICs) and LMICs, all while accounting for the unequal polluting contributions between HICs and LMICs and, consequently, moral responsibilities moving forward. Moreover, through targeting upstream causes of poor health at urban and perioperative levels, Global Surgery’s interventions may help to reduce the global burden of disease—avoiding preventable surgeries and their carbon footprints from the outset.
Altogether, Global Surgery and climate change are two matters of social justice whose solutions must synergistically centralise the health of both the planet and its most vulnerable people.
</description>
<pubDate>Sat, 11 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158061</guid>
<dc:date>2025-01-11T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the double-differential inclusive jet cross section in proton-proton collisions at √s = 5.02 TeV</title>
<link>https://hdl.handle.net/1721.1/158044</link>
<description>Measurement of the double-differential inclusive jet cross section in proton-proton collisions at √s = 5.02 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Lechner, L.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; The CMS collaboration
The inclusive jet cross section is measured as a function of jet transverse momentum pT and rapidity y. The measurement is performed using proton-proton collision data at √s = 5.02 TeV, recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 27.4 pb−1. The jets are reconstructed with the anti-kT algorithm using a distance parameter of R = 0.4, within the rapidity interval |y| &lt; 2, and across the kinematic range 0.06 &lt; pT &lt; 1 TeV. The jet cross section is unfolded from detector to particle level using the determined jet response and resolution. The results are compared to predictions of perturbative quantum chromodynamics, calculated at both next-to-leading order and next-to-next-to-leading order. The predictions are corrected for nonperturbative effects, and presented for a variety of parton distribution functions and choices of the renormalization/factorization scales and the strong coupling αS.
</description>
<pubDate>Thu, 02 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158044</guid>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</item>
<item>
<title>Long-lived particle reconstruction downstream of the LHCb magnet</title>
<link>https://hdl.handle.net/1721.1/158039</link>
<description>Long-lived particle reconstruction downstream of the LHCb magnet
LHCb collaboration
Charged-particle trajectories are usually reconstructed with the LHCb detector using combined information from the tracking devices placed upstream and downstream of the 4 Tm dipole magnet. Trajectories reconstructed using only information from the tracker downstream of the dipole magnet, which are referred to as T tracks, have not been used for physics analysis to date. The challenges of the reconstruction of long-lived particles with T tracks for physics use are discussed and solutions are proposed. The feasibility and the tracking performance are studied using samples of long-lived Λ and K0S hadrons decaying between 6.0 and 7.6 m downstream of the proton–proton collision point, thereby traversing most of the magnetic field region and providing maximal sensitivity to magnetic and electric dipole moments. The reconstruction can be expanded upstream to about 2.5 m for use in direct searches of exotic long-lived particles. The data used in this analysis have been recorded between 2015 and 2018 and correspond to an integrated luminosity of 6 fb−1. The results obtained demonstrate the possibility to further extend the decay volume and the physics reach of the LHCb experiment.
</description>
<pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158039</guid>
<dc:date>2025-01-06T00:00:00Z</dc:date>
</item>
<item>
<title>Taurine prevents mitochondrial dysfunction and protects mitochondria from reactive oxygen species and deuterium toxicity</title>
<link>https://hdl.handle.net/1721.1/158031</link>
<description>Taurine prevents mitochondrial dysfunction and protects mitochondria from reactive oxygen species and deuterium toxicity
Seneff, Stephanie; Kyriakopoulos, Anthony M.
Taurine, although not a coding amino acid, is the most common free amino acid in the body. Taurine has multiple and complex functions in protecting mitochondria against oxidative-nitrosative stress. In this comprehensive review paper, we introduce a novel potential role for taurine in protecting from deuterium (heavy hydrogen) toxicity. This can be of crucial importance for both normal and cancer cells, which have highly different mitochondrial redox statuses. Deuterium is an isotope of hydrogen with a neutron as well as a proton, making it about twice as heavy as hydrogen. We first explain the important role that the gut microbiome and the gut sulfomucin barrier play in deuterium management. We describe the synergistic effects of taurine in the gut to protect against the deleterious accumulation of deuterium in the mitochondria, which disrupts ATP synthesis by ATPase pumps. Moreover, taurine’s derivatives, N-chlorotaurine (NCT) and N-bromotaurine (NBrT), produced through the spontaneous reaction of taurine with hypochlorite and hypobromite, have fascinating regulatory roles in protecting from oxidative stress and beyond. We describe how taurine could potentially alleviate deuterium stress, primarily through metabolic collaboration among various gut microflora to produce deuterium-depleted nutrients and deuterium-depleted water, and in this way protect against a leaky gut barrier, inflammatory bowel disease, and colon cancer.
</description>
<pubDate>Fri, 10 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158031</guid>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</item>
<item>
<title>Vehicle Routing Problem Formulation for Efficient Tracking of Objects in Low Earth Orbit</title>
<link>https://hdl.handle.net/1721.1/158022</link>
<description>Vehicle Routing Problem Formulation for Efficient Tracking of Objects in Low Earth Orbit
Shtofenmakher, Allan; Balakrishnan, Hamsa
The increasing number of resident space objects (RSOs) in low Earth orbit (LEO) endangers the sustainable use of space and necessitates continuous surveillance to prevent collisions. The U.S. Space Surveillance Network (SSN) tracks tens of thousands of LEO RSOs using a suite of ground-based sensors; however, the algorithms that task and schedule these sensors have not improved significantly in the last twenty years. In that time, the number of catalogued LEO RSOs has more than doubled, calling for more efficient tasking algorithms. Prior research has primarily focused on improving the tasking of ground-based sensors for tracking RSOs in geosynchronous Earth orbit (GEO). In this paper, we extend recent work on a vehicle routing problem (VRP) formulation for optimal tasking and scheduling of ground-based radars for tracking GEO RSOs and apply it to tracking LEO RSOs. We introduce a modified VRP formulation, which features discrete time indexing and leverages sparse, binary feasibility matrices for reduced computation time, and present results for several simulations. We show that our approach can compute global and regional optima for tracking (a) 100 targets using 4 ground-based sensors over a 5-hour time horizon in under 5 minutes on a laptop computer and (b) 10,000 targets using 27 ground-based sensors over a 24-hour time horizon in about 4 hours on a high-performance computing cluster.
AIAA SCITECH 2025 Forum, Session: Spacecraft and Launch Guidance, Navigation, and Control III, 6–10 January 2025, Orlando, FL
</description>
<pubDate>Fri, 03 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/158022</guid>
<dc:date>2025-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Reversibly Switching Hydrogen-Responsive Palladium-Graphene Composite Membranes</title>
<link>https://hdl.handle.net/1721.1/157990</link>
<description>Reversibly Switching Hydrogen-Responsive Palladium-Graphene Composite Membranes
Kim, Lohyun; Persad, Aaron; Cheng, Chi; Field, Randall; Karnik, Rohit
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157990</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Approaching coupled-cluster accuracy for molecular electronic structures with multi-task learning (preprint)</title>
<link>https://hdl.handle.net/1721.1/157988</link>
<description>Approaching coupled-cluster accuracy for molecular electronic structures with multi-task learning (preprint)
Tang, Hao; Xiao, Brian; He, Wenhao; Subasic, Pero; Harutyunyan, Avetik R.; Wang, Yao; Liu, Fang; Xu, Haowei; Li, Ju
</description>
<pubDate>Fri, 27 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157988</guid>
<dc:date>2024-12-27T00:00:00Z</dc:date>
</item>
<item>
<title>A Digital Phenotypic Assessment in Neuro-Oncology (DANO): A Pilot Study on Sociability Changes in Patients Undergoing Treatment for Brain Malignancies</title>
<link>https://hdl.handle.net/1721.1/157962</link>
<description>A Digital Phenotypic Assessment in Neuro-Oncology (DANO): A Pilot Study on Sociability Changes in Patients Undergoing Treatment for Brain Malignancies
Siddi, Francesca; Emedom-Nnamdi, Patrick; Catalino, Michael P.; Rana, Aakanksha; Boaro, Alessandro; Dawood, Hassan Y.; Sala, Francesco; Onnela, Jukka-Pekka; Smith, Timothy R.
Cancers 2025, 17(1), 139; https://doi.org/10.3390/cancers17010139. Received: 15 October 2024 / Revised: 24 December 2024 / Accepted: 3 January 2025 / Published: 4 January 2025.
Previously presented as an oral poster at the 2021 Annual Meeting of the European Association of Neurosurgical Societies (eEANS), Virtual Congress, 1–7 October 2021; EP13028.
Simple Summary: Nowadays, smartphones are the principal tool for interactions between people. Mobile health applications might be used to study cognitive functions in the neuro-oncological population. Many brain tumor patients have cognitive challenges that affect sociability. Digital phenotyping can characterize social and spatial dimensions of human behavior from mobile phone call records. The aim of this study was to begin exploring this technology in brain cancer patients, focusing on sociability data. The results of this pilot study indicate that a digital assessment in neuro-oncology can be used to characterize and follow the social activity of patients’ lives. Changes in the patient’s social network relate to disease progression, suggesting a new tool to improve the complex evaluation of underserved brain cancer patients.
Abstract: Background: The digital phenotyping tool has great potential for the deep characterization of neurological and quality-of-life assessments in brain tumor patients. Phone communication activities (details on call and text use) can provide insight into the patients’ sociability. Methods: We prospectively collected digital-phenotyping data from six brain tumor patients. The data were collected using the Beiwe application installed on their personal smartphones. We constructed several daily sociability features from phone communication logs, including the number of incoming and outgoing text messages and calls, the length of messages and duration of calls, message reciprocity, the number of communication partners, and the number of missed calls. We compared variability in these sociability features against those obtained from a control group, matched for age and sex, selected among patients with a herniated disc. Results: In brain tumor patients, phone-based communication appears to deteriorate with time, as evident in the trend for total outgoing minutes, total outgoing calls, and call out-degree. Conclusions: These measures indicate a possible decrease in sociability over time in brain tumor patients that may correlate with survival. This exploratory analysis suggests that a quantifiable digital sociability phenotype exists and is comparable for patients with different survival outcomes. Overall, assessing neurocognitive function using digital phenotyping appears promising.
</description>
<pubDate>Sat, 04 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157962</guid>
<dc:date>2025-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>Evaluation and Analysis of Next-Generation FY-4A LPW Products over Various Climatic Regions in China</title>
<link>https://hdl.handle.net/1721.1/157961</link>
<description>Evaluation and Analysis of Next-Generation FY-4A LPW Products over Various Climatic Regions in China
Zhang, Wenyuan; Xiao, Xinyu; Peng, Jinsong; Zhang, Shubi; Shehaj, Endrit; Moeller, Gregor
Atmospheric water vapor, a significant constituent of the atmosphere, affects the energy balance between Earth’s atmosphere and space, and its changes play a crucial role in the greenhouse effect. Layer precipitable water (LPW), which represents the column-integral water vapor within a vertical range, is increasingly recognized as a key indicator of atmospheric water vapor distributions and variations. Due to its capability for layer-wise monitoring, LPW products have the potential to offer valuable insights into the characteristics and evolution of climatic regions through refined atmospheric spatiotemporal information. However, the observational quality and spatiotemporal variations of LPW products across different climate zones, e.g., the diverse climatic regions in China, have not been systematically assessed. In this paper, we aim to evaluate and analyze the climatic and seasonal variations of FY-4A LPW products across five climatic regions in China, contributing to a deeper understanding of water vapor variability and providing valuable data for climate change research. A surface pressure calibration algorithm for ERA5 data is developed to calculate accurate ERA5 LPW products. The results show that all four FY-4A LPWs are consistent with ERA5 LPWs, with an overall root mean square error (RMSE) of 2.58, 0.90, 1.30, and 1.01 mm, respectively. Furthermore, FY-4A LPWs are underestimated in the temperate monsoon area and overestimated in the subtropical and tropical monsoon regions, while FY-4A observations agree well with ERA5 reanalysis in temperate continental and plateau mountain zones. These analyses highlight the remarkable climate dependency of FY-4A LPWs and their potential for climate-related studies.
</description>
<pubDate>Mon, 23 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157961</guid>
<dc:date>2024-12-23T00:00:00Z</dc:date>
</item>
<item>
<title>Age- and Sex-Based Developmental Biomarkers in Eye Movements</title>
<link>https://hdl.handle.net/1721.1/157960</link>
<description>Age- and Sex-Based Developmental Biomarkers in Eye Movements
Carrick, Frederick Robert; Hunfalvay, Melissa; Bolte, Takumi; Azzolino, Sergio F.; Abdulrahman, Mahera; Hankir, Ahmed; Antonucci, Matthew M.; Al-Rumaihi, Nouf
Background: Eye movement research serves as a critical tool for assessing brain function, diagnosing neurological and psychiatric disorders, and understanding cognition and behavior. Sex differences have largely been underreported or ignored in neurological research. However, eye movement features provide biomarkers that are useful for disease classification with superior accuracy and robustness compared to previous classifiers for neurological diseases. Neurological diseases show sex specificity, yet eye movement analysis has not accounted for sex differences. Methods: The study involved subjects recruited from 804 sites equipped with RightEye Vision Systems, primarily located in optometry practices across the United States. Subjects completed six eye movement assessments: circular smooth pursuit (CSP), horizontal smooth pursuit (HSP), vertical smooth pursuit (VSP), horizontal saccades (HS), vertical saccades (VS), and fixation stability (FS). Eye movements were analyzed and classified in accordance with age and sex by multiple t-tests and linear regression models. Results: This study represented a large sample size of 23,557 subjects, with 11,871 males and 11,686 females, representing ages from birth through 80 years. We observed statistically significant differences for all eye movement functions between males and females. Conclusions: We demonstrate that eye movements are sex-specific and offer normative data to compare sex-specific eye movement function by age. Novel baseline metrics can be compared to individual performance, regardless of sex. This study represents significant progress in linking eye movements with brain function and clinical syndromes, allowing researchers and clinicians to stratify individuals by age and sex.
</description>
<pubDate>Sat, 21 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157960</guid>
<dc:date>2024-12-21T00:00:00Z</dc:date>
</item>
<item>
<title>A Learning Probabilistic Boolean Network Model of a Smart Grid with Applications in System Maintenance</title>
<link>https://hdl.handle.net/1721.1/157959</link>
<description>A Learning Probabilistic Boolean Network Model of a Smart Grid with Applications in System Maintenance
Rivera Torres, Pedro Juan; Chen, Chen; Macías-Aguayo, Jaime; Rodríguez González, Sara; Prieto Tejedor, Javier; Llanes Santiago, Orestes; García, Carlos Gershenson; Kanaan Izquierdo, Samir
Probabilistic Boolean Networks can capture the dynamics of complex biological systems as well as other non-biological systems, such as manufacturing systems and smart grids. In this proof-of-concept manuscript, we propose a Probabilistic Boolean Network architecture with a learning process that significantly improves the prediction of the occurrence of faults and failures in smart-grid systems. This idea was tested in a Probabilistic Boolean Network model of the WSCC nine-bus system that incorporates Intelligent Power Routers on every bus. The model learned the equality and negation functions in the different experiments performed. We take advantage of the complex properties of Probabilistic Boolean Networks to use them as a positive feedback adaptive learning tool and to illustrate that these networks could have a more general use than previously thought. This multi-layered PBN architecture provides a significant improvement in terms of performance for fault detection, within a positive-feedback network structure that is more tolerant of noise than other techniques.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157959</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the Dynamics of Canine-Assisted Interactions: A Wearable Approach to Understanding Interspecies Well-Being</title>
<link>https://hdl.handle.net/1721.1/157958</link>
<description>Exploring the Dynamics of Canine-Assisted Interactions: A Wearable Approach to Understanding Interspecies Well-Being
Holder, Timothy R. N.; Nichols, Colt; Summers, Emily; Roberts, David L.; Bozkurt, Alper
Animals 2024, 14(24), 3628; https://doi.org/10.3390/ani14243628. Received: 11 October 2024 / Revised: 15 November 2024 / Accepted: 27 November 2024 / Published: 16 December 2024.
Simple Summary: This study utilizes electronic sensors to investigate the outcomes of canine-assisted interactions (CAIs), a growing therapeutic field, for both human and animal participants. It represents the first attempt to deploy synchronized wearable systems on both humans and dogs, allowing for the continuous and simultaneous collection of physiological and behavioral data during interactions. Leveraging this data, the research examines the real-time dynamics of CAIs, moving beyond traditional survey-based pre- and post-session evaluations. Three innovative visualization tools—a subsession heatmap, a synchrony table, and a metric correlation matrix—are introduced to better characterize the interactions and bonding within human-dog dyads. Preliminary exploratory analyses provide insights that inspire further investigation into CAI mechanisms. This research marks a significant step forward in using multimodal data collection to deepen our understanding of human-animal interactions, particularly in therapeutic settings.
Abstract: Canine-assisted interactions (CAIs) have been explored to offer therapeutic benefits to human participants in various contexts, from addressing cancer-related fatigue to treating post-traumatic stress disorder. Despite their widespread adoption, there are still unresolved questions regarding the outcomes for both humans and animals involved in these interactions. Previous attempts to address these questions have suffered from core methodological weaknesses, especially due to the absence of tools for efficient objective evaluation and a lack of focus on the canine perspective. In this article, we present a first-of-its-kind system and study to collect simultaneous and continuous physiological data from both of the CAI interactants. Motivated by our extensive field reviews and stakeholder feedback, this comprehensive wearable system is composed of custom-designed and commercially available sensor devices. We performed a repeated-measures pilot study to combine data collected via this system with a novel dyadic behavioral coding method and short- and long-term surveys. We evaluated these multimodal data streams independently, and we further correlated the psychological, physiological, and behavioral metrics to better elucidate the outcomes and dynamics of CAIs. Confirming previous field results, human electrodermal activity is the measure that most strongly distinguishes between the dyads’ non-interaction and interaction periods. Valence, arousal, and the positive affect of the human participant significantly increased during interaction with the canine participant. Also, we observed in our pilot study that (a) the canine heart rate was more dynamic than the human’s during interactions, (b) the surveys proved to be the best indicator of the subjects’ affective state, and (c) the behavior coding approaches best tracked the bond quality between the interacting dyads.
Notably, we found that most of the interaction sessions were characterized by extended neutral periods with some positive and negative peaks, where the bonded pairs might display decreased behavioral synchrony. We also present three new representations of the internal and overall dynamics of CAIs for adoption by the broader field. Lastly, this paper discusses ongoing options for further dyadic analysis, interspecies emotion prediction, integration of contextually relevant environmental data, and standardization of human–animal interaction equipment and analytical approaches. Altogether, this work takes a significant step forward on a promising path to our better understanding of how CAIs improve well-being and how interspecies psychophysiological states can be appropriately measured.
</description>
<pubDate>Mon, 16 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157958</guid>
<dc:date>2024-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Arthroscopic Bone Block and Arthroscopic Latarjet for Anterior Shoulder Dislocation—Technical Note with Tricks and Tips for Conversion and Successful Surgery</title>
<link>https://hdl.handle.net/1721.1/157957</link>
<description>Arthroscopic Bone Block and Arthroscopic Latarjet for Anterior Shoulder Dislocation—Technical Note with Tricks and Tips for Conversion and Successful Surgery
Longo, Umile Giuseppe; Marcello, Gianmarco; Nazarian, Ara; DeAngelis, Joseph; D’Hooghe, Margaux; D’Hooghe, Pieter
Background: The treatment of patients affected by recurrent anterior shoulder instability has received more attention in the last ten years, focusing on the management of bone loss, which is crucial in predicting postoperative recurrence risk. Recently, various bone grafting techniques and different fixation methods have been developed to preserve native anatomy and reduce complications. Nowadays, glenoid bone reconstruction is usually carried out via the Latarjet procedure or free bone block technique. While the Latarjet procedure has traditionally been considered the best option, the bone block has been demonstrated to be a successful procedure. Even though the indication to perform a free bone block or a Latarjet procedure may be given preoperatively, in cases where the choice between the two procedures is unclear, the decision can be made intraoperatively, given the possibility to switch from one to another. This technical note aims to outline our techniques for the arthroscopic Latarjet procedure and the arthroscopic free bone block, as well as discuss the indications, benefits and downsides of each procedure. Technical tips and tricks are provided. Methods: A step-by-step thorough description of bone block and Latarjet procedures is provided, as well as a comparison of advantages and disadvantages of each technique and tips to avoid complications. Respective indications are discussed. Results: Both the procedures have benefits and downsides. The arthroscopic Latarjet procedure is the most effective in addressing anterior shoulder instability, but is more elaborate, has a shallow learning curve and can have a high complication rate. The bone block technique is an anatomic procedure with a shorter learning curve but has fewer indications. Conclusion: The Latarjet is currently considered the gold standard for glenoid bone grafting. The bone block technique can allegedly be seen as being “in the middle” of the soft tissue repair and Latarjet procedures. 
Many factors should be considered when choosing the right surgical technique, and treatment plans must be customized for each patient. More studies with long-term follow-up are needed to evaluate the efficacy of arthroscopic bone grafting procedures in various subtypes of patients based on bipolar bone loss assessment and individual risk factors.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157957</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Domain Adversarial Convolutional Neural Network Improves the Accuracy and Generalizability of Wearable Sleep Assessment Technology</title>
<link>https://hdl.handle.net/1721.1/157956</link>
<description>Domain Adversarial Convolutional Neural Network Improves the Accuracy and Generalizability of Wearable Sleep Assessment Technology
Nunes, Adonay S.; Patterson, Matthew R.; Gerstel, Dawid; Khan, Sheraz; Guo, Christine C.; Neishabouri, Ali
Wearable accelerometers are widely used as an ecologically valid and scalable solution for long-term at-home sleep monitoring in both clinical research and care. In this study, we applied a deep learning domain adversarial convolutional neural network (DACNN) model to this task and demonstrated that this new model outperformed existing sleep algorithms in classifying sleep–wake and estimating sleep outcomes based on wrist-worn accelerometry. This model generalized well to another dataset based on different wearable devices and activity counts, achieving an accuracy of 80.1% (sensitivity 84% and specificity 58%). Compared to commonly used sleep algorithms, this model resulted in the smallest error in wake after sleep onset (MAE of 48.7, Cole–Kripke of 86.2, Sadeh of 108.2, z-angle of 57.5) and sleep efficiency (MAE of 11.8, Cole–Kripke of 18.4, Sadeh of 23.3, z-angle of 9.3) outcomes. Despite being around for many years, accelerometer-alone devices continue to be useful due to their low cost, long battery life, and ease of use. Improving the accuracy and generalizability of sleep algorithms for accelerometer wrist devices is of utmost importance. We here demonstrated that domain adversarial convolutional neural networks can improve the overall accuracy, especially the specificity, of sleep–wake classification using wrist-worn accelerometer data, substantiating its use as a scalable and valid approach for sleep outcome assessment in real life.
</description>
<pubDate>Sat, 14 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157956</guid>
<dc:date>2024-12-14T00:00:00Z</dc:date>
</item>
<item>
<title>Deep Water Subsea Energy Storage, Lessons Learned from the Offshore Oil and Gas Industry</title>
<link>https://hdl.handle.net/1721.1/157955</link>
<description>Deep Water Subsea Energy Storage, Lessons Learned from the Offshore Oil and Gas Industry
Juhlin, Rasmus; Slocum, Alexander H.; Assadi, Mohsen
In a future where a large portion of power will be supplied by highly intermittent sources such as solar and wind power, energy storage will form a crucial part of the power mix, ensuring that there is enough flexibility in the system to cope with the intermittency. With further development of pumped storage hydro constrained by the lack of remaining suitable topography, a novel Subsea Pumped Hydro Storage (SPHS) concept has emerged as a promising solution to utilize the ocean space for large-scale energy storage. While previous publications address thermodynamic efficiency limits, there is a notable lack of research on turbine selection, design, and cost estimation based on best practices. This paper presents a comprehensive overview of current state-of-the-art subsea engineering and its significant achievements pioneered by the oil and gas industry. This paper introduces a robust methodological framework for calculating the costs of concrete SPHS tanks, factoring in longevity and best installation practices for structures designed to endure for half a century. The results indicate that with an optimized design, the cost of an SPHS concrete storage tank is approximately $0.15/Wh. This work lays the groundwork for future advancements in SPHS, building on the substantial progress within subsea engineering over recent decades, and marks a significant step towards realizing the potential of this concept in the renewable energy landscape.
</description>
<pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157955</guid>
<dc:date>2024-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Advances in 2D Molybdenum Disulfide Transistors for Flexible and Wearable Electronics</title>
<link>https://hdl.handle.net/1721.1/157954</link>
<description>Advances in 2D Molybdenum Disulfide Transistors for Flexible and Wearable Electronics
Kwak, Kyoungwon; Yoon, Hyewon; Hong, Seongin; Kang, Byung Ha
As the trajectory of developing advanced electronics is shifting towards wearable electronics, various methods for implementing flexible and bendable devices capable of conforming to curvilinear surfaces have been widely investigated. In particular, achieving high-performance and stable flexible transistors remains a significant technical challenge, as transistors are fundamental components of electronics, playing a key role in overall performance. Among the wide range of candidates for flexible transistors, two-dimensional (2D) molybdenum disulfide (MoS2)-based transistors have emerged as potential solutions to address these challenges. Unlike other 2D materials, the 2D MoS2 offers numerous advantages, such as high carrier mobility, a tunable bandgap, superior mechanical strength, and exceptional chemical stability. This review emphasizes the novel techniques of the fabrication process, structure, and material to achieve flexible MoS2 transistor-based applications. Furthermore, the distinctive feature of this review is its focus on studies published in high-impact journals over the past decade, emphasizing their methods for developing MoS2 transistors into various applications. Finally, the review addresses technical challenges and provides an outlook for flexible and wearable MoS2 transistors.
</description>
<pubDate>Thu, 05 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157954</guid>
<dc:date>2024-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Enabling Perspective-Aware Ai with Contextual Scene Graph Generation</title>
<link>https://hdl.handle.net/1721.1/157953</link>
<description>Enabling Perspective-Aware Ai with Contextual Scene Graph Generation
Platnick, Daniel; Alirezaie, Marjan; Rahnama, Hossein
This paper advances contextual image understanding within perspective-aware Ai (PAi), an emerging paradigm in human–computer interaction that enables users to perceive and interact through each other’s perspectives. While PAi relies on multimodal data—such as text, audio, and images—challenges in data collection, alignment, and privacy have led us to focus on enabling the contextual understanding of images. To achieve this, we developed perspective-aware scene graph generation with LLM post-processing (PASGG-LM). This framework extends traditional scene graph generation (SGG) by incorporating large language models (LLMs) to enhance contextual understanding. PASGG-LM integrates classical scene graph outputs with LLM post-processing to infer richer contextual information, such as emotions, activities, and social contexts. To test PASGG-LM, we introduce the context-aware scene graph generation task, where the goal is to generate a context-aware situation graph describing the input image. We evaluated PASGG-LM pipelines using state-of-the-art SGG models, including Motifs, Motifs-TDE, and RelTR, and showed that fine-tuning LLMs, particularly GPT-4o-mini and Llama-3.1-8B, improves performance in terms of R@K, mR@K, and mAP. Our method is capable of generating scene graphs that capture complex contextual aspects, advancing human–machine interaction by enhancing the representation of diverse perspectives. Future directions include refining contextual scene graph models and expanding multi-modal data integration for PAi applications in domains such as healthcare, education, and social robotics.
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157953</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>SPECTER: efficient evaluation of the spectral EMD</title>
<link>https://hdl.handle.net/1721.1/157951</link>
<description>SPECTER: efficient evaluation of the spectral EMD
Gambhir, Rikab; Larkoski, Andrew J.; Thaler, Jesse
The Energy Mover’s Distance (EMD) has seen use in collider physics as a metric between events and as a geometric method of defining infrared and collinear safe observables. Recently, the Spectral Energy Mover’s Distance (SEMD) has been proposed as a more analytically tractable alternative to the EMD. In this work, we obtain a closed-form expression for the Riemannian-like p = 2 SEMD metric between events, eliminating the need to numerically solve an optimal transport problem. Additionally, we show how the SEMD can be used to define event and jet shape observables by minimizing the distance between events and parameterized energy flows (similar to the EMD), and we obtain closed-form expressions for several of these observables. We also present the Specter framework, an efficient and highly parallelized implementation of the SEMD metric and SEMD-derived shape observables as an analogue of the previously-introduced Shaper for EMD-based computations. We demonstrate that computing the SEMD with Specter can be up to a thousand times faster than computing the EMD with standard optimal transport libraries.
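The practical payoff of a closed-form metric is that no optimal transport problem has to be solved numerically. As a generic illustration (not the paper's actual SEMD expression) of why one-dimensional spectral formulations collapse optimal transport to sorting, the 2-Wasserstein distance between two equal-weight point sets on a line reduces to the root-mean-square difference of the sorted values:

```python
import math

def w2_sorted(xs, ys):
    """2-Wasserstein distance between two equal-size, equal-weight
    1-D point sets: in one dimension the optimal transport plan is
    simply the order-preserving (sorted-to-sorted) matching."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs))

d = w2_sorted([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```

For identical spectra the distance vanishes; speedups of the kind Specter reports come, loosely speaking, from replacing an iterative transport solver with closed-form evaluations in this sorted-matching spirit.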
</description>
<pubDate>Mon, 30 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157951</guid>
<dc:date>2024-12-30T00:00:00Z</dc:date>
</item>
<item>
<title>Resurgence in Liouville theory</title>
<link>https://hdl.handle.net/1721.1/157950</link>
<description>Resurgence in Liouville theory
Benjamin, Nathan; Collier, Scott; Maloney, Alexander; Meruliya, Viraj
Liouville conformal field theory is a prototypical example of an exactly solvable quantum field theory, in the sense that the correlation functions in an arbitrary background can be determined exactly using only the constraints of unitarity and crossing symmetry. For example, the three point correlation functions are given by the famous formula of Dorn-Otto-Zamolodchikov-Zamolodchikov (DOZZ). Unlike many other exactly solvable theories, Liouville theory has a continuously tunable parameter — essentially ℏ — which is related to the central charge of the theory. Here we investigate the nature of the perturbative expansion in powers of ℏ, which is the loop expansion around a semi-classical solution. We show that the perturbative coefficients grow factorially, as expected of a Feynman diagram expansion, and take the form of an asymptotic series. We identify the singularities in the Borel plane, and show that they are associated with complex instanton solutions of Liouville theory; they correspond precisely to the complex solutions described by Harlow, Maltz, and Witten. Both single- and multi-valued solutions of Liouville appear. We show that the perturbative loop expansions around these different saddle points mix in the way expected for a trans-series expansion. Thus Liouville theory provides a calculable example of a quantum field theory where perturbative and instanton contributions can be summed up and assembled into a finite answer.
</description>
<pubDate>Fri, 03 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157950</guid>
<dc:date>2025-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Model-independent search for pair production of new bosons decaying into muons in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157949</link>
<description>Model-independent search for pair production of new bosons decaying into muons in proton-proton collisions a √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
The results of a model-independent search for the pair production of new bosons within a mass range of 0.21 &lt; m &lt; 60 GeV are presented. This study utilizes events with a four-muon final state. We use two data sets, comprising 41.5 fb−1 and 59.7 fb−1 of proton-proton collisions at √s = 13 TeV, recorded in 2017 and 2018 by the CMS experiment at the CERN LHC. The study of the 2018 data set includes a search for displaced signatures of a new boson within the proper decay length range of 0 &lt; cτ &lt; 100 mm. Our results are combined with a previous CMS result, based on 35.9 fb−1 of proton-proton collisions at √s = 13 TeV collected in 2016. No significant deviation from the expected background is observed. Results are presented in terms of a model-independent upper limit on the product of cross section, branching fraction, and acceptance. The findings are interpreted across various benchmark models, such as an axion-like particle model, a vector portal model, the next-to-minimal supersymmetric standard model, and a dark supersymmetric scenario, including those predicting a non-negligible proper decay length of the new boson. In all considered scenarios, substantial portions of the parameter space are excluded, expanding upon prior results.
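For intuition on how an upper limit on the product of cross section, branching fraction, and acceptance scales with integrated luminosity, a simple counting-experiment estimate can be sketched (illustrative only; the actual CMS analysis uses a full statistical treatment, not this back-of-envelope Poisson formula):

```python
import math

def sigma_upper_limit(lumi_fb, n_obs=0, cl=0.95):
    """Back-of-envelope Poisson upper limit on sigma*B*A (in fb)
    for zero observed events and negligible background:
    P(0 events | N expected) = exp(-N) = 1 - cl,
    so N_UL = -ln(1 - cl), roughly 3.0 events at 95% CL.
    Illustrative only, not the collaboration's procedure."""
    assert n_obs == 0, "closed form shown only for zero observed events"
    n_ul = -math.log(1.0 - cl)
    return n_ul / lumi_fb

# Combined 2016-2018 luminosity quoted in the abstract: 35.9 + 41.5 + 59.7 fb^-1
limit_fb = sigma_upper_limit(35.9 + 41.5 + 59.7)
```

The estimate makes the luminosity scaling explicit: tripling the data set tightens the limit by a factor of three in this zero-background idealization.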
</description>
<pubDate>Mon, 23 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157949</guid>
<dc:date>2024-12-23T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Λb0 → pK−μ+μ− decays</title>
<link>https://hdl.handle.net/1721.1/157948</link>
<description>Analysis of Λb0 → pK−μ+μ− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
The differential branching fraction and angular coefficients of Λb0 → pK−μ+μ− decays are measured in bins of the dimuon mass squared and dihadron mass. The analysis is performed using a data set corresponding to 9 fb−1 of integrated luminosity collected with the LHCb detector between 2011 and 2018. The data are consistent with receiving contributions from a mixture of Λ resonances with different spin-parity quantum numbers. The angular coefficients show a pattern of vector-axial vector interference that is a characteristic of the type of flavour-changing neutral-current transition relevant for these decays.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157948</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging the wakes of jets with energy-energy-energy correlators</title>
<link>https://hdl.handle.net/1721.1/157947</link>
<description>Imaging the wakes of jets with energy-energy-energy correlators
Bossi, Hannah; Kudinoor, Arjun S.; Moult, Ian; Pablos, Daniel; Rai, Ananya; Rajagopal, Krishna
As the partons in a high energy jet propagate through the droplet of quark-gluon plasma (QGP) produced in a heavy-ion collision they lose energy to, kick, and are kicked by the medium. The resulting modifications to the parton shower encode information about the microscopic nature of QGP. A direct consequence, however, is that the momentum and energy lost by the parton shower are gained by the medium and, since QGP is a strongly coupled liquid, this means that the jet excites a wake in the droplet of QGP. After freezeout, this wake becomes soft hadrons with net momentum in the jet direction, meaning that what an experimentalist later reconstructs as a jet includes hadrons originating from both the modified parton shower and its wake. This has made it challenging to find experimental observables that provide an unambiguous view of the dynamical response of a droplet of QGP to a jet shooting through it. Recent years have seen substantial advances in the theoretical and experimental understanding of the substructure of jets, in particular, using correlation functions, ⟨E(n⃗₁)⋯E(n⃗ₖ)⟩, of the energy flux operator in proton-proton collisions and, recently, in heavy-ion collisions. So far, such studies have focused primarily on the two-point correlator, which allows for the identification of the angular scale of the underlying dynamics. Higher-point correlators hold the promise of mapping out the dynamics themselves. In this paper we perform the first study of the shape-dependent three-point energy-energy-energy correlator in heavy-ion collisions. Using the Hybrid Model to simulate the interactions of high energy jets with the QGP medium, we show that the three-point correlator presents us with a striking new opportunity. We find that hadrons originating from wakes are the dominant contribution to the three-point correlator in the kinematic regime in which the three points are well-separated in angle, forming a roughly equilateral triangle. 
This equilateral region of the correlator is far from the region populated by collinear vacuum emissions, making it a canvas on which jet wakes are laid out, where experimentalists can map their shapes. Our work provides a key step towards the systematic use of energy correlators to image and unravel the dynamical response of a droplet of QGP that has been probed by a passing jet, and motivates numerous experimental and theoretical studies.
</description>
<pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157947</guid>
<dc:date>2024-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Multiple testing for signal-agnostic searches for new physics with machine learning</title>
<link>https://hdl.handle.net/1721.1/157946</link>
<description>Multiple testing for signal-agnostic searches for new physics with machine learning
Grosso, Gaia; Letizia, Marco
In this work, we address the question of how to enhance signal-agnostic searches by leveraging multiple testing strategies. Specifically, we consider hypothesis tests relying on machine learning, where model selection can introduce a bias towards specific families of new physics signals. Focusing on the New Physics Learning Machine, a methodology to perform a signal-agnostic likelihood-ratio test, we explore a number of approaches to multiple testing, such as combining p-values and aggregating test statistics. Our findings show that it is beneficial to combine different tests, characterised by distinct choices of hyperparameters, and that performances comparable to the best available test are generally achieved, while also providing a more uniform response to various types of anomalies. This study proposes a methodology that is valid beyond machine learning approaches and could in principle be applied to a larger class of model-agnostic analyses based on hypothesis testing.
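As one concrete illustration of the p-value combination strategy mentioned above, Fisher's method aggregates k independent p-values into a single chi-squared statistic (a generic sketch of the classical method, not the authors' implementation):

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    T = -2 * sum(ln p_i) follows a chi-squared distribution with
    2k degrees of freedom under the global null hypothesis."""
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    # Survival function of chi-squared with 2k dof (even dof has a
    # closed form): P(T > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)**i / i!
    half = t / 2.0
    sf = math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
    return t, sf

# Three tests with distinct hyperparameter choices yield three p-values:
t, p_global = fisher_combine([0.01, 0.20, 0.30])
```

Aggregating the test statistics themselves is the other route the abstract names; either way, combining tests hedges against any single hyperparameter choice being a poor match for the anomaly at hand.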
</description>
<pubDate>Sat, 04 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157946</guid>
<dc:date>2025-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>An empirical design theory for compact drip irrigation emitters</title>
<link>https://hdl.handle.net/1721.1/157945</link>
<description>An empirical design theory for compact drip irrigation emitters
Ghodgaonkar, Aditya; Welsh, Emily; Judge, Benjamin; Winter V, Amos G.
With freshwater reserves rapidly diminishing, sustainable irrigation technologies such as drip irrigation must be widely adopted to meet the food demand of a growing global population. Drip irrigation uses a network of pressurized tubes with flow-regulating devices called emitters to minimize conveyance losses, saving up to 65% water compared to flood and furrow irrigation. However, its widespread adoption remains limited due to its high initial capital costs, up to 55% of which are driven by the emitters and tubes. The plastic material consumed by the emitters and tubes is a major driver of their cost. To directly address this cost barrier, this paper details a hydraulic design theory for compact emitters having a common commercial architecture: uniform depth labyrinths with symmetric, triangular teeth. The theory uses geometric symmetry, manufacturing considerations, and clogging constraints to identify three design parameters in emitters that can be used to tune their hydraulic performance without significantly affecting their material volume: the tooth tip gap, labyrinth depth, and the number of tooth pairs. This knowledge allows designers to minimize emitter volume and set architecture a priori, and then use an empirically derived hydraulic model that uses the selected parameters as input arguments to tune flow rate independently. This ensures faster and simpler design iterations. The theory enabled a reduction in emitter material consumption by 67% compared to at least one commercial emitter, potentially cutting the initial capital cost of drip irrigation by up to 10%, making this already sustainable irrigation technology more globally accessible.
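Labyrinth emitters are conventionally characterized by a power-law pressure-flow relation, and the empirical hydraulic model described above plays this role with the selected geometric parameters as inputs. A minimal sketch of such a relation follows (the coefficient values are hypothetical placeholders, not fitted values from the paper):

```python
def emitter_flow(pressure_bar, k=0.6, x=0.5):
    """Common power-law form of an empirical emitter hydraulic model,
    q = k * P**x, with q in L/h and P in bar. An exponent near 0.5 is
    typical of turbulent labyrinth emitters; k lumps together geometric
    design parameters such as tooth tip gap, labyrinth depth, and the
    number of tooth pairs. Illustrative coefficients only."""
    return k * pressure_bar ** x

# With x = 0.5, doubling the pressure raises the flow by sqrt(2).
q1, q2 = emitter_flow(1.0), emitter_flow(2.0)
```

In a design workflow of the kind described, the architecture fixes the form of the relation, and the three tunable parameters move k (and weakly x) to hit a target flow rate without adding material volume.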
</description>
<pubDate>Sat, 04 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157945</guid>
<dc:date>2025-01-04T00:00:00Z</dc:date>
</item>
<item>
<title>Microbial methanogenesis fueled by freshwater infiltration and oil biodegradation in the Siljan impact structure, Sweden</title>
<link>https://hdl.handle.net/1721.1/157943</link>
<description>Microbial methanogenesis fueled by freshwater infiltration and oil biodegradation in the Siljan impact structure, Sweden
van Dam, Femke; Kietäväinen, Riikka; Westmeijer, George; Reinhardt, Manuel; Ono, Shuhei; Dopson, Mark; Ketzer, Marcelo; McIntosh, Jennifer C.; Drake, Henrik
Deeply fractured rocks of meteorite impact craters are suggested as prime niches for subsurface microbial colonization. Methane can be a product of such microbial communities and seeps of methane from impact craters on Earth are of strong interest as they act as analogs for Mars. Previous studies report signs of ancient microbial methanogenesis in the Devonian Siljan meteorite impact structure in Sweden, but the proportion of microbial methane, metabolic pathways, and potential modern activity remain elusive. In this study, gas composition, hydrochemistry, oil organic geochemistry, and microbial community analyses are reported in 400 m deep fractures of the Siljan impact structure. The results showed a dominantly microbial origin for methane, which was supported by highly negative δ13CCH4 and positive δ13CCO2 values along with multiply substituted isotopologues (Δ13CH3D) that indicated disequilibrium fractionation due to microbial kinetic isotope effects. The presence of C2 to C5 hydrocarbons suggested a minor thermogenic input in the gas mix. Characterization of the microbial community via 16S rRNA gene amplicon sequencing and real-time PCR indicated a low abundance of several methanogenic archaeal populations, which is common for settings with active methanogenesis. Evidence of oil biodegradation suggested that secondary microbial hydrocarbon utilization was involved in the methanogenesis. Low sulfate and high alkalinity in the groundwaters also suggested a dominantly microbial methane formation driven by infiltration of freshwater that was coupled to sulfate reduction and secondary utilization of early mature thermogenic hydrocarbons.
</description>
<pubDate>Fri, 03 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157943</guid>
<dc:date>2025-01-03T00:00:00Z</dc:date>
</item>
<item>
<title>Opening the AI Black Box: Distilling Machine-Learned Algorithms into Code</title>
<link>https://hdl.handle.net/1721.1/157939</link>
<description>Opening the AI Black Box: Distilling Machine-Learned Algorithms into Code
Michaud, Eric J.; Liao, Isaac; Lad, Vedang; Liu, Ziming; Mudide, Anish; Loughridge, Chloe; Guo, Zifan Carl; Kheirkhah, Tara Rezaei; Vukelić, Mateja; Tegmark, Max
Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm. As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub. We discuss opportunities and challenges for scaling up this approach to make machine-learned models more interpretable and trustworthy.
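The core move described above, converting a trained RNN into a finite state machine before applying symbolic regression, can be illustrated in miniature: given any black-box transition function with finitely many reachable states, the machine is recovered by enumerating states. The parity task below is a hypothetical stand-in for a learned RNN, and `extract_fsm` is an illustrative sketch, not the MIPS codebase:

```python
def extract_fsm(step, init_state, alphabet):
    """Enumerate the reachable states of a black-box transition
    function step(state, symbol) into an explicit finite state
    machine: a state list plus a transition table."""
    states = [init_state]
    table = {}
    frontier = [init_state]
    while frontier:
        s = frontier.pop()
        for a in alphabet:
            t = step(s, a)
            if t not in states:
                states.append(t)
                frontier.append(t)
            table[(s, a)] = t
    return states, table

# Hypothetical "learned" behaviour: running parity of the 1-bits seen so far.
states, table = extract_fsm(lambda s, a: s ^ a, 0, [0, 1])
```

Once the machine is explicit, its transition table is a small discrete dataset on which Boolean or integer symbolic regression can search for a compact formula, which is the distillation step the abstract describes.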
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157939</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>NiNC Catalysts in CO2-to-CO Electrolysis</title>
<link>https://hdl.handle.net/1721.1/157938</link>
<description>NiNC Catalysts in CO2-to-CO Electrolysis
Zhang, Hao; Qi, Menghui; Wang, Yong
CO2-to-CO electrolyzer technology converts carbon dioxide into carbon monoxide using electrochemical methods, offering significant environmental and energy benefits by aiding in greenhouse gas mitigation and promoting a carbon circular economy. A recent study by Strasser et al. in Nature Chemical Engineering presents a high-performance CO2-to-CO electrolyzer utilizing a NiNC catalyst with nearly 100% faradaic efficiency, employing innovative diagnostic tools like the carbon crossover coefficient (CCC) to address transport-related failures and optimize overall efficiency. Strasser’s research demonstrates the potential of NiNC catalysts, particularly NiNC-IMI, for efficient CO production in CO2-to-CO electrolyzers, highlighting their high selectivity and performance. However, challenges such as localized CO2 depletion and mass transport limitations underscore the need for further optimization and development of diagnostic tools like the CCC. Strategies for optimizing catalyst structure and operational parameters offer avenues for enhancing the performance and reliability of electrochemical CO2 reduction catalysts.
</description>
<pubDate>Thu, 26 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157938</guid>
<dc:date>2024-12-26T00:00:00Z</dc:date>
</item>
<item>
<title>Bias in machine learning applications to address non-communicable diseases at a population-level: a scoping review</title>
<link>https://hdl.handle.net/1721.1/157937</link>
<description>Bias in machine learning applications to address non-communicable diseases at a population-level: a scoping review
Birdi, Sharon; Rabet, Roxana; Durant, Steve; Patel, Atushi; Vosoughi, Tina; Shergill, Mahek; Costanian, Christy; Ziegler, Carolyn P.; Ali, Shehzad; Buckeridge, David; Ghassemi, Marzyeh; Gibson, Jennifer; John-Baptiste, Ava; Macklin, Jillian; McCradden, Melissa; McKenzie, Kwame
Background Machine learning (ML) is increasingly used in population and public health to support epidemiological studies, surveillance, and evaluation. Our objective was to conduct a scoping review to identify studies that use ML in population health, with a focus on its use in non-communicable diseases (NCDs). We also examine potential algorithmic biases in model design, training, and implementation, as well as efforts to mitigate these biases. Methods We searched the peer-reviewed, indexed literature using Medline, Embase, Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, CINAHL, Scopus, ACM Digital Library, Inspec, Web of Science’s Science Citation Index, Social Sciences Citation Index, and the Emerging Sources Citation Index, up to March 2022. Results The search identified 27 310 studies and 65 were included. Study aims were separated into algorithm comparison (n = 13, 20%) or disease modelling for population-health-related outputs (n = 52, 80%). We extracted data on NCD type, data sources, technical approach, possible algorithmic bias, and jurisdiction. Type 2 diabetes was the most studied NCD. The most common use of ML was for risk modeling. Mitigating bias was not extensively addressed, with most methods focused on mitigating sex-related bias. Conclusion This review examines current applications of ML in NCDs, highlighting potential biases and strategies for mitigation. Future research should focus on communicable diseases and the transferability of ML models in low and middle-income settings. Our findings can guide the development of guidelines for the equitable use of ML to improve population health outcomes.
</description>
<pubDate>Sat, 28 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157937</guid>
<dc:date>2024-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>Robust Reinforcement Learning Strategies with Evolving Curriculum for Efficient Bus Operations in Smart Cities</title>
<link>https://hdl.handle.net/1721.1/157936</link>
<description>Robust Reinforcement Learning Strategies with Evolving Curriculum for Efficient Bus Operations in Smart Cities
Tang, Yuhan; Qu, Ao; Jiang, Xuan; Mo, Baichuan; Cao, Shangqing; Rodriguez, Joseph; Koutsopoulos, Haris N; Wu, Cathy; Zhao, Jinhua
Public transit systems are critical to the quality of urban life, and enhancing their efficiency is essential for building cost-effective and sustainable smart cities. Historically, researchers sought reinforcement learning (RL) applications to mitigate bus bunching issues with holding strategies. Nonetheless, these attempts often led to oversimplifications and misalignment with the goal of reducing the total time passengers spend in the system, resulting in less robust or non-optimal solutions. In this study, we introduce a novel setting where each bus, supervised by an RL agent, can appropriately form aggregated policies from three strategies (holding, skipping a station, and turning around to serve the opposite direction). Because learning all of these strategies at once is difficult, we employ domain knowledge and develop a gradually expanding action space curriculum, enabling agents to learn the strategies incrementally. We incorporate Long Short-Term Memory (LSTM) in our model to capture the temporal interrelation among these actions. To address the inherent uncertainties of real-world traffic systems, we impose Domain Randomization (DR) on variables such as passenger demand and bus schedules. We conduct extensive numerical experiments integrating synthetic and real-world data to evaluate our model. Our methodology proves effective, enhancing bus schedule reliability and reducing total passenger waiting time by over 15%, thereby improving bus operation efficiency and smoothing bus operations in line with sustainability goals. This work highlights the potential of robust RL combined with curriculum learning for optimizing public transport in smart cities, offering a scalable solution for real-world multi-agent systems.
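A minimal sketch of the two training ingredients described above, the gradually expanding action space and domain randomization (the stage lengths, strategy ordering, and perturbation range are hypothetical choices for illustration):

```python
import random

# Strategies unlocked in stages: holding first, then station skipping,
# then turnaround. Ordering and thresholds are illustrative assumptions.
CURRICULUM = [["hold"], ["hold", "skip"], ["hold", "skip", "turnaround"]]

def allowed_actions(training_step, stage_length=10_000):
    """Gradually expanding action space: the agent is exposed to
    additional strategies as training progresses."""
    stage = min(training_step // stage_length, len(CURRICULUM) - 1)
    return CURRICULUM[stage]

def randomized_demand(base_rate, spread=0.3, rng=random):
    """Domain randomization: perturb the passenger arrival rate each
    episode so the learned policy stays robust to demand uncertainty."""
    return base_rate * rng.uniform(1.0 - spread, 1.0 + spread)
```

During training, each episode would query `allowed_actions` to mask the policy's output and `randomized_demand` to resample the simulator, so competence on the simple action set is in place before the harder strategies are introduced.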
</description>
<pubDate>Fri, 29 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157936</guid>
<dc:date>2024-11-29T00:00:00Z</dc:date>
</item>
<item>
<title>When Cities Go Nuclear: Exploring the Applications of Nuclear Batteries Toward Energy Transformation</title>
<link>https://hdl.handle.net/1721.1/157935</link>
<description>When Cities Go Nuclear: Exploring the Applications of Nuclear Batteries Toward Energy Transformation
Paul, Sanjana; Klimenka, Mikita; Duarte, Fabio; Crawford, Carmen; Gorman, Claire; Ratti, Carlo; Buongiorno, Jacopo
Global society faces the pressing question of how to eliminate reliance on fossil fuels while meeting increasing energy demand. In comparison to solar and wind energy, nuclear power has been largely ignored in urban studies research. However, nuclear energy has recently regained attention through the emergence of Small Modular Reactors (SMRs) and as decarbonization becomes increasingly urgent. To evaluate situations in which SMRs bring value to urban energy mixes, this paper focuses on Nuclear Batteries (NBs), a specific class of SMRs that can fit in standard shipping containers. First, we outline an evaluation framework for the use and application of NBs; second, we present use cases for NBs in real-world situations, from disaster relief to grid reinforcement; and third, we discuss the social challenges around this technology.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157935</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>Glioblastoma-cortical organoids recapitulate cell state heterogeneity and intercellular transfer</title>
<link>https://hdl.handle.net/1721.1/157902</link>
<description>Glioblastoma-cortical organoids recapitulate cell state heterogeneity and intercellular transfer
Mangena, Vamsi; Chanoch-Myers, Rony; Sartore, Rafaela; Paulsen, Bruna; Gritsch, Simon; Weisman, Hannah; Hara, Toshiro; Breakefield, Xandra O; Breyne, Koen; Regev, Aviv; Chung, Kwanghun; Arlotta, Paola; Tirosh, Itay; Suva, Mario L
Glioblastoma is characterized by heterogeneous malignant cells that are functionally integrated within the neuroglial microenvironment. Here, we model this ecosystem by growing glioblastoma into long-term cultured human cortical organoids that contain the major neuroglial cell types found in the cerebral cortex. Single-cell RNA-seq analysis suggests that, compared to matched gliomasphere models, glioblastoma cortical organoids (GCO) more faithfully recapitulate the diversity and expression programs of malignant cell states found in patient tumors. Additionally, we observe widespread transfer of glioblastoma transcripts and GFP proteins to non-malignant cells in the organoids. Mechanistically, this transfer involves extracellular vesicles and is biased towards defined glioblastoma cell states and astroglia cell types. These results extend previous glioblastoma-organoid modeling efforts and suggest widespread intercellular transfer in the glioblastoma neuroglial microenvironment.
</description>
<pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157902</guid>
<dc:date>2024-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences</title>
<link>https://hdl.handle.net/1721.1/157901</link>
<description>MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences
Nepal, Subigya; Pillai, Arvind; Campbell, William; Massachi, Talie; Heinz, Michael; Kunwar, Ashmita; Choi, Eunsol Soul; Xu, Xuhai "Orson"; Kuc, Joanna; Huckins, Jeremy; Holden, Jason; Preum, Sarah M.; Depp, Colin; Jacobson, Nicholas; Czerwinski, Mary; Granholm, Eric; Campbell, Andrew
Mental health concerns are prevalent among college students, highlighting the need for effective interventions that promote self-awareness and holistic well-being. MindScape pioneers a novel approach to AI-powered journaling by integrating passively collected behavioral patterns such as conversational engagement, sleep, and location with Large Language Models (LLMs). This integration creates a highly personalized and context-aware journaling experience, enhancing self-awareness and well-being by embedding behavioral intelligence into AI. We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect (7%), reducing negative affect (11%), loneliness (6%), and anxiety and depression, with a significant week-over-week decrease in PHQ-4 scores (-0.25 coefficient), alongside improvements in mindfulness (7%) and self-reflection (6%). The study highlights the advantages of contextual AI journaling, with participants particularly appreciating the tailored prompts and insights provided by the MindScape app. Our analysis also includes a comparison of responses to AI-driven contextual versus generic prompts, participant feedback insights, and proposed strategies for leveraging contextual AI journaling to improve well-being on college campuses. By showcasing the potential of contextual AI journaling to support mental health, we provide a foundation for further investigation into the effects of contextual AI journaling on mental health and well-being.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157901</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Detection: Towards Actionable Sensing Research in Clinical Mental Healthcare</title>
<link>https://hdl.handle.net/1721.1/157900</link>
<description>Beyond Detection: Towards Actionable Sensing Research in Clinical Mental Healthcare
Adler, Daniel; Yang, Yuewen; Viranda, Thalia; Xu, Xuhai; Mohr, David; Van Meter, Anna; Tartaglia, Julia; Jacobson, Nicholas; Wang, Fei; Estrin, Deborah; Choudhury, Tanzeem
Researchers in ubiquitous computing have long promised that passive sensing will revolutionize mental health measurement by detecting individuals in a population experiencing a mental health disorder or specific symptoms. Recent work suggests that detection tools do not generalize well when trained and tested in more heterogeneous samples. In this work, we contribute a narrative review and findings from two studies with 41 mental health clinicians to understand these generalization challenges. Our findings motivate research on actionable sensing, as an alternative to detection research, studying how passive sensing can be used alongside traditional mental health measures to support actions in clinical care. Specifically, we identify how passive sensing can support clinical actions by revealing patients' presenting problems for treatment and identifying targets for behavior change and symptom reduction, but passive data needs to be contextualized with patients to be appropriately interpreted and used in care. We conclude by suggesting research at the intersection of actionable sensing and mental healthcare, to align technical research in ubiquitous computing with clinical actions and needs.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157900</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors</title>
<link>https://hdl.handle.net/1721.1/157899</link>
<description>Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors
Chen, Wenqiang; Cheng, Jiaxuan; Wang, Leyao; Zhao, Wei; Matusik, Wojciech
Visual Question-Answering, a technology that generates textual responses from an image and a natural language question, has progressed significantly. Notably, it can aid in tracking and inquiring about daily activities, which is crucial in healthcare monitoring, especially for elderly patients or those with memory disabilities. However, video poses privacy concerns and has a limited field of view. This paper presents Sensor2Text, a model proficient in tracking daily activities and engaging in conversations using wearable sensors. The approach outlined here tackles several challenges, including the low information density of wearable sensor data, the insufficiency of a single wearable sensor for human activity recognition, and the model's limited capacity for Question-Answering and interactive conversations. To resolve these obstacles, transfer learning and student-teacher networks are utilized to leverage knowledge from visual-language models. Additionally, an encoder-decoder neural network model is devised to jointly process language and sensor data for conversational purposes. Furthermore, Large Language Models are utilized to enable interactive capabilities. The model showcases the ability to identify human activities and engage in Q&amp;A dialogues using various wearable sensor modalities. It performs comparably to or better than existing visual-language models in both captioning and conversational tasks. To our knowledge, this represents the first model capable of conversing about wearable sensor data, offering an innovative approach to daily activity tracking that addresses the privacy and field-of-view limitations associated with current vision-based solutions.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157899</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing</title>
<link>https://hdl.handle.net/1721.1/157898</link>
<description>EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing
Sen, Argha; Bandara, Nuwan; Gokarn, Ila; Kandappu, Thivya; Misra, Archan
Eye-tracking technology has gained significant attention in recent years due to its wide range of applications in human-computer interaction, virtual and augmented reality, and wearable health. Traditional RGB camera-based eye-tracking systems often struggle with poor temporal resolution and computational constraints, limiting their effectiveness in capturing rapid eye movements. To address these limitations, we propose EyeTrAES, a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement that shows significant kinematic variance. One of EyeTrAES's highlights is the use of a novel adaptive windowing/slicing algorithm that ensures just the right amount of descriptive asynchronous event data accumulation within an event frame, across a wide range of eye movement patterns. EyeTrAES then applies lightweight image processing functions over accumulated event frames from just a single eye to perform pupil segmentation and tracking (as opposed to gaze-based techniques that require simultaneous tracking of both eyes). We show that these two techniques boost pupil tracking fidelity by 6+%, achieving IoU~=92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives. We additionally demonstrate that the microscopic pupillary motion captured by EyeTrAES exhibits distinctive variations across individuals and can thus serve as a biometric fingerprint. For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics, comprising a sliding window of pupil (location, velocity, acceleration) triples. 
Experimental studies with two different datasets (capturing eye movement across a range of environmental contexts) demonstrate that the EyeTrAES-based authentication technique can simultaneously achieve high authentication accuracy (~=0.82) and low processing latency (~=12ms), and significantly outperform multiple state-of-the-art competitive baselines.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157898</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Vista: Machine Learning based Database Performance Troubleshooting Framework in Amazon RDS</title>
<link>https://hdl.handle.net/1721.1/157897</link>
<description>Vista: Machine Learning based Database Performance Troubleshooting Framework in Amazon RDS
Singh, Vikramank; Song, Zhao; Narayanaswamy, Balakrishnan (Murali); Vaidya, Kapil Eknath; Kraska, Tim
Database performance troubleshooting is a complex multi-step process that broadly involves three key stages: (a) Detection: determining what's wrong and when; (b) Root Cause Analysis (RCA): reasoning about why performance is poor; (c) Resolution: identifying a fix. A plethora of techniques exist to address each of these problems, but they hardly work in the real world at scale. First, real-world customer workloads are noisy, non-stationary, and quasi-periodic in nature, rendering traditional detectors ineffective. Second, real-world production databases execute a highly diverse set of queries that skew the database statistics into long-tail distributions, causing traditional RCA methods to fail. Third, these databases typically execute millions of such diverse queries every minute, rendering traditional methods inefficient when deployed at scale.&#13;
In this paper we describe Vista, a machine learning based performance troubleshooting framework for databases, and dive deep into how it addresses the three real-world problems outlined above. Vista deploys a deep auto-regressive model trained on a large and diverse Amazon Relational Database Service (RDS) fleet, with custom skip connections and periodicity-alignment features to model long-range and varying periodicity in customer workloads, and detects performance bottlenecks in the form of outliers. Furthermore, it efficiently filters only the top few dominating SQL queries from millions in a problematic workload, and uses a robust causal inference framework to identify the culprit queries and their statistics, leading to low false-positive and false-negative rates. Currently, Vista runs on hundreds of thousands of RDS databases and analyzes millions of workloads every day, bringing down troubleshooting time for RDS customers from hours to seconds. Finally, we describe several challenges and lessons learned from implementing and deploying Vista at Amazon scale.
SoCC ’24, November 20–22, 2024, Redmond, WA, USA
</description>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157897</guid>
<dc:date>2024-11-20T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Safer Heuristics With Xplain</title>
<link>https://hdl.handle.net/1721.1/157896</link>
<description>Towards Safer Heuristics With Xplain
Karimi, Pantea; Pirelli, Solal; Kakarla, Siva Kesava Reddy; Beckett, Ryan; Segarra, Santiago; Li, Beibin; Namyar, Pooria; Arzani, Behnaz
Many problems that cloud operators solve are computationally expensive, and operators often use heuristic algorithms (which are faster and scale better than optimal algorithms) to solve them more efficiently. Heuristic analyzers enable operators to find when and by how much their heuristics underperform. However, these tools do not provide enough detail for operators to mitigate the heuristic's impact in practice: they only discover a single input instance that causes the heuristic to underperform (not the full set), and they do not explain why.&#13;
We propose XPlain, a tool that extends these analyzers and helps operators understand when and why their heuristics underperform. We present promising initial results that show such an extension is viable.
HOTNETS ’24, November 18–19, 2024, Irvine, CA, USA
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157896</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>The Case for Decentralized Fallback Networks</title>
<link>https://hdl.handle.net/1721.1/157895</link>
<description>The Case for Decentralized Fallback Networks
Lynch, James; Liu, Ziqian; Li, Chenning; Ghobadi, Manya; Balakrishnan, Hari
This paper argues that network and application delivery infrastructures have become highly centralized and are more vulnerable to attacks and disasters than is desirable. It proposes a research agenda for decentralized fallback networks and focuses on a key component---a city-scale decentralized network using existing Wi-Fi access points, which are deployed across almost all buildings in cities. It proposes a routing system that uses information about buildings from geospatial maps instead of traditional routing mechanisms to scale well to millions of Wi-Fi nodes.
HOTNETS ’24, November 18–19, 2024, Irvine, CA, USA
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157895</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>MLTCP: A Distributed Technique to Approximate Centralized Flow Scheduling For Machine Learning</title>
<link>https://hdl.handle.net/1721.1/157894</link>
<description>MLTCP: A Distributed Technique to Approximate Centralized Flow Scheduling For Machine Learning
Rajasekaran, Sudarsanan; Narang, Sanjoli; Zabreyko, Anton A.; Ghobadi, Manya
This paper argues that congestion control protocols in machine learning datacenters sit at a sweet spot between centralized and distributed flow scheduling solutions. We present MLTCP, a technique to augment today's congestion control algorithms to approximate an interleaved centralized flow schedule. At the heart of MLTCP lies a straightforward principle based on a key conceptual insight: by scaling the congestion window size (or sending rate) based on the number of bytes sent at each iteration, MLTCP flows eventually converge into a schedule that reduces network contention. We demonstrate that MLTCP follows a gradient-descent-like trend, with a step taken at every training (or fine-tuning) iteration towards reducing network congestion among competing jobs.
HOTNETS ’24, November 18–19, 2024, Irvine, CA, USA
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157894</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Imaging the initial condition of heavy-ion collisions and nuclear structure across the nuclide chart</title>
<link>https://hdl.handle.net/1721.1/157891</link>
<description>Imaging the initial condition of heavy-ion collisions and nuclear structure across the nuclide chart
Jia, Jiangyong; Giacalone, Giuliano; Bally, Benjamin; Brandenburg, James D.; Heinz, Ulrich; Huang, Shengli; Lee, Dean; Lee, Yen-Jie; Loizides, Constantin; Li, Wei; Luzum, Matthew; Nijs, Govert; Noronha-Hostler, Jacquelyn; Ploskon, Mateusz; van der Schee, Wilke; Schenke, Bjoern
High-energy nuclear collisions encompass three key stages: the structure of the colliding nuclei, informed by low-energy nuclear physics; the initial condition, leading to the formation of quark–gluon plasma (QGP); and the hydrodynamic expansion and hadronization of the QGP, leading to the final-state hadron distributions that are observed experimentally. Recent advances in both experimental and theoretical methods have ushered in a precision era of heavy-ion collisions, enabling an increasingly accurate understanding of these stages. However, most approaches involve simultaneously determining both QGP properties and initial conditions from a single collision system, creating complexity due to the coupled contributions of these stages to the final-state observables. To avoid this, we propose leveraging established knowledge of low-energy nuclear structures and hydrodynamic observables to independently constrain the QGP’s initial condition. By conducting comparative studies of collisions involving isobar-like nuclei—species with similar mass numbers but different ground-state geometries—we can disentangle the initial condition’s impacts from the QGP properties. This approach not only refines our understanding of the initial stages of the collisions but also turns high-energy nuclear experiments into a precision tool for imaging nuclear structures, offering insights that complement traditional low-energy approaches. Opportunities for carrying out such comparative experiments at the Large Hadron Collider and other facilities could significantly advance both high-energy and low-energy nuclear physics. Additionally, this approach has implications for the future electron-ion collider. While the possibilities are extensive, we focus on selected proposals that could benefit both the high-energy and low-energy nuclear physics communities. Originally prepared as input for the long-range plan of U.S. nuclear physics, this white paper reflects the status as of September 2022, with a brief update on developments since then.
</description>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157891</guid>
<dc:date>2024-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Anomaly-aware summary statistic from data batches</title>
<link>https://hdl.handle.net/1721.1/157890</link>
<description>Anomaly-aware summary statistic from data batches
Grosso, G.
Signal-agnostic data exploration based on machine learning could unveil very subtle statistical deviations of collider data from the expected Standard Model of particle physics. The beneficial impact of a large training sample on machine learning solutions motivates the exploration of increasingly large and inclusive samples of acquired data with resource-efficient computational methods. In this work we consider the New Physics Learning Machine (NPLM), a multivariate goodness-of-fit test built on the Neyman-Pearson maximum-likelihood-ratio construction, and we address the problem of testing large samples under computational and storage resource constraints. We propose to perform parallel NPLM routines over batches of the data, and to combine them by locally aggregating over the data-to-reference density ratios learnt by each batch. The resulting data hypothesis defining the likelihood-ratio test is thus shared over the batches, and complies with the assumption that the expected rate of new physical processes is time invariant. We show that this method outperforms the simple sum of the independent tests run over the batches, and can recover, or even surpass, the sensitivity of the single test run over the full data. Besides the significant advantage for the offline application of NPLM to large samples, the proposed approach offers new prospects toward the use of NPLM to construct anomaly-aware summary statistics in quasi-online data streaming scenarios.
</description>
<pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157890</guid>
<dc:date>2024-12-12T00:00:00Z</dc:date>
</item>
<item>
<title>Study of the rare decay J/ψ → μ+μ−μ+μ−</title>
<link>https://hdl.handle.net/1721.1/157889</link>
<description>Study of the rare decay J/ψ → μ+μ−μ+μ−
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
The rare electromagnetic J/ψ → μ+μ−μ+μ− decay is observed with a significance greatly exceeding the discovery threshold, using proton-proton collision data collected by the LHCb experiment during 2016–2018 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb−1. The rate of this decay is measured relative to that of the J/ψ → μ+μ− mode. Using the QED model for the four-muon decay in the efficiency estimation, its branching fraction is determined to be B(J/ψ → μ+μ−μ+μ−) = (1.13 ± 0.10 ± 0.05 ± 0.01) × 10−6, where the uncertainties are statistical, systematic, and due to the uncertainty on the branching fraction of the J/ψ → μ+μ− decay.
</description>
<pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157889</guid>
<dc:date>2024-12-10T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of boosted Higgs bosons produced via vector boson fusion or gluon fusion in the H → bb¯ decay mode using LHC proton-proton collision data at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157887</link>
<description>Measurement of boosted Higgs bosons produced via vector boson fusion or gluon fusion in the H → bb¯ decay mode using LHC proton-proton collision data at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
A measurement is performed of Higgs bosons produced with high transverse momentum (pT) via vector boson or gluon fusion in proton-proton collisions. The result is based on a data set with a center-of-mass energy of 13 TeV collected in 2016–2018 with the CMS detector at the LHC and corresponds to an integrated luminosity of 138 fb−1. The decay of a high-pT Higgs boson to a boosted bottom quark-antiquark pair is selected using large-radius jets and employing jet substructure and heavy-flavor taggers based on machine learning techniques. Independent regions targeting the vector boson and gluon fusion mechanisms are defined based on the topology of two quark-initiated jets with large pseudorapidity separation. The signal strengths for both processes are extracted simultaneously by performing a maximum likelihood fit to data in the large-radius jet mass distribution. The observed signal strengths relative to the standard model expectation are 4.9 +1.9/−1.6 and 1.6 +1.7/−1.5 for the vector boson and gluon fusion mechanisms, respectively. A differential cross section measurement is also reported in the simplified template cross section framework.
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157887</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>How information about historic carbon emissions affects support for climate aid: evidence from a survey experiment</title>
<link>https://hdl.handle.net/1721.1/157886</link>
<description>How information about historic carbon emissions affects support for climate aid: evidence from a survey experiment
Charnysh, Volha; Kalow, Jared; Lieberman, Evan; Walk, Erin
In recent years, international climate negotiations have reached increasing consensus that the wealthiest countries should make significant financial contributions to offset the damages caused by the climate crisis in poorer countries. Proponents have justified such action based on wealthy countries’ disproportionate responsibility for global warming in the form of past emissions. However, in democratic countries such as the United States, it remains uncertain whether such messages can affect public opinion, especially across partisan lines. We conducted a pre-registered survey, recruiting from a national online pool (N = 5,002), with a built-in experiment to evaluate the effectiveness of alternative communication strategies associated with historic carbon emissions in increasing support for climate aid. We find that specific attribution claims reflecting a climate justice perspective do boost support for more generous climate aid, but the effects are largely driven by Democrats. We also find that global solidarity frames emphasizing shared responsibility did not affect support for climate aid. Our results have important implications for climate advocacy and our understanding of climate-related attitudes.
</description>
<pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157886</guid>
<dc:date>2024-12-11T00:00:00Z</dc:date>
</item>
<item>
<title>Study of charmonium production via the decay to p p¯ at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157885</link>
<description>Study of charmonium production via the decay to p p¯ at √s = 13 TeV
LHCb Collaboration
The charmonium production cross-section in proton–proton collisions is measured at the centre-of-mass energy √s = 13 TeV using decays to the pp̄ final state. The study is performed using a data sample corresponding to an integrated luminosity of 2.2 fb−1 collected in 2018 with the LHCb detector. The production cross-section of the ηc meson is measured in a rapidity range of 2.0 &lt; y &lt; 4.0 and in a transverse momentum range of 5.0 &lt; pT &lt; 20.0 GeV/c, which is extended compared with previous LHCb analyses. The differential cross-section is measured in bins of pT and, for the first time, of y. Upper limits, at 90% and 95% confidence levels, on the ηc(2S) and hc(1P) prompt production cross-sections are determined for the first time.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157885</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Sampling from convex sets with a cold start using multiscale decompositions</title>
<link>https://hdl.handle.net/1721.1/157860</link>
<description>Sampling from convex sets with a cold start using multiscale decompositions
Narayanan, Hariharan; Rajaraman, Amit; Srivastava, Piyush
A standard approach for sampling approximately uniformly from a convex body K ⊆ R^n is to run a random walk within K. The requirement is that, starting from a suitable initial distribution, the random walk should “mix rapidly”, i.e., after a number of steps that is polynomial in n and the aspect ratio R/r (here, K is assumed to contain a ball of radius r and to be contained within a ball of radius R), the distribution of the random walk should come close to the uniform distribution π_K on K. Different random walks differ in aspects such as the ease of implementation of each step, or suitability for a specific class of convex bodies. Therefore, the rapid mixing of a wide variety of random walks on convex bodies has been studied. Many proofs of rapid mixing of such random walks, however, require that the initial distribution of the random walk is not too different from the target distribution π_K. In particular, they require that the probability density function of the initial distribution with respect to the uniform distribution π_K on K must be bounded above by poly(n): this is called a warm start. Achieving such a warm start often requires a non-trivial pre-processing step before the random walk can be started. This motivates the problem of proving rapid mixing from “cold starts”, i.e., when the density of the initial distribution with respect to π_K can be as high as exp(poly(n)). In contrast to warm starts, a cold start is usually trivial to achieve. However, rapid mixing from a cold start may not hold for every random walk, e.g., the well-known “ball walk” does not have rapid mixing from an arbitrary cold start. On the other hand, for the “hit-and-run” random walk, Lovász and Vempala proved rapid mixing from a cold start.
For the related coordinate hit-and-run (CHR) random walk, which has been found to be promising in computational experiments, a rapid mixing result starting from a warm start was proven only recently, while the question of whether CHR mixes rapidly from a cold start remained open. In this paper, we construct a family of Markov chains inspired by classical multiscale decompositions of subsets of R^n into countably many axis-aligned cubes. We show that even with a cold start, the mixing times of these chains are bounded by a polynomial in n and the aspect ratio of the body. Our main technical ingredient is an isoperimetric inequality for K with respect to a metric that magnifies distances between points that are close to the boundary of K. As a byproduct of the analysis of this new family of chains, we show that the coordinate hit-and-run (CHR) random walk also mixes rapidly from a cold start, and also from any point that is not too close to the boundary of the body.
</description>
<pubDate>Fri, 13 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157860</guid>
<dc:date>2024-12-13T00:00:00Z</dc:date>
</item>
<item>
<title>Classical correspondence beyond the Ehrenfest time for open quantum systems with general Lindbladians</title>
<link>https://hdl.handle.net/1721.1/157859</link>
<description>Classical correspondence beyond the Ehrenfest time for open quantum systems with general Lindbladians
Hernández, Felipe; Ranard, Daniel; Riedel, C. J.
Quantum and classical systems evolving under the same formal Hamiltonian H may exhibit dramatically different behavior after the Ehrenfest timescale t_E ∼ log(ħ^{-1}), even as ħ → 0. Coupling the system to a Markovian environment results in a Lindblad equation for the quantum evolution. Its classical counterpart is given by the Fokker–Planck equation on phase space, which describes Hamiltonian flow with friction and diffusive noise. The quantum and classical evolutions may be compared via the Wigner–Weyl representation. Due to decoherence, they are conjectured to match closely for times far beyond the Ehrenfest timescale as ħ → 0. We prove a version of this correspondence, bounding the error between the quantum and classical evolutions for any sufficiently regular Hamiltonian H(x, p) and Lindblad functions L_k(x, p). The error is small when the strength of the diffusion D associated to the Lindblad functions satisfies D ≫ ħ^{4/3}, in particular allowing vanishing noise in the classical limit. Our method uses a time-dependent semiclassical mixture of variably squeezed Gaussian states. The states evolve according to a local harmonic approximation to the Lindblad dynamics constructed from a second-order Taylor expansion of the Lindbladian. Both the exact quantum trajectory and its classical counterpart can be expressed as perturbations of this semiclassical mixture, with the errors bounded using Duhamel’s principle. We present heuristic arguments suggesting the 4/3 exponent is optimal and defines a boundary in the sense that asymptotically weaker diffusion permits a breakdown of quantum-classical correspondence at the Ehrenfest timescale. Our presentation aims to be comprehensive and accessible to both mathematicians and physicists. In a shorter companion paper, we treat the special case of Hamiltonians that decompose into kinetic and potential energy with linear Lindblad operators, with explicit bounds that can be applied directly to physical systems.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157859</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>Beam heating explains critical current suppression measured during ion irradiation of REBCO tapes</title>
<link>https://hdl.handle.net/1721.1/157858</link>
<description>Beam heating explains critical current suppression measured during ion irradiation of REBCO tapes
Devitre, Alexis; Fischer, David; Riva, N.; Rae, M.; Kortman, Lauryn; Woller, Kevin; Fisher, Zoe; Short, Michael; Whyte, Dennis; Hartwig, Zachary
Reports of critical current (Ic) suppression during cryogenic ion irradiation of REBCO tapes have raised concerns for the operational margins of fusion power plant (FPP) magnets. However, the data remain inconclusive regarding beam heating due to the difficulty of measuring local temperatures with contact probes. This leaves a critical knowledge gap concerning the mechanism behind Ic suppression, and whether the so-called beam-on effect is to be expected under neutron irradiation during FPP operation. In this paper, we show that Ic suppression is independent of atomic displacement rate in the REBCO layer, the latter of which increases twelve-fold as we reduce the beam energy from 2400 to 800 keV. At fixed power, we observe statistically identical suppression with 150 keV protons, which do not have enough energy to reach the REBCO layer, refuting hypotheses about beam-on effects being caused by nuclear displacements or direct ion-Cooper-pair interactions. These results show that REBCO temperature rise alone can explain Ic suppression, leaving little to no margin for alternative mechanisms. With this insight, we developed a method to measure beam spot temperature that does not depend on the specific installation of our temperature sensor. With this new method, we measured the temperature gradient across the tape during irradiation and found that thermal resistance at the tape/target interface is the controlling variable in Ic suppression. As such, accelerator-based facilities aiming to reproduce the operation of REBCO magnets in a nuclear fusion environment should find strategies to minimize interface thermal resistance. Most importantly, we find that the dose rates expected in an FPP will not change Ic due to ballistic radiation damage or ion-Cooper-pair interactions, allowing us to safely ignore these effects when designing FPP magnets.
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157858</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Modular Verification of Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation</title>
<link>https://hdl.handle.net/1721.1/157857</link>
<description>Modular Verification of Secure and Leakage-Free Systems: From Application Specification to Circuit-Level Implementation
Athalye, Anish; Corrigan-Gibbs, Henry; Kaashoek, Frans; Tassarotti, Joseph; Zeldovich, Nickolai
Parfait is a framework for proving that an implementation of a hardware security module (HSM) leaks nothing more than what is mandated by an application specification. Parfait proofs cover the software and the hardware of an HSM, which catches bugs above the cycle-level digital circuit abstraction, including timing side channels. Parfait's contribution is a scalable approach to proving security and non-leakage by using intermediate levels of abstraction and relating them with transitive information-preserving refinement. This enables Parfait to use different techniques to verify the implementation at different levels of abstraction, reuse existing verified components such as CompCert, and automate parts of the proof, while still providing end-to-end guarantees. We use Parfait to verify four HSMs, including an ECDSA certificate-signing HSM and a password-hashing HSM, on top of the OpenTitan Ibex and PicoRV32 processors. Parfait provides strong guarantees for these HSMs: for instance, it proves that the ECDSA-on-Ibex HSM implementation---2,300 lines of code and 13,500 lines of Verilog---leaks nothing more than what is allowed by a 40-line specification of its behavior.
SOSP ’24, November 4–6, 2024, Austin, TX
</description>
<pubDate>Mon, 04 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157857</guid>
<dc:date>2024-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Unifying serverless and microservice workloads with SigmaOS</title>
<link>https://hdl.handle.net/1721.1/157856</link>
<description>Unifying serverless and microservice workloads with SigmaOS
Szekely, Ariel; Belay, Adam; Morris, Robert; Kaashoek, M. Frans
Many cloud applications use both serverless functions, for bursts of stateless parallel computation, and container orchestration, for long-running microservices and tasks that need to interact. Ideally a single platform would offer the union of these systems' capabilities, but neither is sufficient to act as that single platform: serverless functions are lightweight but cannot act as servers with long-term state, while container orchestration offers general-purpose computation but instance start-up takes too long to support burst parallelism.&#13;
σOS is a new multi-tenant cloud operating system that combines the best of container orchestration and serverless in one platform with one API. σOS computations, called procs, can be long-running, stateful, and interact with each other, making them a good match for both serverless and microservice tasks. A key aspect of the σOS design is its cloud-centric API, which provides flexible management of computation, a novel abstraction for communication endpoints, σEPs---which allow procs of a tenant to communicate efficiently but prohibit procs from sending packets to other tenants---and a flexible naming system to name, for example, σEPs.&#13;
Quick proc start-up is important for serverless uses. A key enabling observation is that both serverless and microservice applications rely on cloud services for much of the work traditionally done by the local OS (e.g., access to durable storage and additional compute resources). σOS exploits this observation by providing only a small and generic local operating system image to each proc, which can be created much more quickly than a container orchestration instance since σOS need not install application-specific filesystem content or (due to σOS's σEPs) configure an isolated overlay network.&#13;
Microbenchmarks show that σOS can cold start a proc in 7.7 msec and can create 36,650 procs per second, distributing them over a 24-machine cluster. An evaluation of σOS with two microservice applications from DeathStarBench, a MapReduce application, and an image processing benchmark, shows that the σOS API supports both microservices and lambda-style computations, and provides better performance than corresponding versions on AWS Lambda and Kubernetes.
SOSP ’24, November 4–6, 2024, Austin, TX, USA
</description>
<pubDate>Mon, 04 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157856</guid>
<dc:date>2024-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Procedural Material Generation with Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/157855</link>
<description>Procedural Material Generation with Reinforcement Learning
Li, Beichen; Hu, Yiwei; Guerrero, Paul; Hasan, Milos; Shi, Liang; Deschaintre, Valentin; Matusik, Wojciech
Modern 3D content creation heavily relies on procedural assets. In particular, procedural materials are ubiquitous in the industry, but their manipulation remains challenging. Previous work conditionally generates procedural graphs that match a given input image. However, the parameter generation step limits how accurately the generated graph matches the input image, due to a reliance on supervision with scarcely available procedural data. We propose to improve parameter prediction accuracy for image-conditioned procedural material generation by leveraging reinforcement learning (RL) and present the first RL approach for procedural materials. RL circumvents the limited availability of procedural data, the domain gap between real and synthetic materials, and the need for end-to-end differentiable loss functions. Given a target image, we retrieve a procedural material and use an RL-trained transformer model to predict a set of parameters that reconstruct the target image as closely as possible. We show that using RL significantly improves parameter prediction to match a given target image compared to supervised methods on both synthetic and real target images.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157855</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Medial Skeletal Diagram: A Generalized Medial Axis Approach for Compact 3D Shape Representation</title>
<link>https://hdl.handle.net/1721.1/157854</link>
<description>Medial Skeletal Diagram: A Generalized Medial Axis Approach for Compact 3D Shape Representation
Guo, Minghao; Wang, Bohan; Matusik, Wojciech
We propose the Medial Skeletal Diagram, a novel skeletal representation that tackles the prevailing issues around skeleton sparsity and reconstruction accuracy in existing skeletal representations. Our approach augments the continuous elements in the medial axis representation to effectively shift the complexity away from the discrete elements. To that end, we introduce generalized enveloping primitives, an enhancement over the standard primitives in the medial axis, which ensure efficient coverage of intricate local features of the input shape and substantially reduce the number of discrete elements required. Moreover, we present a computational framework for constructing a medial skeletal diagram from an arbitrary closed manifold mesh. Our optimization pipeline ensures that the resulting medial skeletal diagram comprehensively covers the input shape with the fewest primitives. Additionally, each optimized primitive undergoes a post-refinement process to guarantee an accurate match with the source mesh in both geometry and tessellation. We validate our approach on a comprehensive benchmark of 100 shapes, demonstrating the sparsity of the discrete elements and superior reconstruction accuracy across a variety of cases. Finally, we exemplify the versatility of our representation in downstream applications such as shape generation, mesh decomposition, shape optimization, mesh alignment, mesh compression, and user-interactive design.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157854</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>UFO Instruction Graphs Are Machine Knittable</title>
<link>https://hdl.handle.net/1721.1/157853</link>
<description>UFO Instruction Graphs Are Machine Knittable
Lin, Jenny; Ikarashi, Yuka; Bernstein, Gilbert; McCann, James
Programming low-level controls for knitting machines is a meticulous, time-consuming task that demands specialized expertise. Recently, there has been a shift towards automatically generating low-level knitting machine programs from high-level knit representations that describe knit objects in a more intuitive, user-friendly way. Current high-level systems trade off&#13;
expressivity for ease-of-use, requiring ad-hoc trapdoors to access the full space of machine capabilities, or eschewing completeness in the name of utility. Thus, advanced techniques either require ad-hoc extensions from domain experts, or are entirely unsupported. Furthermore, errors may emerge during the compilation from knit object representations to machine instructions. While the generated program may describe a valid machine control sequence, the fabricated object is topologically different from the specified input, with little recourse for understanding and fixing the issue.&#13;
&#13;
To address these limitations, we introduce instruction graphs, an intermediate representation capable of capturing the full range of machine knitting programs. We define a semantic mapping from instruction graphs to fenced tangles, which makes them compatible with the established formal semantics for machine knitting instructions. We establish a semantics-preserving bijection between machine knittable instruction graphs and knit programs that proves three properties – upward, forward, and ordered (UFO) – are both necessary and sufficient to ensure the existence of a machine knitting program that can fabricate the fenced tangle denoted by the graph. As a proof-of-concept, we implement an instruction graph editor and compiler that allows a user to transform an instruction graph into UFO presentation and then compile it to a machine program, all while maintaining semantic equivalence. In addition, we use the UFO properties to more precisely characterize the limitations of existing compilers. This work lays the groundwork for more expressive and reliable automated knitting machine programming systems by providing a formal characterization of machine knittability.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157853</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>All you need is rotation: Construction of developable strips</title>
<link>https://hdl.handle.net/1721.1/157852</link>
<description>All you need is rotation: Construction of developable strips
Maekawa, Takashi; Scholz, Felix
We present a novel approach to generate developable strips along a space curve. The key idea of the new method is to use the rotation angle between the Frenet frame of the input space curve, and its Darboux frame of the curve on the resulting developable strip as a free design parameter, thereby revolving the strip around the tangential axis of the input space curve. This angle is not restricted to be constant but it can be any differentiable function defined on the curve, thereby creating a large design space of developable strips that share a common directrix curve. The range of possibilities for choosing the rotation angle is diverse, encompassing constant angles, linearly varying angles, sinusoidal patterns, and even solutions derived from initial value problems involving ordinary differential equations. This enables the potential of the proposed method to be used for a wide range of practical applications, spanning fields such as architectural design, industrial design, and papercraft modeling. In our computational and physical examples, we demonstrate the flexibility of the method by constructing, among others, toroidal and helical windmill blades for papercraft models, curved foldings, triply orthogonal structures, and developable strips featuring a log-aesthetic directrix curve.
</description>
<pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157852</guid>
<dc:date>2024-12-19T00:00:00Z</dc:date>
</item>
<item>
<title>Paradoxes of Openness: Trans Experiences in Open Source Software</title>
<link>https://hdl.handle.net/1721.1/157851</link>
<description>Paradoxes of Openness: Trans Experiences in Open Source Software
Frluckaj, Hana; Stevens, Nikki; Howison, James; Dabbish, Laura
In recent years, concerns have increased over the lack of contributor diversity in open source software (OSS), despite its status as a paragon of open collaboration. OSS is an important form of digital infrastructure and part of a career path for many developers. While there exists a growing body of literature on cisgender women’s under-representation in OSS, the experiences of contributors from other marginalized groups are comparatively absent from the literature. Such is the case for trans contributors, a historically influential group in OSS. In this study, we interviewed 21 trans participants to understand and represent their experiences in the OSS literature. From their experiences, we theorize two related paradoxes of openness in OSS: the paradox of openness and display and the paradox of openness and governance. In an increasingly violent world for trans people, we draw on our theorizing to build recommendations for more inclusive and safer OSS projects for contributors.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157851</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Dittos: Personalized, Embodied Agents That Participate in Meetings When You Are Unavailable</title>
<link>https://hdl.handle.net/1721.1/157850</link>
<description>Dittos: Personalized, Embodied Agents That Participate in Meetings When You Are Unavailable
Leong, Joanne; Tang, John; Cutrell, Edward; Junuzovic, Sasa; Baribault, Gregory; Inkpen, Kori
Imagine being able to send a personalized embodied agent to meetings you are unable to attend. This paper explores the idea of a Ditto—an agent that visually resembles a person, sounds like them, possesses knowledge about them, and can represent them in meetings. This paper reports on results from two empirical investigations: 1) focus group sessions with six groups (n=24) and 2) a Wizard of Oz (WOz) study with 10 groups (n=39) recruited from within a large technology company. Results from the focus group sessions provide insights on what contexts are appropriate for Dittos, and issues around social acceptability and representation risk. The focus group results also provide feedback on visual design characteristics for Dittos. In the WOz study, teams participated in meetings with two different embodied agents: a Ditto and a Delegate (an agent which did not resemble the absent person). Insights from this research demonstrate the impact these embodied agents can have in meetings and highlight that Dittos in particular show promise in evoking feelings of presence and trust, as well as informing decision making. These results also highlight issues related to relationship dynamics such as maintaining social etiquette, managing one's professional reputation, and upholding accountability. Overall, our investigation provides early evidence that Dittos could be beneficial to represent users when they are unable to be present but also outlines many factors that need to be carefully considered to successfully realize this vision.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157850</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Chillbot: Content Moderation in the Backchannel</title>
<link>https://hdl.handle.net/1721.1/157849</link>
<description>Chillbot: Content Moderation in the Backchannel
Seering, Joseph; Khadka, Manas; Haghighi, Nava; Yang, Tanya; Xi, Zachary; Bernstein, Michael
Moderating online spaces effectively is not a matter of simply taking down content: moderators also provide private feedback and defuse situations before they cross the line into harm. However, moderators have little tool support for these activities, which often occur in the backchannel rather than in front of the entire community. In this paper, we introduce Chillbot, a moderation tool for Discord designed to facilitate backchanneling from moderators to users. With Chillbot, moderators gain the ability to send rapid anonymous feedback responses to situations where removal or formal punishment is too heavy-handed to be appropriate, helping educate users about how to improve their behavior while avoiding direct confrontations that can put moderators at risk. We evaluated Chillbot through a two week field deployment on eleven Discord servers ranging in size from 25 to over 240,000 members. Moderators in these communities used Chillbot more than four hundred times during the study, and moderators from six of the eleven servers continued using the tool past the end of the formal study period. Based on this deployment, we describe implications for the design of a broader variety of means by which moderation tools can help shape communities' norms and behavior.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157849</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Anonymization of Voices in Spaces for Civic Dialogue: Measuring Impact on Empathy, Trust, and Feeling Heard</title>
<link>https://hdl.handle.net/1721.1/157848</link>
<description>Anonymization of Voices in Spaces for Civic Dialogue: Measuring Impact on Empathy, Trust, and Feeling Heard
Kang, Wonjune; Hughes, Margaret; Roy, Deb
Anonymity is a powerful component of many participatory media platforms that can afford people greater freedom of expression and protection from external coercion and interference. However, it can be difficult to effectively implement on platforms that leverage spoken language due to distinct biomarkers present in the human voice. In this work, we explore the use of voice anonymization methods within the context of a technology-enhanced civic dialogue network based in the United States, whose purpose is to increase feelings of agency and being heard within civic processes. Specifically, we investigate the use of two different speech transformation and synthesis methods for anonymization: voice conversion (VC) and text-to-speech (TTS). Through a series of two studies, we examine the impact that each method has on 1) the empathy and trust that listeners feel towards a person sharing a personal story, and 2) a speaker's own perception of being heard, finding that voice conversion is an especially suitable method for our purposes. Our findings open up interesting potential research directions related to anonymous spoken discourse, as well as additional ways of engaging with voice-based civic technologies.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157848</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Gênero e Feminismos no Ensino de Relações Internacionais no Brasil</title>
<link>https://hdl.handle.net/1721.1/157847</link>
<description>Gênero e Feminismos no Ensino de Relações Internacionais no Brasil
Jungs de Almeida, Alessandra
</description>
<pubDate>Sun, 01 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157847</guid>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dados Ausentes e Contradados</title>
<link>https://hdl.handle.net/1721.1/157846</link>
<description>Dados Ausentes e Contradados
Cruxên, I; Jungs de Almeida, A; Klein, L; D’Ignazio, C
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157846</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>"Come to us first": Centering Community Organizations in Artificial Intelligence for Social Good Partnerships</title>
<link>https://hdl.handle.net/1721.1/157845</link>
<description>"Come to us first": Centering Community Organizations in Artificial Intelligence for Social Good Partnerships
Lin, Hongjin; Karusala, Naveena; Okolo, Chinasa; D'Ignazio, Catherine; Gajos, Krzysztof
Artificial Intelligence for Social Good (AI4SG) has emerged as a growing body of research and practice exploring the potential of AI technologies to tackle social issues. This area emphasizes interdisciplinary partnerships with community organizations, such as non-profits and government agencies. However, amidst excitement about new advances in AI and their potential impact, the needs, expectations, and aspirations of these community organizations--and whether they are being met--are not well understood. Understanding these factors is important to ensure that the considerable efforts by AI teams and community organizations can actually achieve the positive social impact they strive for. Drawing on the Data Feminism framework, we explored the perspectives of community organization members on their partnerships with AI teams through 16 semi-structured interviews. Our study highlights the pervasive influence of funding agendas and the optimism surrounding AI's potential. Despite the significant intellectual contributions and labor provided by community organization members, their goals were frequently sidelined in favor of other stakeholders, including AI teams. While many community organization members expected tangible project deployment, only two out of 14 projects we studied reached the deployment stage. However, community organization members sustained their belief in the potential of the projects, still seeing diminished goals as valuable. To enhance the efficacy of future collaborations, our participants shared their aspirations for success, calling for co-leadership starting from the early stages of projects. We propose data co-liberation as a grounding principle for approaching AI4SG moving forward, positing that community organizations' co-leadership is essential for fostering more effective, sustainable, and ethical development of AI.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157845</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Insights from an Experiment Crowdsourcing Data from Thousands of US Amazon Users: The importance of transparency, money, and data use</title>
<link>https://hdl.handle.net/1721.1/157844</link>
<description>Insights from an Experiment Crowdsourcing Data from Thousands of US Amazon Users: The importance of transparency, money, and data use
Berke, Alex; Mahari, Robert; Pentland, Sandy; Larson, Kent; Calacci, Dana
Data generated by users on digital platforms are a crucial resource for advocates and researchers interested in uncovering digital inequities, auditing algorithms, and understanding human behavior. Yet data access is often restricted. How can researchers both effectively and ethically collect user data? This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5,000 U.S. users. We developed a data collection tool that prioritizes participant consent and includes an experimental study design. The design allows us to study multiple important aspects of privacy perception and user data sharing behavior, including how socio-demographics, monetary incentives and transparency can impact share rates. Experiment results (N=6,325) reveal both monetary incentives and transparency can significantly increase data sharing. Age, race, education, and gender also played a role, where female and less-educated participants were more likely to share. Our study design enables a unique empirical evaluation of the “privacy paradox”, where users claim to value their privacy more than they do in practice. We set up both real and hypothetical data sharing scenarios and find measurable similarities and differences in share rates across these contexts. For example, increasing monetary incentives had a 6 times higher impact on share rates in real scenarios. In addition, we study participants' opinions on how data should be used by various third parties, again finding that gender, age, education, and race have a significant impact. Notably, the majority of participants disapproved of government agencies using purchase data yet the majority approved of use by researchers.
Overall, our findings highlight the critical role that transparency, incentive design, and user demographics play in ethical data collection practices, and provide guidance for future researchers seeking to crowdsource user generated data.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157844</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Entangled Amid Misaligned Seams: Limitations to Technology-Mediated Care for Repairing Infrastructural Breakdowns in a Youth Empowerment Program</title>
<link>https://hdl.handle.net/1721.1/157843</link>
<description>Entangled Amid Misaligned Seams: Limitations to Technology-Mediated Care for Repairing Infrastructural Breakdowns in a Youth Empowerment Program
Choi, Adrian; Pfohl, Grace; D'Ignazio, Catherine; Foucault Welles, Brooke; Parker, Andrea
The COVID-19 pandemic broke down the human infrastructure of many community-based programs, disrupting in-person care services for low-resourced families. Yet, minimal work has explored how actors repair these breakdowns and how other infrastructures may interfere with repairs in such contexts. Interviewing adolescents and adults affiliated with a youth empowerment program, we used the pandemic to examine how a human infrastructure that previously facilitated a sense of community broke down and how members attempted to repair this infrastructure. While organized activities, resources, and interpersonal interactions aligned to facilitate in-person care that established a sense of community, incorporating information and communication technologies to align a sociotechnical infrastructure during social restrictions could not overcome multiple constraints imposed by other infrastructures that limited this sense of community. We discuss limitations to care and to aligning multiple disjointed infrastructures, calling for CSCW researchers to critically consider asset-based design as a methodology that might help sustain a community's well-being.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157843</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>A Graph Deep Learning Model for Station Ridership Prediction in Expanding Metro Networks</title>
<link>https://hdl.handle.net/1721.1/157842</link>
<description>A Graph Deep Learning Model for Station Ridership Prediction in Expanding Metro Networks
Ding, Fangyi; Liang, Yuebing; Wang, Yamin; Tang, Yan; Zhou, Yang; Zhao, Zhan
Due to their reliability, efficiency, and environmental friendliness, metro systems have become a crucial solution to transportation challenges associated with urbanization. Many countries have constructed or expanded their metro networks over the past decades. During the planning stage, accurately predicting station ridership post-expansion, particularly for new stations, is essential to enhance the effectiveness of infrastructure investments. However, station-level metro ridership prediction under expansion scenarios (MRP-E) has not been thoroughly explored, as most advanced models currently focus on short-term predictions. MRP-E presents significant challenges due to the absence of historical data for newly built stations and the dynamic, complex spatiotemporal relationships between stations during expansion phases. In this study, we propose a Metro-specific Multi-Graph Attention Network model (Metro-MGAT) to address these issues. Our model leverages multi-sourced urban context data and network topology information to generate station features. Multi-relation graphs are constructed to capture the spatial correlations between stations, and an attention mechanism is employed to facilitate graph encoding. The model has been evaluated through realistic experiments using multi-year metro ridership data from Shanghai, China. The results validate the superior performance of our approach compared to existing methods, particularly in predicting ridership at new stations.
UrbanAI’24, October 29–November 01, 2024, Atlanta, GA
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157842</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions</title>
<link>https://hdl.handle.net/1721.1/157841</link>
<description>Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions
Zheng, Chuanjun; Zhan, Yicheng; Shi, Liang; Cakmakci, Ozan; Akşit, Kaan
Computer-Generated Holography (CGH) is a set of algorithmic methods for identifying holograms that reconstruct Three-Dimensional (3D) scenes in holographic displays. CGH algorithms decompose 3D scenes into multiplanes at different depth levels and rely on simulations of light propagating from a source plane to a target plane. Thus, for n planes, CGH typically optimizes holograms using n plane-to-plane light transport simulations, leading to major time and computational demands. Our work replaces multiple planes with a focal surface and introduces a learned light transport model that can propagate a light field from a source plane to the focal surface in a single inference. Our model leverages spatially adaptive convolution to achieve the depth-varying propagation demanded by targeted focal surfaces. The proposed model accelerates the hologram optimization process by up to 1.5x, which benefits hologram dataset generation and the training of future learned CGH models.
SA Technical Communications ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157841</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>How J-chain ensures the assembly of immunoglobulin IgM pentamers</title>
<link>https://hdl.handle.net/1721.1/157840</link>
<description>How J-chain ensures the assembly of immunoglobulin IgM pentamers
Giannone, Chiara; Mess, Xenia; He, Ruiming; Chelazzi, Maria R.; Mayer, Annika; Bakunts, Anush; Nguyen, Tuan; Bushman, Yevheniia; Orsi, Andrea; Gansen, Benedikt; Degano, Massimo; Buchner, Johannes; Sitia, Roberto
Polymeric IgM immunoglobulins have high avidity for antigen and complement, and dominate primary antibody responses. They are produced either as assemblies of six µ2L2 subunits (i.e., hexamers), or as pentamers of five µ2L2 subunits and an additional protein termed J-chain (JC), which allows transcytosis across epithelia. The molecular mechanism of IgM assembly with the desired stoichiometry remained unknown. Here, we show in vitro and in cellula that JC outcompetes the sixth IgM subunit during assembly. Before insertion into IgM, JC exists as an ensemble of largely unstructured, protease-sensitive species with heterogeneous, non-native disulfide bonds. The J-chain interacts with the hydrophobic β-sheets selectively exposed by nascent pentamers. Completion of an amyloid-like core triggers JC folding and drives disulfide rearrangements that covalently stabilize JC-containing pentamers. In cells, the quality control factor ERp44 surveys IgM assembly and prevents the secretion of aberrant conformers. This mechanism allows the efficient production of high-avidity IgM for systemic or mucosal immunity.
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157840</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>A Listeria monocytogenes aptasensor on laser inscribed graphene for food safety monitoring in hydroponic water</title>
<link>https://hdl.handle.net/1721.1/157839</link>
<description>A Listeria monocytogenes aptasensor on laser inscribed graphene for food safety monitoring in hydroponic water
Cavallaro, Nicholas; Moreira, Geisianny; Vanegas, Diana; Xiang, Dong; Datta, Shoumen P. A.; Gomes, Carmen; McLamore, Eric S.
Consumption of fresh produce, such as leafy greens, is often encouraged as part of a healthy diet. Hence, indoor facilities for hydroponic production of leafy greens are increasingly being established. However, fresh produce entails a higher risk of microbial foodborne illnesses than processed foods. Listeria monocytogenes is a major source of fresh produce contamination and is among the leading causes of severe foodborne illnesses in the United States, with a 16% mortality rate. Tools for rapid monitoring are needed for pathogens such as L. monocytogenes to prevent outbreaks. In this manuscript, we demonstrate the feasibility of a multi-aptamer approach for the development of label-free aptasensors targeting L. monocytogenes in irrigation water for lettuce hydroponic production. We use screening studies with surface plasmon resonance to rationally develop mixtures of relevant aptamers for targeting L. monocytogenes. Based on this screening, multiple aptamers targeting extracellular structures on intact L. monocytogenes were tethered to platinum-modified laser inscribed graphene electrodes. This is the first report of an L. monocytogenes biosensor based on laser inscribed graphene. We show that mixing multiple aptamers with varying affinity improves diagnostic performance over a single aptamer in complex sample matrices (lettuce hydroponic water). Multi-aptamer biosensors showed high accuracy for L. monocytogenes and were at least three times more selective toward L. monocytogenes than toward Escherichia coli (Crooks, K12, O157:H7), with an accuracy of 85%. The limit of detection (10 CFU/10 mL) is based on data that were significantly different after calibration toward L. monocytogenes or E. coli (Crooks) and validated against gold-standard molecular analysis (polymerase chain reaction). Rapid screening of pathogens is a global need to meet food safety and water quality regulations.
This study shows the importance of sensors targeting more than one bacterial surface structure in complex samples relevant to the food-water nexus.
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157839</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Materials approaches for next-generation encapsulated cell therapies</title>
<link>https://hdl.handle.net/1721.1/157838</link>
<description>Materials approaches for next-generation encapsulated cell therapies
Krishnan, Siddharth R.; Langer, Robert; Anderson, Daniel G.
Transplanted cells can act as living drug factories capable of secreting therapeutic proteins in vivo, with applications in the treatment of Type 1 diabetes (T1D), blood-borne disease, vision disorders, and degenerative neural disease, potentially representing functional cures for chronic conditions. However, attack from the host immune system represents a major challenge, requiring chronic immunosuppression to enable long-lived cell transplantation in vivo. Encapsulating cells in engineered biomaterials capable of excluding components of the host immune system while allowing for the transport of therapeutic proteins, oxygen, nutrients, metabolites, and waste products represents a potential solution. However, the foreign-body response can lead to isolation from native vasculature and to hypoxia, leading to cell death. In this prospective article, we highlight materials-based solutions to three important challenges in the field: (i) improving biocompatibility and reducing fibrosis; (ii) enhancing transport of secreted protein drugs and key nutrients and oxygen via engineered, semipermeable membranes; and (iii) improving oxygenation. These efforts draw on several disciplines in materials research, including polymer science, surfaces, membranes, biomaterials microfabrication, and flexible electronics. If successful, these efforts could lead to new therapies for chronic disease and represent a rich space for both fundamental materials discovery and applied translational science.
</description>
<pubDate>Mon, 02 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157838</guid>
<dc:date>2024-12-02T00:00:00Z</dc:date>
</item>
<item>
<title>Using magnetic resonance relaxometry to evaluate the safety and quality of induced pluripotent stem cell-derived spinal cord progenitor cells</title>
<link>https://hdl.handle.net/1721.1/157837</link>
<description>Using magnetic resonance relaxometry to evaluate the safety and quality of induced pluripotent stem cell-derived spinal cord progenitor cells
Tan, Jerome; Chen, Jiahui; Roxby, Daniel; Chooi, Wai H.; Nguyen, Tan D.; Ng, Shi Y.; Han, Jongyoon; Chew, Sing Y.
Background: The emergence of induced pluripotent stem cells (iPSCs) offers a promising approach for replacing damaged neurons and glial cells, particularly in spinal cord injuries (SCI). Despite its merits, iPSC differentiation into spinal cord progenitor cells (SCPCs) is variable, necessitating reliable assessment of differentiation and validation of cell quality and safety. Phenotyping is often performed via label-based methods, including immunofluorescent staining or flow cytometry analysis. These approaches are often expensive, laborious, time-consuming, and destructive, which severely limits their use in large-scale cell therapy manufacturing settings. On the other hand, cellular biophysical properties have demonstrated a strong correlation with cell state, quality, and functionality, and can be measured with label-free technologies in a rapid and non-destructive manner. Method: In this study, we report the use of Magnetic Resonance Relaxometry (MRR), a rapid and label-free method whose readout (T2) indicates intracellular iron levels. Briefly, we differentiated human iPSCs into SCPCs and compared key iPSC and SCPC cellular markers to their intracellular iron content (Fe3+) at different stages of the differentiation process. Results: With MRR, we found that the intracellular iron levels of iPSCs and SCPCs were distinctly different, allowing us to accurately reflect varying levels of residual undifferentiated iPSCs (i.e., OCT4+ cells) in any given population of SCPCs. MRR was also able to predict Day 10 SCPC OCT4 levels from Day 1 undifferentiated iPSC T2 values and identified poorly differentiated SCPCs with lower T2, indicative of lower neural progenitor (SOX1) and stem cell (Nestin) marker expression levels. Lastly, MRR provided predictive indications of the extent of differentiation to Day 28 spinal cord motor neurons (ISL-1/SMI-32) based on the T2 values of Day 10 SCPCs.
Conclusion: MRR measurements of iPSCs and SCPCs clearly demonstrate the method's ability to identify and quantify key phenotypes of iPSCs and SCPCs for end-point validation of safety and quality parameters. Thus, our technology provides a rapid, label-free method to determine critical quality attributes in iPSC-derived progenies and is ideally suited as a quality control tool in cell therapy manufacturing.
</description>
<pubDate>Thu, 05 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157837</guid>
<dc:date>2024-12-05T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the effective leptonic weak mixing angle</title>
<link>https://hdl.handle.net/1721.1/157836</link>
<description>Measurement of the effective leptonic weak mixing angle
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Alessio, F.; Alexander, M.; The LHCb collaboration
Using pp collision data at √s = 13 TeV, recorded by the LHCb experiment between 2016 and 2018 and corresponding to an integrated luminosity of 5.4 fb−1, the forward-backward asymmetry in the pp → Z/γ* → μ+μ− process is measured. The measurement is carried out in ten intervals of the difference between the muon pseudorapidities, within a fiducial region covering dimuon masses between 66 and 116 GeV, muon pseudorapidities between 2.0 and 4.5, and muon transverse momenta above 20 GeV. These forward-backward asymmetries are compared with predictions at next-to-leading order in the strong and electroweak couplings. The measured effective leptonic weak mixing angle is sin²θ_eff^ℓ = 0.23147 ± 0.00044 ± 0.00005 ± 0.00023, where the first uncertainty is statistical, the second arises from systematic uncertainties associated with the asymmetry measurement, and the third arises from uncertainties in the fit model used to extract sin²θ_eff^ℓ from the asymmetry measurement. This result is based on an arithmetic average of results using the CT18, MSHT20, and NNPDF31 parameterisations of the proton internal structure, and is consistent with previous measurements and with predictions from the global electroweak fit.
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157836</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Two complementary features of humoral immune memory confer protection against the same or variant antigens</title>
<link>https://hdl.handle.net/1721.1/157803</link>
<description>Two complementary features of humoral immune memory confer protection against the same or variant antigens
Van Beek, Matthew; Nussenzweig, Michel C; Chakraborty, Arup K
The humoral immune response, a key arm of adaptive immunity, consists of B cells and their products. Upon infection or vaccination, B cells undergo a Darwinian evolutionary process in germinal centers (GCs), resulting in the production of antibodies and memory B cells. We developed a computational model to study how humoral memory is recalled upon reinfection or booster vaccination. We find that upon reexposure to the same antigen, affinity-dependent selective expansion of available memory B cells outside GCs (extragerminal center compartments [EGCs]) results in a rapid response made up of the best available antibodies. Memory B cells that enter secondary GCs can undergo mutation and selection to generate even more potent responses over time, enabling greater protection upon subsequent exposure to the same antigen. GCs also generate a diverse pool of B cells, some with low antigen affinity. These results are consistent with our analyses of data from humans vaccinated with two doses of a COVID-19 vaccine. Our results further show that the diversity of memory B cells generated in GCs is critically important upon exposure to a variant antigen. Clones drawn from this diverse pool that cross-react with the variant are rapidly expanded in EGCs to provide the best protection possible while new secondary GCs generate a tailored response for the new variant. Based on a simple evolutionary model, we suggest that the complementary roles of EGC and GC processes we describe may have evolved in response to complex organisms being exposed to evolving pathogen families for millennia.
</description>
<pubDate>Tue, 13 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157803</guid>
<dc:date>2022-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>A model for organization and regulation of nuclear condensates by gene activity</title>
<link>https://hdl.handle.net/1721.1/157802</link>
<description>A model for organization and regulation of nuclear condensates by gene activity
Schede, Halima H; Natarajan, Pradeep; Chakraborty, Arup K; Shrinivas, Krishna
Condensation by phase separation has recently emerged as a mechanism underlying many nuclear compartments essential for cellular functions. Nuclear condensates enrich nucleic acids and proteins, localize to specific genomic regions, and often promote gene expression. How the diverse properties of nuclear condensates are shaped by gene organization and activity is poorly understood. Here, we develop a physics-based model to interrogate how spatially varying transcription activity impacts condensate properties and dynamics. Our model predicts that spatial clustering of active genes can enable precise localization and de novo nucleation of condensates. Strong clustering and high activity result in aspherical condensate morphologies. Condensates can flow towards distant gene clusters, and competition between multiple clusters leads to stretched morphologies and activity-dependent repositioning. Overall, our model predicts and recapitulates morphological and dynamical features of diverse nuclear condensates and offers a unified mechanistic framework to study the interplay between non-equilibrium processes, spatially varying transcription, and multicomponent condensates in cell biology.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157802</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanisms that promote the evolution of cross-reactive antibodies upon vaccination with designed influenza immunogens</title>
<link>https://hdl.handle.net/1721.1/157801</link>
<description>Mechanisms that promote the evolution of cross-reactive antibodies upon vaccination with designed influenza immunogens
Yang, Leerang; Caradonna, Timothy M; Schmidt, Aaron G; Chakraborty, Arup K
Immunogens that elicit broadly neutralizing antibodies targeting the conserved receptor-binding site (RBS) on influenza hemagglutinin may serve as candidates for a universal influenza vaccine. Here, we develop a computational model to interrogate antibody evolution by affinity maturation after immunization with two types of immunogens: a heterotrimeric "chimera" hemagglutinin that is enriched for the RBS epitope relative to other B cell epitopes and a cocktail composed of three non-epitope-enriched homotrimers of the monomers that comprise the chimera. Experiments in mice find that the chimera outperforms the cocktail for eliciting RBS-directed antibodies. We show that this result follows from an interplay between how B cells engage these antigens and interact with diverse helper T cells and requires T cell-mediated selection of germinal center B cells to be a stringent constraint. Our results shed light on antibody evolution and highlight how immunogen design and T cells modulate vaccination outcomes.
</description>
<pubDate>Wed, 01 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157801</guid>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Search for CP violation in D0 → K0S K0S decays in proton–proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157800</link>
<description>Search for CP violation in D0 → K0S K0S decays in proton–proton collisions at √s = 13 TeV
CMS Collaboration
A search is reported for charge-parity (CP) violation in D0 → K0S K0S decays, using data collected in proton–proton collisions at √s = 13 TeV recorded by the CMS experiment in 2018. The analysis uses a dedicated data set that corresponds to an integrated luminosity of 41.6 fb−1, which consists of about 10 billion events containing a pair of b hadrons, nearly all of which decay to charm hadrons. The flavor of the neutral D meson is determined by the pion charge in the reconstructed decays D*+ → D0 π+ and D*− → D̄0 π−. The CP asymmetry in D0 → K0S K0S is measured to be A_CP(K0S K0S) = (6.2 ± 3.0 ± 0.2 ± 0.8)%, where the three uncertainties represent the statistical uncertainty, the systematic uncertainty, and the uncertainty in the measurement of the CP asymmetry in the D0 → K0S π+ π− decay. This is the first CP asymmetry measurement by CMS in the charm sector, as well as the first to utilize a fully hadronic final state.
</description>
<pubDate>Fri, 06 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157800</guid>
<dc:date>2024-12-06T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the electric potential and the magnetic field in the shifted analysing plane of the KATRIN experiment</title>
<link>https://hdl.handle.net/1721.1/157799</link>
<description>Measurement of the electric potential and the magnetic field in the shifted analysing plane of the KATRIN experiment
KATRIN Collaboration
The projected sensitivity of the effective electron neutrino-mass measurement with the KATRIN experiment is below 0.3 eV (90% CL) after 5 years of data acquisition. The sensitivity is affected by the increased rate of background electrons from KATRIN’s main spectrometer. A special shifted-analysing-plane (SAP) configuration was developed to reduce this background by a factor of two. The complex layout of electromagnetic fields in the SAP configuration requires a robust method of estimating these fields. In this paper, we present a dedicated calibration measurement of the fields using conversion electrons of gaseous 83mKr, which enables neutrino-mass measurements in the SAP configuration.
</description>
<pubDate>Fri, 06 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157799</guid>
<dc:date>2024-12-06T00:00:00Z</dc:date>
</item>
<item>
<title>The Europa Imaging System (EIS) Investigation</title>
<link>https://hdl.handle.net/1721.1/157798</link>
<description>The Europa Imaging System (EIS) Investigation
Turtle, E. P.; McEwen, A. S.; Patterson, G. W.; Ernst, C. M.; Elder, C. M.; Slack, K. A.; Hawkins, S. E.; McDermott, J.; Meyer, H.; DeMajistre, R.; Espiritu, R.; Seifert, H.; Niewola, J.
The Europa Imaging System (EIS) consists of a Narrow-Angle Camera (NAC) and a Wide-Angle Camera (WAC) that are designed to work together to address high-priority science objectives regarding Europa’s geology, composition, and the nature of its ice shell. EIS accommodates variable geometry and illumination during rapid, low-altitude flybys with both framing and pushbroom imaging capability using rapid-readout, 8-megapixel (4k × 2k) detectors. Color observations are acquired using pushbroom imaging with up to six broadband filters. The data processing units (DPUs) perform digital time delay integration (TDI) to enhance signal-to-noise ratios and use readout strategies to measure and correct spacecraft jitter. The NAC has a 2.3° × 1.2° field of view (FOV) with a 10-μrad instantaneous FOV (IFOV), thus achieving 0.5-m pixel scale over a swath that is 2 km wide and several km long from a range of 50 km. The NAC is mounted on a 2-axis gimbal, ±30° cross- and along-track, that enables independent targeting and near-global (≥90%) mapping of Europa at ≤100-m pixel scale (to date, only ∼15% of Europa has been imaged at ≤900 m/pixel), as well as stereo imaging from as close as 50-km altitude to generate digital terrain models (DTMs) with ≤4-m ground sample distance (GSD) and ≤0.5-m vertical precision. The NAC will also perform observations at long range to search for potential erupting plumes, achieving 10-km pixel scale at a distance of one million kilometers. The WAC has a 48° × 24° FOV with a 218-μrad IFOV, achieving 11-m pixel scale at the center of a 44-km-wide swath from a range of 50 km, and generating DTMs with 32-m GSD and ≤4-m vertical precision. The WAC is designed to acquire three-line pushbroom stereo and color swaths along flyby ground-tracks.
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157798</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>JWST sighting of decameter main-belt asteroids and view on meteorite sources</title>
<link>https://hdl.handle.net/1721.1/157797</link>
<description>JWST sighting of decameter main-belt asteroids and view on meteorite sources
Burdanov, Artem Y.; de Wit, Julien; Broz, Miroslav; Muller, Thomas G.; Hoffmann, Tobias; Ferrais, Marin; Micheli, Marco; Jehin, Emmanuel; Parrott, Daniel; Hasler, Samantha N.; Binzel, Richard P.; Ducrot, Elsa; Kreidberg, Laura; Gillon, Michael; Greene, Thomas P.; Grundy, Will M.; Kareta, Theodore; Lagage, Pierre-Olivier; Moskovitz, Nicholas; Thirouin, Audrey; Thomas, Cristina A.; Zieba, Sebastian
Asteroid discoveries are essential for planetary-defense efforts aiming to prevent impacts with Earth, including the more frequent megaton explosions from decameter impactors. While large asteroids (≥100 km) have remained in the main belt since their formation, small asteroids are commonly transported to the near-Earth object (NEO) population. However, due to the lack of direct observational constraints, their size-frequency distribution, which informs our understanding of the NEOs and the delivery of meteorite samples to Earth, varies significantly among models. Here, we report 138 detections of the smallest asteroids (⪆10 m) ever observed in the main belt, enabled by JWST’s infrared capabilities covering the asteroids’ emission peaks and by synthetic tracking techniques. Despite small orbital arcs, we constrain the objects’ distances and phase angles using known asteroids as proxies, allowing us to derive sizes via radiometric techniques. Their size-frequency distribution exhibits a break at ∼100 m (debiased cumulative slopes of q = −2.66 ± 0.60 and −0.97 ± 0.14 for diameters smaller and larger than ∼100 m, respectively), suggestive of a population driven by collisional cascade. These asteroids were sampled from multiple asteroid families, most likely Nysa, Polana, and Massalia, according to the geometry of the pointings considered here. Through additional long-stare infrared observations, JWST is poised to serendipitously detect thousands of decameter-scale asteroids across the sky, probing individual asteroid families and the source regions of meteorites “in situ”.
</description>
<pubDate>Mon, 09 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157797</guid>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</item>
<item>
<title>KnitworkVR: Dual-reality Experience through Distributed Sensor-Actuator Networks in the Living Knitwork Pavilion</title>
<link>https://hdl.handle.net/1721.1/157796</link>
<description>KnitworkVR: Dual-reality Experience through Distributed Sensor-Actuator Networks in the Living Knitwork Pavilion
Wicaksono, Irmandy; Blanchard, Lancelot; Chin, Sam; Colon, Cristian; Paradiso, Joseph
KnitworkVR integrates dual-reality and digital twin platforms to simulate the Living Knitwork Pavilion in a desert landscape, using real-time sensor data. The sensor network captures movements, interactions, and spatial positioning of occupants, linking electric field sensor data with VR positioning. This creates a sensor-driven immersive experience with dynamic lighting, live animations, and adaptive soundscapes, enabling telepresence and collaborative interaction in both digital and physical environments. This paper explores the functional textile design, sensing hardware, audiovisual system, and VR framework, highlighting the applications of immersive spaces with knitted electronic textiles and distributed physical-digital systems.
SA Art Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157796</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Thermochromorph: Dynamic Relief Printing with Thermochromic Inks</title>
<link>https://hdl.handle.net/1721.1/157795</link>
<description>Thermochromorph: Dynamic Relief Printing with Thermochromic Inks
Sethapakdi, Ticha; Myers, Paris; Yu, Tianyu; Covarrubias, Juliana; Leake, Mackenzie; Mueller, Stefanie
Thermochromorph is a novel relief printing technique that produces multicolored images that transition into each other through changes in temperature. Our process utilizes two sets of CMYK thermochromic inks that exhibit complementary color-changing behaviors: one shifting from color to transparency, the other from transparency to color at the same activation temperature. We describe our printmaking workflow, provide an open-source software toolkit, showcase prints made with our system, and facilitate an artist workshop. By incorporating new materials and technology with the rich history of printmaking, our work extends the expressive capabilities of relief printing as the medium continues to evolve.
SA Art Papers ’24, December 03–06, 2024, Tokyo, Japan
</description>
<pubDate>Tue, 03 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157795</guid>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment</title>
<link>https://hdl.handle.net/1721.1/157794</link>
<description>Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment
Spitale, Micol; Catania, Fabio; Panzeri, Francesca
Irony detection is a complex task that often stumps both humans, who frequently misinterpret ironic statements, and artificial intelligence (AI) systems. While the majority of AI research on irony detection has concentrated on linguistic cues, the role of non-verbal cues such as facial expressions and auditory signals has been largely overlooked. This paper investigates the effectiveness of machine learning models in recognizing irony using solely non-verbal cues. To this end, we conducted the following experiments and analyses: (i) we trained and evaluated machine-learning models to detect irony; (ii) we compared the results with human interpretations; and (iii) we analysed and identified multi-modal non-verbal irony markers. Our research demonstrates that machine learning models trained on non-verbal data show significant promise in detecting irony, outperforming human judgments in this task. Specifically, we found that certain facial action units and acoustic characteristics of speech are key indicators of irony expression. These non-verbal cues, often overlooked in traditional irony detection methods, were effectively identified by machine learning models, leading to improved accuracy in detecting irony.
ICMI ’24, November 04–08, 2024, San Jose, Costa Rica
</description>
<pubDate>Mon, 04 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157794</guid>
<dc:date>2024-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying the Values that Shape HCI and CSCW Research with Latin American Communities: A Collaborative Autoethnography</title>
<link>https://hdl.handle.net/1721.1/157793</link>
<description>Identifying the Values that Shape HCI and CSCW Research with Latin American Communities: A Collaborative Autoethnography
Griggio, Carla; Barrera Machuca, Mayra; Wong-Villacres, Marisol; Gaytán-Lugo, Laura; Badillo-Urquiola, Karla; Alvarado Garcia, Adriana; Perusquia-Hernandez, Monica; Ciolfi Felice, Marianela; Cibrian, Franceli; Thomas, Michaelanne; Fuentes, Carolina; Reynolds-Cuéllar, Pedro
Over the past decade, community collaborations have come into focus within the HCI and CSCW fields. Largely the result of increased concern for social and contextual dimensions of practice, these partnerships facilitate a pathway for researchers and practitioners to foreground the nuances of technology as it takes place in the real world. How these collaborations are engaged, what values mediate them, and how practices might vary across geographies remain active research questions. In this paper, we contribute by zooming into the experience of four HCI and CSCW researchers engaging in community collaborations in Latin America (LATAM). Through a collaborative autoethnography (CAE), we identify three main value tensions impacting HCI practices and methods in research collaborations with LATAM communities: camaraderie vs. cautiousness, informality vs. formality, and hopefulness vs. transparency. Building on our findings, we provide three recommendations for researchers interested in engaging in community-based research in similar contexts.
CSCW Companion ’24, November 9–13, 2024, San Jose, Costa Rica
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157793</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>AudienceView: AI-Assisted Interpretation of Audience Feedback in Journalism</title>
<link>https://hdl.handle.net/1721.1/157792</link>
<description>AudienceView: AI-Assisted Interpretation of Audience Feedback in Journalism
Brannon, William; Beeferman, Doug; Jiang, Hang; Heyward, Andrew; Roy, Deb
Understanding and making use of audience feedback is important but difficult for journalists, who now face an impractically large volume of audience comments online. We introduce AudienceView, an online tool to help journalists categorize and interpret this feedback by leveraging large language models (LLMs). AudienceView identifies themes and topics, connects them back to specific comments, provides ways to visualize the sentiment and distribution of the comments, and helps users develop ideas for subsequent reporting projects. We consider how such tools can be useful in a journalist's workflow, and emphasize the importance of contextual awareness and human judgment.
CSCW Companion ’24, November 9–13, 2024, San Jose, Costa Rica
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157792</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Antigen presentation dynamics shape the antibody response to variants like SARS-CoV-2 Omicron after multiple vaccinations with the original strain</title>
<link>https://hdl.handle.net/1721.1/157791</link>
<description>Antigen presentation dynamics shape the antibody response to variants like SARS-CoV-2 Omicron after multiple vaccinations with the original strain
Yang, Leerang; Van Beek, Matthew; Wang, Zijun; Muecksch, Frauke; Canis, Marie; Hatziioannou, Theodora; Bieniasz, Paul D; Nussenzweig, Michel C; Chakraborty, Arup K
The Omicron variant of SARS-CoV-2 is not effectively neutralized by most antibodies elicited by two doses of mRNA vaccines, but a third dose increases anti-Omicron neutralizing antibodies. We reveal mechanisms underlying this observation by combining computational modeling with data from vaccinated humans. After the first dose, limited antigen availability in germinal centers (GCs) results in a response dominated by B cells that target immunodominant epitopes that are mutated in an Omicron-like variant. After the second dose, these memory cells expand and differentiate into plasma cells that secrete antibodies that are thus ineffective for such variants. However, these pre-existing antigen-specific antibodies transport antigen efficiently to secondary GCs. They also partially mask immunodominant epitopes. Enhanced antigen availability and epitope masking in secondary GCs together result in generation of memory B cells that target subdominant epitopes that are less mutated in Omicron. The third dose expands these cells and boosts anti-variant neutralizing antibodies.
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157791</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Polymer folding through active processes recreates features of genome organization</title>
<link>https://hdl.handle.net/1721.1/157790</link>
<description>Polymer folding through active processes recreates features of genome organization
Goychuk, Andriy; Kannan, Deepti; Chakraborty, Arup K; Kardar, Mehran
From proteins to chromosomes, polymers fold into specific conformations that control their biological function. Polymer folding has long been studied with equilibrium thermodynamics, yet intracellular organization and regulation involve energy-consuming, active processes. Signatures of activity have been measured in the context of chromatin motion, which shows spatial correlations and enhanced subdiffusion only in the presence of adenosine triphosphate. Moreover, chromatin motion varies with genomic coordinate, pointing toward a heterogeneous pattern of active processes along the sequence. How do such patterns of activity affect the conformation of a polymer such as chromatin? We address this question by combining analytical theory and simulations to study a polymer subjected to sequence-dependent correlated active forces. Our analysis shows that a local increase in activity (larger active forces) can cause the polymer backbone to bend and expand, while less active segments straighten out and condense. Our simulations further predict that modest activity differences can drive compartmentalization of the polymer consistent with the patterns observed in chromosome conformation capture experiments. Moreover, segments of the polymer that show correlated active (sub)diffusion attract each other through effective long-ranged harmonic interactions, whereas anticorrelations lead to effective repulsions. Thus, our theory offers nonequilibrium mechanisms for forming genomic compartments, which cannot be distinguished from affinity-based folding using structural data alone. As a first step toward exploring whether active mechanisms contribute to shaping genome conformations, we discuss a data-driven approach.
</description>
<pubDate>Tue, 16 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157790</guid>
<dc:date>2023-05-16T00:00:00Z</dc:date>
</item>
<item>
<title>A model for cis-regulation of transcriptional condensates and gene expression by proximal lncRNAs</title>
<link>https://hdl.handle.net/1721.1/157789</link>
<description>A model for cis-regulation of transcriptional condensates and gene expression by proximal lncRNAs
Natarajan, Pradeep; Shrinivas, Krishna; Chakraborty, Arup K
Long noncoding RNAs (lncRNAs) perform several important functions in cells including cis-regulation of transcription. Barring a few specific cases, the mechanisms underlying transcriptional regulation by lncRNAs remain poorly understood. Transcriptional proteins can form condensates via phase separation at protein-binding loci (BL) on the genome (e.g., enhancers and promoters). lncRNA-coding genes are present at loci in close genomic proximity of these BL and these RNAs can interact with transcriptional proteins via attractive heterotypic interactions mediated by their net charge. Motivated by these observations, we propose that lncRNAs can dynamically regulate transcription in cis via charge-based heterotypic interactions with transcriptional proteins in condensates. To study the consequences of this mechanism, we developed and studied a dynamical phase-field model. We find that proximal lncRNAs can promote condensate formation at the BL. Vicinally localized lncRNA can migrate to the BL to attract more protein because of favorable interaction free energies. However, increasing the distance beyond a threshold leads to a sharp decrease in protein recruitment to the BL. This finding could potentially explain why genomic distances between lncRNA-coding genes and protein-coding genes are conserved across metazoans. Finally, our model predicts that lncRNA transcription can fine-tune transcription from neighboring condensate-controlled genes, repressing transcription from highly expressed genes and enhancing transcription of genes expressed at a low level. This nonequilibrium effect can reconcile conflicting reports that lncRNAs can enhance or repress transcription from proximal genes.
</description>
<pubDate>Sat, 01 Jul 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157789</guid>
<dc:date>2023-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>A low-cost, open-source cylindrical Couette rheometer</title>
<link>https://hdl.handle.net/1721.1/157788</link>
<description>A low-cost, open-source cylindrical Couette rheometer
Erni, Makita; Hart, A. John; Trumper, David; Owens, Crystal E.
Rheology describes the flow of fluids ranging from food and plastics to coatings, adhesives, and 3D printing inks, and is commonly denoted by viscosity alone as a simplification. While viscometers adequately probe Newtonian (constant) viscosity, most fluids have complex viscosity, requiring tests over multiple shear rates, and transient measurements. As a result, rheometers are typically large, expensive, and require additional infrastructure (e.g., gas lines), rendering them inaccessible for regular use by many individuals, small organizations, and educators. Here, we introduce a low-cost (under US$200 bill of materials) Open Source Rheometer (OSR), constructed entirely from thermoplastic 3D printed components and off-the-shelf electromechanical components. A sample fluid rests in a cup while a micro stepping motor rotates a tool inside the cup, applying strain-controlled shear flow. A loadcell measures reaction torque exerted on the cup, and viscosity is calculated. To establish the measurement range, the viscosity of four Newtonian samples of 0.1–10 Pa·s were measured with the OSR and compared to benchmark values from a laboratory rheometer, showing under 23% error. Building on this, flow curves of three complex fluids – a microgel (hand sanitizer), foam (Gillette), and biopolymer solution (1% Xanthan Gum) – were measured with a similar error range. Stress relaxation, a transient test, was demonstrated on the biopolymer solution to extract the nonlinear damping function. We finally include detailed exposition of measurement windows, sources of error, and future design suggestions. The OSR cost is ∼1/25th that of commercially available devices with comparable minimum torque (200 µN·m), and provides a fully open-source platform for further innovation in customized rheometry.
</description>
<pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157788</guid>
<dc:date>2024-12-04T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Electrochemical and Rheological Characteristics of Suspension-Based Electrodes for Redox Flow Cells</title>
<link>https://hdl.handle.net/1721.1/157787</link>
<description>Modeling Electrochemical and Rheological Characteristics of Suspension-Based Electrodes for Redox Flow Cells
Majji, Madhu V; Neyhouse, Bertrand J; Matteucci, Nicholas J; Lennon, Kyle R; Mallia, Christopher T; Fenton Jr., Alexis M; Swan, James W; Brushett, Fikile R
Flowable suspension-based electrodes (FSEs) have gained attention in recent years, as the integration of solid materials into electrochemical flow cells can offer improved performance and flexible operation. However, under conditions that engender favorable electrochemical properties (e.g., high particle loading, high conductivity, high surface area), FSEs can exhibit non-Newtonian characteristics that impose large pumping losses and flow-dependent transport rates. These multifaceted trade-offs motivate the use of models to broadly explore scaling relationships and better understand design rules for electrochemical devices. To this end, we present a one-dimensional model, integrating porous electrode theory with FSE rheology as well as flow-dependent electron and mass transport under pressure-driven flow. We study FSE behavior as a function of material properties and operating conditions, identifying key dimensionless groups that describe the underlying physical processes. We assess flow cell performance by quantifying electrode polarization and relative pumping losses, establishing generalized property-performance relationships for FSEs. Importantly, we expound relevant operating regimes—based on a subset of dimensionless groups—that inform practical operating envelopes, ultimately helping to guide FSE and cell engineering for electrochemical systems.
</description>
<pubDate>Mon, 01 May 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157787</guid>
<dc:date>2023-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Bridging Dictionary: AI-Generated Dictionary of Partisan Language Use</title>
<link>https://hdl.handle.net/1721.1/157764</link>
<description>Bridging Dictionary: AI-Generated Dictionary of Partisan Language Use
Jiang, Hang; Beeferman, Doug; Brannon, William; Heyward, Andrew; Roy, Deb
Words often carry different meanings for people from diverse backgrounds. Today's era of social polarization demands that we choose words carefully to prevent miscommunication, especially in political communication and journalism. To address this issue, we introduce the Bridging Dictionary, an interactive tool designed to illuminate how words are perceived by people with different political views. The Bridging Dictionary includes a static, printable document featuring 796 terms with summaries generated by a large language model. These summaries highlight how the terms are used distinctively by Republicans and Democrats. Additionally, the Bridging Dictionary offers an interactive interface that lets users explore selected words, visualizing their frequency, sentiment, summaries, and examples across political divides. We present a use case for journalists and emphasize the importance of human agency and trust in further enhancing this tool. The deployed version of Bridging Dictionary is available at https://dictionary.ccc-mit.org/.
CSCW Companion ’24, November 9–13, 2024, San Jose, Costa Rica
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157764</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Harnessing LLMs for Automated Video Content Analysis: An Exploratory Workflow of Short Videos on Depression</title>
<link>https://hdl.handle.net/1721.1/157763</link>
<description>Harnessing LLMs for Automated Video Content Analysis: An Exploratory Workflow of Short Videos on Depression
Liu, Jiaying (Lizzy); Wang, Yunlong; Lyu, Yao; Su, Yiheng; Niu, Shuo; Xu, Xuhai; Zhang, Yan
Despite the growing interest in leveraging Large Language Models (LLMs) for content analysis, current studies have primarily focused on text-based content. In the present work, we explored the potential of LLMs in assisting video content analysis by conducting a case study that followed a new workflow of LLM-assisted multimodal content analysis. The workflow encompasses codebook design, prompt engineering, LLM processing, and human evaluation. We strategically crafted annotation prompts to get LLM Annotations in structured form and explanation prompts to generate LLM Explanations for a better understanding of LLM reasoning and transparency. To test the LLM's video annotation capabilities, we analyzed 203 keyframes extracted from 25 YouTube short videos about depression. We compared the LLM Annotations with those of two human coders and found that the LLM has higher accuracy in object and activity Annotations than in emotion and genre Annotations. Moreover, we identified the potential and limitations of the LLM's capabilities in annotating videos. Based on the findings, we explore opportunities and challenges for future research and improvements to the workflow. We also discuss ethical concerns surrounding future studies based on LLM-assisted video analysis.
CSCW Companion ’24, November 9–13, 2024, San Jose, Costa Rica
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157763</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>FraudGT: A Simple, Effective, and Efficient Graph Transformer for Financial Fraud Detection</title>
<link>https://hdl.handle.net/1721.1/157762</link>
<description>FraudGT: A Simple, Effective, and Efficient Graph Transformer for Financial Fraud Detection
Lin, Junhong; Guo, Xiaojie; Zhu, Yada; Mitchell, Samuel; Altman, Erik; Shun, Julian
Fraud detection plays a crucial role in the financial industry, preventing significant financial losses. Traditional rule-based systems and manual audits often struggle with the evolving nature of fraud schemes and the vast volume of transactions. Recent advances in machine learning, particularly graph neural networks (GNNs), have shown promise in addressing these challenges. However, GNNs still face limitations in learning intricate patterns, effectively utilizing edge attributes, and maintaining efficiency on large financial graphs. To address these limitations, we introduce FraudGT, a simple, effective, and efficient graph transformer (GT) model specifically designed for fraud detection in financial transaction graphs. FraudGT leverages edge-based message passing gates and an edge attribute-based attention bias to enhance its ability to discern important transactional features and differentiate between normal and fraudulent transactions. Our model achieves state-of-the-art performance in detecting fraudulent activities while demonstrating high throughput and significantly lower latency compared to existing methods. We validate the effectiveness of FraudGT through extensive experiments on multiple large-scale synthetic financial datasets. FraudGT consistently outperforms other models, achieving 7.8–17.8% higher F1 scores, while delivering an average of 2.4× greater throughput and reduced latency. Our code and datasets are available at https://github.com/junhongmit/FraudGT.
ICAIF ’24, November 14–17, 2024, Brooklyn, NY, USA
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157762</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>GUI: A Comprehensive Dataset of Global Urban Infrastructure Based on Geospatial Visual Foundation Models</title>
<link>https://hdl.handle.net/1721.1/157761</link>
<description>GUI: A Comprehensive Dataset of Global Urban Infrastructure Based on Geospatial Visual Foundation Models
Han, Zhenyu; Zhang, Xin; Xi, Yanxin; Luo, Yan; Xia, Tong; Li, Yong
The substantial social and financial costs of infrastructure identification impede in-depth analyses of sustainable urban design, especially in developing countries. In this paper, we present a novel framework with interactive web visualization based on geospatial visual foundation models. Leveraging this framework, we examine the urban infrastructure information in 1,178 cities worldwide, covering 93,088 km² of area. Cross-validation reveals that the overall accuracy of the identified infrastructure reaches 67.0%. It sheds light on the sustainable development of cities and exposes the stark inequity in urban infrastructure provision for vulnerable populations. The identified urban infrastructure dataset of this study is available at https://github.com/tsinghua-fib-lab/GUI, and the interactive web application is at https://tinyurl.com/yz7xbfy3.
SIGSPATIAL ’24, October 29-November 1, 2024, Atlanta, GA, USA
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157761</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Identifying Money Laundering Subgraphs on the Blockchain</title>
<link>https://hdl.handle.net/1721.1/157760</link>
<description>Identifying Money Laundering Subgraphs on the Blockchain
Song, Kiwhan; Dhraief, Mohamed Ali; Xu, Muhua; Cai, Locke; Chen, Xuhao; Mithal, Arvind; Chen, Jie
Anti-Money Laundering (AML) involves the identification of money laundering crimes in financial activities, such as cryptocurrency transactions. Recent studies advanced AML through the lens of graph-based machine learning, modeling the web of financial transactions as a graph and developing graph methods to identify suspicious activities. For instance, a recent effort on open-sourcing datasets and benchmarks, Elliptic2, treats a set of Bitcoin addresses, considered to be controlled by the same entity, as a graph node and transactions among entities as graph edges. This modeling reveals the “shape” of a money laundering scheme—a subgraph on the blockchain, such as a peeling chain or a nested service. Despite the attractive subgraph classification results benchmarked by the paper, competitive methods remain expensive to apply due to the massive size of the graph; moreover, existing methods require candidate subgraphs as inputs which may not be available in practice.
In this work, we introduce RevTrack, a graph-based framework that enables large-scale AML analysis with a lower cost and a higher accuracy. The key idea is to track the initial senders and the final receivers of funds; these entities offer a strong indication of the nature (licit vs. suspicious) of their respective subgraph. Based on this framework, we propose RevClassify, which is a neural network model for subgraph classification. Additionally, we address the practical problem where subgraph candidates are not given, by proposing RevFilter. This method identifies new suspicious subgraphs by iteratively filtering licit transactions, using RevClassify. Benchmarking these methods on Elliptic2, a new standard for AML, we show that RevClassify outperforms state-of-the-art subgraph classification techniques in both cost and accuracy. Furthermore, we demonstrate the effectiveness of RevFilter in discovering new suspicious subgraphs, confirming its utility for practical AML.
ICAIF ’24, November 14–17, 2024, Brooklyn, NY, USA
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157760</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Q-Pilot: Field Programmable Qubit Array Compilation with Flying Ancillas</title>
<link>https://hdl.handle.net/1721.1/157759</link>
<description>Q-Pilot: Field Programmable Qubit Array Compilation with Flying Ancillas
Wang, Hanrui; Tan, Daniel Bochen; Liu, Pengyu; Liu, Yilian; Gu, Jiaqi; Cong, Jason; Han, Song
Neutral atom arrays, particularly the reconfigurable field programmable qubit arrays (FPQA) with atom movement, show strong promise for quantum computing. FPQA has a dynamic qubit connectivity, facilitating cost-effective execution of long-range gates, but it also poses new challenges for compilation. Inspired by the FPGA compilation strategy, we develop a router, Q-Pilot, that leverages flying ancillas to implement two-qubit gates between data qubits mapped to fixed atoms. Equipped with domain-specific routing techniques, Q-Pilot achieves 1.4×, 27.7×, and 6.7× reductions in circuit depth for 100-qubit random, quantum simulation, and QAOA circuits, respectively, compared to alternative fixed atom array architectures.
</description>
<pubDate>Sun, 23 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157759</guid>
<dc:date>2024-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering Redox Flow Battery Electrodes with Spatially Varying Porosity Using Non‐Solvent‐Induced Phase Separation</title>
<link>https://hdl.handle.net/1721.1/157758</link>
<description>Engineering Redox Flow Battery Electrodes with Spatially Varying Porosity Using Non‐Solvent‐Induced Phase Separation
Wan, Charles Tai-Chieh; Jacquemond, Rémy Richard; Chiang, Yet-Ming; Forner-Cuenca, Antoni; Brushett, Fikile R
Redox flow batteries (RFBs) are a promising electrochemical platform for efficiently and reliably delivering electricity to the grid. Within the RFB, porous carbonaceous electrodes facilitate electrochemical reactions and distribute the flowing electrolyte. Tailoring electrode microstructure and surface area can improve RFB performance, lowering costs. Electrodes with spatially varying porosity may increase electrode utilization and provide surface area in reaction‐limited zones; however, the efficacy of such designs remains an open area of research. Herein, a non‐solvent‐induced phase‐separation (NIPS) technique that enables the reproducible synthesis of macrovoid‐free electrodes with well‐defined across‐thickness porosity gradients is described. The monotonically varying porosity profile is quantified and the physical properties and surface chemistries of porosity‐gradient electrodes are compared with those of a macrovoid‐containing electrode, also synthesized by NIPS. Then, the electrochemical and fluid dynamic performance of the porosity‐gradient electrodes is evaluated, exploring the effect of changing the direction of the porosity gradient and benchmarking against the macrovoid‐containing electrode. Lastly, the performance is examined in a vanadium RFB, finding that the porosity‐gradient electrode outperforms the macrovoid electrode, is independent of gradient direction, and performs favorably compared to advanced electrodes in the contemporary literature. It is anticipated that this approach will motivate further exploration of microstructurally tailored electrodes in electrochemical systems.
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157758</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Characterizing the Impact of Oligomerization on Redox Flow Cell Performance</title>
<link>https://hdl.handle.net/1721.1/157757</link>
<description>Characterizing the Impact of Oligomerization on Redox Flow Cell Performance
Weiss, Trent A; Fan, Gang; Neyhouse, Bertrand J; Moore, Evan B; Furst, Ariel; Brushett, Fikile R
Redox flow batteries (RFBs) are hindered by complex failure modes, particularly crossover through the membrane, resulting in capacity fade and reduced cycling efficiencies. Redox‐active oligomers (RAOs) have recently been proposed for mitigating this phenomenon while maintaining sufficient transport properties; however, to date, few studies have quantified how the chemical and electrochemical properties of RAOs influence their performance in redox flow cells. Here, we demonstrate that oligomeric derivatives of 2,2,6,6‐tetramethylpiperidine 1‐oxyl (TEMPO) exhibit lower diffusivities than the monomeric species but retain facile charge transfer characteristics. The size‐dependent variations in mass transport rates directly translate to differences in flow cell polarization and symmetric cycling performance. Post‐mortem analyses reveal that oligomerization does not meaningfully alter decay processes as evinced by similar capacity fade across all species. Broadly, these findings corroborate and extend upon previously developed relationships between molecular size, electrochemical properties, and flow cell performance.
</description>
<pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157757</guid>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Impact of Electrolyte Flow on Heat Management in a Li-Ion Convection Cell</title>
<link>https://hdl.handle.net/1721.1/157756</link>
<description>Modeling the Impact of Electrolyte Flow on Heat Management in a Li-Ion Convection Cell
Gao, Weiran; Drake, Javit; Brushett, Fikile R
In response to challenges in the thermal management of lithium-ion batteries (LIBs), we investigate the concept of circulating electrolyte through the porous electrodes and separator to facilitate effective, uniform, and real-time temperature regulation. We show, through physics-based electrothermal modeling and dimensional analysis of a single, planar LIB cell, that electrolyte convection can simultaneously draw heat from the cell and suppress heat generation from entropy change, charge-transfer, and ohmic losses, and that the cell temperature rise can be effectively mitigated when heat removal matches or exceeds heat generation. These findings distinguish internal convection from external surface cooling approaches used in conventional thermal management that often lead to a tradeoff between heat and mass transport. In a simulated exemplary 5.7-C case, a LIB cell with stationary electrolyte must stop discharging at only 54% of its capacity due to cell temperature rise to an upper threshold (325 K); with sufficient electrolyte flow (∼1 μm s⁻¹ for a single cell, or a residence time of ∼200 s), the cell can be maintained below 315 K while delivering 98% of its capacity. Finally, to illustrate the potential for dynamic temperature regulation, we simulate scenarios where cells already experiencing self-heating can instantly arrest temperature rise with the onset of convection.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157756</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the Representation and Learning of Monotone Triangular Transport Maps</title>
<link>https://hdl.handle.net/1721.1/157755</link>
<description>On the Representation and Learning of Monotone Triangular Transport Maps
Baptista, Ricardo; Marzouk, Youssef; Zahm, Olivier
Transportation of measure provides a versatile approach for modeling complex probability distributions, with applications in density estimation, Bayesian inference, generative modeling, and beyond. Monotone triangular transport maps—approximations of the Knothe–Rosenblatt (KR) rearrangement—are a canonical choice for these tasks. Yet the representation and parameterization of such maps have a significant impact on their generality and expressiveness, and on properties of the optimization problem that arises in learning a map from data (e.g., via maximum likelihood estimation). We present a general framework for representing monotone triangular maps via invertible transformations of smooth functions. We establish conditions on the transformation such that the associated infinite-dimensional minimization problem has no spurious local minima, i.e., all local minima are global minima; and we show for target distributions satisfying certain tail conditions that the unique global minimizer corresponds to the KR map. Given a sample from the target, we then propose an adaptive algorithm that estimates a sparse semi-parametric approximation of the underlying KR map. We demonstrate how this framework can be applied to joint and conditional density estimation, likelihood-free inference, and structure learning of directed graphical models, with stable generalization performance across a range of sample sizes.
</description>
<pubDate>Thu, 16 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157755</guid>
<dc:date>2023-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Quantifying concentration distributions in redox flow batteries with neutron radiography</title>
<link>https://hdl.handle.net/1721.1/157754</link>
<description>Quantifying concentration distributions in redox flow batteries with neutron radiography
Jacquemond, Rémy Richard; van der Heijden, Maxime; Boz, Emre Burak; Carreón Ruiz, Eric Ricardo; Greco, Katharine Virginia; Kowalski, Jeffrey Adam; Muñoz Perales, Vanesa; Brushett, Fikile Richard; Nijmeijer, Kitty; Boillat, Pierre; Forner-Cuenca, Antoni
The continued advancement of electrochemical technologies requires an increasingly detailed understanding of the microscopic processes that control their performance, inspiring the development of new multi-modal diagnostic techniques. Here, we introduce a neutron imaging approach to enable the quantification of spatial and temporal variations in species concentrations within an operating redox flow cell. Specifically, we leverage the high attenuation of redox-active organic materials (high hydrogen content) and supporting electrolytes (boron-containing) in solution and perform subtractive neutron imaging of active species and supporting electrolyte. To resolve the concentration profiles across the electrodes, we employ an in-plane imaging configuration and correlate the concentration profiles to cell performance with polarization experiments under different operating conditions. Finally, we use time-of-flight neutron imaging to deconvolute concentrations of active species and supporting electrolyte during operation. Using this approach, we evaluate the influence of cell polarity, voltage bias and flow rate on the concentration distribution within the flow cell and correlate these with the macroscopic performance, thus obtaining an unprecedented level of insight into reactive mass transport. Ultimately, this diagnostic technique can be applied to a range of (electro)chemical technologies and may accelerate the development of new materials and reactor designs.
</description>
<pubDate>Thu, 05 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157754</guid>
<dc:date>2024-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>NOFIS: Normalizing Flow for Rare Circuit Failure Analysis</title>
<link>https://hdl.handle.net/1721.1/157751</link>
<description>NOFIS: Normalizing Flow for Rare Circuit Failure Analysis
Gao, Zhengqi; Zhang, Dinghuai; Daniel, Luca; Boning, Duane
DAC ’24, June 23–27, 2024, San Francisco, CA, USA
</description>
<pubDate>Sun, 23 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157751</guid>
<dc:date>2024-06-23T00:00:00Z</dc:date>
</item>
<item>
<title>Hardness of Approximate Diameter: Now for Undirected Graphs</title>
<link>https://hdl.handle.net/1721.1/157750</link>
<description>Hardness of Approximate Diameter: Now for Undirected Graphs
Dalirrooyfard, Mina; Li, Ray; Vassilevska Williams, Virginia
Approximating the graph diameter is a basic task of both theoretical and practical interest. A simple folklore algorithm can output a 2-approximation to the diameter in linear time by running BFS from an arbitrary vertex. It has been open whether a better approximation is possible in near-linear time. A series of papers on fine-grained complexity have led to strong hardness results for diameter in directed graphs, culminating in a recent tradeoff curve independently discovered by [Li, STOC'21] and [Dalirrooyfard and Wein, STOC'21], showing that under the Strong Exponential Time Hypothesis (SETH), for any integer k ≥ 2 and ε > 0, a (2 − 1/k − ε)-approximation for diameter in directed m-edge graphs requires m^(1+1/(k−1)−o(1)) time. In particular, the simple linear-time 2-approximation algorithm is optimal for directed graphs. In this paper we prove that the same tradeoff lower bound curve is possible for undirected graphs as well, extending results of [Roditty and Vassilevska W., STOC'13], [Li'20], and [Bonnet, ICALP'21], who proved the first few cases of the curve, k = 2, 3, and 4, respectively. Our result shows in particular that the simple linear-time 2-approximation algorithm is also optimal for undirected graphs. To obtain our result, we extract the core ideas in known reductions and introduce a unification and generalization that could be useful for proving SETH-based hardness for other problems in undirected graphs related to distance computation.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157750</guid>
</item>
<item>
<title>SAUC: Sparsity-Aware Uncertainty Calibration for Spatiotemporal Prediction with Graph Neural Networks</title>
<link>https://hdl.handle.net/1721.1/157749</link>
<description>SAUC: Sparsity-Aware Uncertainty Calibration for Spatiotemporal Prediction with Graph Neural Networks
Zhuang, Dingyi; Bu, Yuheng; Wang, Guang; Wang, Shenhao; Zhao, Jinhua
Quantifying uncertainty is crucial for robust and reliable predictions. However, existing spatiotemporal deep learning mostly focuses on deterministic prediction, overlooking the inherent uncertainty in such prediction. Particularly, highly-granular spatiotemporal datasets are often sparse, posing extra challenges in prediction and uncertainty quantification. To address these issues, this paper introduces a novel post-hoc Sparsity-aware Uncertainty Calibration (SAUC) framework, which calibrates uncertainty in both zero and non-zero values. To develop SAUC, we first modify the state-of-the-art deterministic spatiotemporal Graph Neural Networks (ST-GNNs) to probabilistic ones in the pre-calibration phase. Then we calibrate the probabilistic ST-GNNs for zero and non-zero values using quantile approaches. Through extensive experiments, we demonstrate that SAUC can effectively fit the variance of sparse data and generalize across two real-world spatiotemporal datasets at various granularities. Specifically, our empirical experiments show a 20% reduction in calibration errors in zero entries on the sparse traffic accident and urban crime prediction tasks. Overall, this work demonstrates the theoretical and empirical values of the SAUC framework, thus bridging a significant gap between uncertainty quantification and spatiotemporal prediction.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157749</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Efficient numerical schemes for multidimensional population balance models</title>
<link>https://hdl.handle.net/1721.1/157748</link>
<description>Efficient numerical schemes for multidimensional population balance models
Inguva, Pavan K; Braatz, Richard D
Multidimensional population balance models (PBMs) describe chemical and biological processes having a distribution over two or more intrinsic properties (such as size and age, or two independent spatial variables). The incorporation of additional intrinsic variables into a PBM improves its descriptive capability and can be necessary to capture specific features of interest. As most PBMs of interest cannot be solved analytically, computationally expensive high-order finite difference or finite volume methods are frequently used to obtain an accurate numerical solution. We propose a finite difference scheme based on operator splitting and solving each sub-problem at the limit of numerical stability that achieves a discretization error that is zero for certain classes of PBMs and low enough to be acceptable for other classes. In conjunction with employing specially constructed meshes and variable transformations, the scheme exploits the commutative property of the differential operators present in many classes of PBMs. The scheme has very low computational cost, potentially as low as just memory reallocation. Multiple case studies demonstrate the performance of the proposed scheme.
</description>
<pubDate>Wed, 01 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157748</guid>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Benchmarks for the Classification of Equivalent Circuit Models from Electrochemical Impedance Spectra</title>
<link>https://hdl.handle.net/1721.1/157747</link>
<description>Machine Learning Benchmarks for the Classification of Equivalent Circuit Models from Electrochemical Impedance Spectra
Schaeffer, Joachim; Gasper, Paul; Garcia-Tamayo, Esteban; Gasper, Raymond; Adachi, Masaki; Pablo Gaviria-Cardona, Juan; Montoya-Bedoya, Simon; Bhutani, Anoushka; Schiek, Andrew; Goodall, Rhys; Findeisen, Rolf; Braatz, Richard D; Engelke, Simon
Analysis of Electrochemical Impedance Spectroscopy (EIS) data for electrochemical systems often consists of defining an Equivalent Circuit Model (ECM) using expert knowledge and then optimizing the model parameters to deconvolute various resistance, capacitive, inductive, or diffusion responses. For small data sets, this procedure can be conducted manually; however, it is not feasible to manually define a proper ECM for extensive data sets with a wide range of EIS responses. Automatic identification of an ECM would substantially accelerate the analysis of large sets of EIS data. We showcase machine learning methods to classify the ECMs of 9,300 impedance spectra provided by QuantumScape for the BatteryDEV hackathon. The best-performing approach is a gradient-boosted tree model utilizing a library to automatically generate features, followed by a random forest model using the raw spectral data. A convolutional neural network using boolean images of Nyquist representations is presented as an alternative, although it achieves a lower accuracy. We publish the data and open source the associated code. The approaches described in this article can serve as benchmarks for further studies. A key remaining challenge is the identifiability of the labels, underlined by the model performances and the comparison of misclassified spectra.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157747</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Retrieval of refractivity fields from GNSS tropospheric delays: theoretical and data-based evaluation of collocation methods and comparisons with GNSS tomography</title>
<link>https://hdl.handle.net/1721.1/157746</link>
<description>Retrieval of refractivity fields from GNSS tropospheric delays: theoretical and data-based evaluation of collocation methods and comparisons with GNSS tomography
Shehaj, Endrit; Geiger, Alain; Rothacher, Markus; Moeller, Gregor
This paper focuses on the retrieval of refractivity fields from GNSS measurements by means of least-squares collocation. Collocation adjustment estimates parameters that relate delays and refractivity without relying on a grid. It contains functional and stochastic models that define the characteristics of the retrieved refractivity fields. This work aims to emphasize the capabilities and limitations of the collocation method in modeling refractivity and to present it as a valuable alternative to GNSS tomography. Initially, we analyze the stochastic models in collocation and compare the theoretical errors of collocation with those of tomography. We emphasize the low variability of collocation formal variances/covariances compared to tomography and its lower dependence on a-priori fields. Then, based on real and simulated data, we investigate the importance of station resolution and station heights for collocation. Increasing the network resolution, for example, from 10 to 2 km, results in improved a-posteriori statistics, including a 10% reduction in the error statistic for the retrieved refractivity up to 6 km. In addition, using additional stations at higher altitudes has an impact on the retrieved refractivity fields of about 1 ppm in terms of standard deviation up to 6 km, and a bias reduction of more than 3 ppm up to 3 km. Furthermore, we compare refractivity fields retrieved through tomography and collocation, where data of the COSMO weather model are utilized in a closed-loop validation mode to simulate tropospheric delays and validate the retrieved profiles. While tomography estimates are less biased, collocation captures relative changes in refractivity more effectively among the voxels within one height level. Finally, we apply tomography and collocation to test their capabilities to detect an approaching weather front. Both methods can sense the weather front, but their atmospheric structures appear more similar when the GNSS network has a well-distributed height coverage.
</description>
<pubDate>Sat, 30 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157746</guid>
<dc:date>2024-11-30T00:00:00Z</dc:date>
</item>
<item>
<title>Structural molecular modeling of bacterial integral membrane protein enzymes and their AlphaFold2 predicted water-soluble QTY variants</title>
<link>https://hdl.handle.net/1721.1/157745</link>
<description>Structural molecular modeling of bacterial integral membrane protein enzymes and their AlphaFold2 predicted water-soluble QTY variants
Sajeev-Sheeja, Akash; Zhang, Shuguang
Context Beta-barrel enzymes are an important area of study in the field of structural biology. These proteins serve crucial roles, acting as porins, transporters, enzymes, virulence factors, and receptors. Recent research has unveiled a novel role for beta-barrel enzymes in the bacterial integral membrane as sentinels. They remain inactive when the integral membrane is intact but activate to carry out enzymatic catalysis in response to host immune responses and antibiotics that breach this barrier. Understanding their structure and function is pivotal in grasping their sentinel role in the bacterial integral membrane. Here we present our structural molecular modeling analyses on four bacterial integral membrane beta-barrel enzymes: (a) OMPLA, (b) OmpT, (c) PagP from E. coli, and (d) PagL from Pseudomonas aeruginosa. We superposed the structures of the native beta-barrel integral membrane enzymes with their AlphaFold2-predicted QTY variant structures. Despite the replacement of at least 22.95% of the amino acids in the transmembrane regions, the superposed structures displayed notable structural similarity, indicated by RMSD values ranging from 0.181 Å to 0.286 Å. We also analyze the hydrophobicity patches and the enhanced hydrophilic surfaces. Our research provides insights into the structural similarity of hydrophobic and hydrophilic beta-barrel enzymes, validating the utility of the QTY code for investigating beta-barrel membrane enzymes. Our results not only demonstrate that the QTY code serves as a straightforward tool for designing water-soluble membrane proteins across various biological contexts, but may also stimulate experiments to validate our molecular modeling studies.
Methods All QTY variant beta-barrel enzyme structure predictions were performed using the AlphaFold2 program ( https://github.com/sokrypton/ColabFold ) following the provided instructions. Computations were carried out on an 11th Gen Intel Core i5-11300H processor with 16 GB RAM, Iris Xe Graphics, and a 512 GB NVMe SSD. The structures are publicly available on the AlphaFold2 database ( https://alphafold.ebi.ac.uk ) at the European Bioinformatics Institute (EBI). A custom Python script was used to extract the relevant information from the UniProt database. The native sequences for these enzymes were retrieved from UniProt ( https://www.uniprot.org ), and the predicted variant structures were then superposed with the native structures using PyMOL ( https://pymol.org/2/ ) for structural analysis and comparison. This work leverages the public databases PDB and UniProt and the open-source software AlphaFold2 and PyMOL to computationally model and analyze QTY variant integral membrane beta-barrel enzyme structures.
</description>
<pubDate>Thu, 28 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157745</guid>
<dc:date>2024-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Towards real-time monitoring of insect species populations</title>
<link>https://hdl.handle.net/1721.1/157744</link>
<description>Towards real-time monitoring of insect species populations
Venverloo, Titus; Duarte, Fábio
Insect biodiversity and abundance are in global decline, potentially leading to a crisis with profound ecological and economic consequences. Methods and technologies to monitor insect species to aid in preservation efforts are rapidly being developed, yet their adoption has been slow and focused on specific use cases. We propose a computer vision model that works towards multi-objective insect species identification in real-time and on a large scale. We leverage an image data source with 16 million instances and a recent improvement in the YOLO computer vision architecture to present a quick and open-access method to develop visual AI models to monitor insect species across climatic regions.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157744</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Blocking of counter-partisan accounts drives political assortment on Twitter</title>
<link>https://hdl.handle.net/1721.1/157743</link>
<description>Blocking of counter-partisan accounts drives political assortment on Twitter
Martel, Cameron; Mosleh, Mohsen; Yang, Qi; Zaman, Tauhid; Rand, David G
There is strong political assortment of Americans on social media networks. This is typically attributed to preferential tie formation (i.e. homophily) among those with shared partisanship. Here, we demonstrate an additional factor beyond homophily driving assorted networks: preferential prevention of social ties. In two field experiments on Twitter, we created human-looking bot accounts that identified as Democrats or Republicans, and then randomly assigned users to be followed by one of these accounts. In addition to preferentially following-back copartisans, we found that users were 12 times more likely to block counter-partisan accounts compared to copartisan accounts in the first experiment, and 4 times more likely to block counter-partisan accounts relative to a neutral account or a copartisan account in the second experiment. We then replicated these findings in a survey experiment and found evidence of a key motivation for blocking: wanting to avoid seeing any content posted by the blocked user. Additionally, we found that Democrats preferentially blocked counter-partisans more than Republicans, and that this asymmetry was likely due to blocking accounts who post low-quality or politically slanted content (rather than an asymmetry in identity-based blocking). Our results demonstrate that preferential blocking of counter-partisans is an important phenomenon driving political assortment on social media.
</description>
<pubDate>Tue, 30 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157743</guid>
<dc:date>2024-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Preserved functional organization of auditory cortex in two individuals missing one temporal lobe from infancy</title>
<link>https://hdl.handle.net/1721.1/157742</link>
<description>Preserved functional organization of auditory cortex in two individuals missing one temporal lobe from infancy
Regev, Tamar I; Lipkin, Benjamin; Boebinger, Dana; Paunov, Alexander; Kean, Hope; Norman-Haignere, Sam V; Fedorenko, Evelina
Human cortical responses to natural sounds, measured with fMRI, can be approximated as the weighted sum of a small number of canonical response patterns (components), each having interpretable functional and anatomical properties. Here, we asked whether this organization is preserved in cases where only one temporal lobe is available due to early brain damage by investigating a unique family: one sibling missing their left temporal lobe from infancy, another missing the right temporal lobe from infancy, and a third anatomically neurotypical. None of the siblings manifested behavioral deficits. We analyzed fMRI responses to diverse natural sounds within the intact hemispheres of these individuals and compared them to 12 neurotypical participants. All siblings manifested typical-like auditory responses in their intact hemispheres. These results suggest that the development of the auditory cortex in each hemisphere does not depend on the existence of the other hemisphere, highlighting the redundancy and equipotentiality of the bilateral auditory system.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157742</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>On the use of physics in machine learning for manufacturing process inspection</title>
<link>https://hdl.handle.net/1721.1/157741</link>
<description>On the use of physics in machine learning for manufacturing process inspection
Barbastathis, George; Zhang, Qihang; Pandit, Ajinkya; Tang, Wenlong; Papageorgiou, Charles; Braatz, Richard; Myerson, Allan S; Tan, Bingyao; Schmetterer, Leopold
We discuss the use of machine learning in computational imaging for manufacturing process inspection and control. In a recent article we described a physics-enhanced auto-correlation based estimator (Peace) for quantitative speckle. We derived an explicit forward relationship between the Particle Size Distribution (PSD) and the speckle autocorrelation for particle sizes significantly larger than the wavelength (×100 to approximately ×1,000). We subsequently trained a machine learning kernel to invert the autocorrelation and obtain the PSD, using the explicit forward model to reduce the number of experimentally acquired examples. In this talk, we present an expanded discussion of Peace and its properties, including spatial and temporal sampling and accuracy, and more general applications.
SPIE Optical Metrology, 2023, Munich, Germany
</description>
<pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157741</guid>
<dc:date>2023-08-11T00:00:00Z</dc:date>
</item>
<item>
<title>A Direct Optimization Algorithm for Input-Constrained MPC</title>
<link>https://hdl.handle.net/1721.1/157705</link>
<description>A Direct Optimization Algorithm for Input-Constrained MPC
Wu, Liang; Braatz, Richard D
Providing an execution time certificate is a pressing requirement when deploying Model Predictive Control (MPC) in real-time embedded systems such as microcontrollers. Real-time MPC requires that its worst-case (maximum) execution time be theoretically guaranteed to be smaller than the sampling time in closed loop. This technical note considers input-constrained MPC problems and exploits the structure of the resulting box-constrained QPs. We then propose a cost-free and data-independent initialization strategy, which enables us, for the first time, to remove the initialization assumption of feasible full-Newton interior-point algorithms. We prove that the number of iterations of our proposed algorithm is only dimension-dependent (data-independent), simple to calculate, and exact (not worst-case), with the value ⌈log(2n/ϵ) / (−2 log(√(2n) / (√(2n) + √2 − 1)))⌉ + 1, where n denotes the problem dimension and ϵ denotes the constant stopping tolerance. These features enable our algorithm to trivially certify the execution time of nonlinear MPC (via online linearized schemes) or adaptive MPC problems. The execution-time-certified capability of our algorithm is theoretically and numerically validated through an open-loop unstable AFTI-16 example.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157705</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Probing the nature of the χc1(3872) state using radiative decays</title>
<link>https://hdl.handle.net/1721.1/157704</link>
<description>Probing the nature of the χc1(3872) state using radiative decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The radiative decays χc1(3872) → ψ(2S)γ and χc1(3872) → J/ψγ are used to probe the nature of the χc1(3872) state using proton-proton collision data collected with the LHCb detector, corresponding to an integrated luminosity of 9 fb−1. Using the B+ → χc1(3872)K+ decay, the χc1(3872) → ψ(2S)γ process is observed for the first time and the ratio of its partial width to that of the χc1(3872) → J/ψγ decay is measured to be Γ(χc1(3872) → ψ(2S)γ) / Γ(χc1(3872) → J/ψγ) = 1.67 ± 0.21 ± 0.12 ± 0.04, where the first uncertainty is statistical, the second systematic, and the third is due to the uncertainties on the branching fractions of the ψ(2S) and J/ψ mesons. The measured ratio makes the interpretation of the χc1(3872) state as a pure D0D̄*0 + D̄0D*0 molecule questionable and strongly indicates a sizeable compact charmonium or tetraquark component within the χc1(3872) state.
</description>
<pubDate>Thu, 21 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157704</guid>
<dc:date>2024-11-21T00:00:00Z</dc:date>
</item>
<item>
<title>Multifunctional lightweight autonomous vehicles: an agent-based study</title>
<link>https://hdl.handle.net/1721.1/157703</link>
<description>Multifunctional lightweight autonomous vehicles: an agent-based study
Coretti Sanchez, Naroa; Larson, Kent
In mobility-on-demand services, the number of vehicles needed is often determined by peak demand during rush hours, leading to prolonged vehicle idle times during off-peak periods. This surplus capacity presents an opportunity for vehicles to perform additional tasks, potentially enhancing system efficiency and reducing the overall number of vehicles needed in cities. Leveraging agent-based modeling, we evaluate the effectiveness of vehicles catering to on-demand rides and food deliveries in two real-life scenarios: Cambridge, MA, USA, and San Sebastian, Gipuzkoa, Spain. The results show that multifunctional behavior can lead to reduced fleet sizes, with context-specific exceptions. Additionally, a strategic dispatching algorithm is introduced that demonstrates reductions in wait times and overall distances traveled. This research contributes to the understanding of the performance of multifunctional fleets in diverse urban contexts, informing the development of sustainable and resource-efficient mobility systems.
</description>
<pubDate>Wed, 27 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157703</guid>
<dc:date>2024-11-27T00:00:00Z</dc:date>
</item>
<item>
<title>Acceleration by stepsize hedging: Silver Stepsize Schedule for smooth convex optimization</title>
<link>https://hdl.handle.net/1721.1/157702</link>
<description>Acceleration by stepsize hedging: Silver Stepsize Schedule for smooth convex optimization
Altschuler, Jason M.; Parrilo, Pablo A.
We provide a concise, self-contained proof that the Silver Stepsize Schedule proposed in our companion paper directly applies to smooth (non-strongly) convex optimization. Specifically, we show that with these stepsizes, gradient descent computes an ε-minimizer in O(ε^(−log_ρ 2)) = O(ε^(−0.7864)) iterations, where ρ = 1 + √2 is the silver ratio. This is intermediate between the textbook unaccelerated rate O(ε^(−1)) and the accelerated rate O(ε^(−1/2)) due to Nesterov in 1983. The Silver Stepsize Schedule is a simple explicit fractal: the i-th stepsize is 1 + ρ^(ν(i)−1), where ν(i) is the 2-adic valuation of i. The design and analysis are conceptually identical to the strongly convex setting in our companion paper, but simplify remarkably in this specific setting.
</description>
<pubDate>Mon, 25 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157702</guid>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</item>
<item>
<title>The price elasticity of natural gas demand of small consumers in Germany during the energy crisis 2022</title>
<link>https://hdl.handle.net/1721.1/157701</link>
<description>The price elasticity of natural gas demand of small consumers in Germany during the energy crisis 2022
Jamissen, David; Vatne, Johanne; Holz, Franziska; Neumann, Anne
Understanding how consumers respond to turbulent market conditions is crucial for planning security of natural gas supply. This paper estimates the price elasticity of demand of small consumers in Germany in the period with both high price fluctuations and a fear of natural gas shortage in the aftermath of the Russian invasion of Ukraine. Using granular data between 2018 and 2023, we estimate an Auto Regressive Distributed Lag (ARDL) time series cointegrating model. We find a price elasticity of demand for natural gas of -0.01 for wholesale prices and -0.04 for retail prices. Additionally, we quantify the effects of weather conditions and of public awareness of the energy crisis. The results suggest (i) that extreme price changes would be required to trigger short-term demand adjustments and (ii) that public attention to the crisis situation played an important role.
</description>
<pubDate>Thu, 28 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157701</guid>
<dc:date>2024-11-28T00:00:00Z</dc:date>
</item>
<item>
<title>Exact algorithms for continuous pricing with advanced discrete choice demand models</title>
<link>https://hdl.handle.net/1721.1/157700</link>
<description>Exact algorithms for continuous pricing with advanced discrete choice demand models
Haering, Tom; Legault, Robin; Torres, Fabian; Ljubić, Ivana; Bierlaire, Michel
We present a spatial Branch and Bound and spatial Branch and Benders Decomposition approach together with the Breakpoint Exact Algorithm (BEA) to tackle the uncapacitated choice-based pricing problem (CPP) where demand is captured by a discrete choice model (DCM) based on the random utility principle. We leverage problem characteristics to reformulate the state-of-the-art simulation-based formulation of the CPP as a mixed-integer linear program (MILP) into a non-convex quadratically constrained quadratic program (QCQP), and then into a non-convex QCQP with linear objective (QCQP-L). We solve this reformulation with an efficient spatial Branch and Bound procedure utilizing the McCormick envelope for relaxations, which are then solved using Benders decomposition. We further exploit utility breakpoints to develop the BEA, which scales polynomially in the number of customers and draws, providing a fast option for low numbers of prices. Our methods are evaluated against solving the MILP, QCQP, or QCQP-L with GUROBI on a mixed logit (ML) parking space operator case study. Our methods are several orders of magnitude faster than the MILP when optimizing one or two prices and reduce computational time drastically for larger numbers of prices. When comparing to algorithms tailored for the CPP with ML demand specifically, our approaches significantly outperform the state of the art. Our methodology suits all choice-based optimization problems with linear-in-price utilities, given any DCM.
</description>
<pubDate>Tue, 26 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157700</guid>
<dc:date>2024-11-26T00:00:00Z</dc:date>
</item>
<item>
<title>A nonparametric learning framework for nonlinear robust output regulation</title>
<link>https://hdl.handle.net/1721.1/157698</link>
<description>A nonparametric learning framework for nonlinear robust output regulation
Wang, Shimin; Guay, Martin; Chen, Zhiyong; Braatz, Richard D
A nonparametric learning solution framework is proposed for the global nonlinear robust output regulation problem. We first extend the assumption that the steady-state generator is linear in the exogenous signal to the more relaxed assumption that it is polynomial in the exogenous signal. Additionally, a nonparametric learning framework is proposed to eliminate the construction of an explicit regressor, as required in the adaptive method, which can potentially simplify the implementation and reduce the computational complexity of existing methods. With the help of the proposed framework, the robust nonlinear output regulation problem can be converted into a robust non-adaptive stabilization problem for the augmented system with integral input-to-state stable (iISS) inverse dynamics. Moreover, a dynamic gain approach can adaptively raise the gain to a sufficiently large constant to achieve stabilization without requiring any a priori knowledge of the uncertainties appearing in the dynamics of the exosystem and the system. Furthermore, we apply the nonparametric learning framework to globally reconstruct and estimate multiple sinusoidal signals with unknown frequencies without the need for adaptive parametric techniques. An explicit nonlinear mapping can directly provide the estimated parameters, which will exponentially converge to the unknown frequencies. Finally, a feedforward control design is proposed to solve the linear output regulation problem using the nonparametric learning framework. Two simulation examples are provided to illustrate the effectiveness of the theoretical results.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157698</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Economic Nonlinear Model Predictive Control of Continuous Viral Bioreactors</title>
<link>https://hdl.handle.net/1721.1/157697</link>
<description>Economic Nonlinear Model Predictive Control of Continuous Viral Bioreactors
Inguva, Pavan K; Paoli, Luc T; Braatz, Richard D
Viral particle systems are integral parts of modern biotechnology, finding use in vaccines, drug delivery platforms, and recombinant protein production. Continuous manufacturing of these systems can offer improved manufacturability and quality control. However, viral systems often have complex kinetics which can introduce undesirable process dynamics and lower product titers in continuous operation. This article explores the use of economic nonlinear dynamic optimization and model predictive control to achieve multiple process objectives such as maximizing productivity and/or purity. Economic nonlinear model predictive control is also demonstrated to robustly control the bioreactor under plant-model mismatch in different scenarios.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157697</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Model Predictive Control Parameters via Bayesian Optimization for Battery Fast Charging</title>
<link>https://hdl.handle.net/1721.1/157696</link>
<description>Learning Model Predictive Control Parameters via Bayesian Optimization for Battery Fast Charging
Hirt, Sebastian; Höhl, Andreas; Schaeffer, Joachim; Pohlodek, Johannes; Braatz, Richard D; Findeisen, Rolf
Tuning parameters in model predictive control (MPC) presents significant challenges, particularly when there is a notable discrepancy between the controller’s predictions and the actual behavior of the closed-loop plant. This mismatch may stem from factors like substantial model-plant differences, limited prediction horizons that do not cover the entire time of interest, or unforeseen system disturbances. Such mismatches can jeopardize both performance and safety, including constraint satisfaction. Traditional methods address this issue by modifying the finite horizon cost function to better reflect the overall operational cost, learning parts of the prediction model from data, or implementing robust MPC strategies, which might be either computationally intensive or overly cautious. As an alternative, directly optimizing or learning the controller parameters to enhance closed-loop performance has been proposed. We apply Bayesian optimization for efficient learning of unknown model parameters and parameterized constraint backoff terms, aiming to improve closed-loop performance of battery fast charging. This approach establishes a hierarchical control framework where Bayesian optimization directly fine-tunes closed-loop behavior towards a global and long-term objective, while MPC handles lower-level, short-term control tasks. For lithium-ion battery fast charging, we show that the learning approach not only ensures safe operation but also maximizes closed-loop performance. This includes maintaining the battery’s operation below its maximum terminal voltage and reducing charging times, all achieved using a standard nominal MPC model with a short horizon and notable initial model-plant mismatch.
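The idea of learning a constraint-backoff term against a closed-loop objective can be sketched with a toy loop. Everything below is an invented illustration, not the paper's plant or controller: a crude charge model, an unmodeled ohmic voltage rise standing in for model-plant mismatch, and plain grid search standing in for the Bayesian optimizer.

```python
def closed_loop_cost(backoff, v_lim=4.2):
    """Time to full charge plus a penalty for each step spent above v_lim.
    All constants are illustrative assumptions."""
    q, t, dt, penalty = 0.0, 0.0, 0.01, 0.0
    while not q >= 1.0:
        v_model = 3.0 + 0.8 * q                              # controller's crude model
        u = min(2.0, max(0.0, 5.0 * (v_lim - backoff - v_model)))
        v_plant = v_model + 0.3 * u                          # ohmic rise the model misses
        if v_plant > v_lim:
            penalty = penalty + 1.0                          # constraint violation
        q = q + u * dt
        t = t + dt
        if t > 60.0:
            break                                            # give up: never fully charged
    return t + penalty

# Grid search stands in for Bayesian optimization over the backoff parameter.
best = min([0.0, 0.1, 0.2, 0.3, 0.4], key=closed_loop_cost)
```

Small backoffs charge fast but violate the voltage limit; large ones are safe but slow (or never finish). The learner's job is to land on the smallest backoff that keeps the mismatched plant safe.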
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157696</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multidose transient transfection of human embryonic kidney 293 cells modulates recombinant adeno‐associated virus2/5 Rep protein expression and influences the enrichment fraction of filled capsids</title>
<link>https://hdl.handle.net/1721.1/157695</link>
<description>Multidose transient transfection of human embryonic kidney 293 cells modulates recombinant adeno‐associated virus2/5 Rep protein expression and influences the enrichment fraction of filled capsids
Srinivasan, Prasanna; Canova, Christopher T; Sha, Sha; Nguyen, Tam NT; Joseph, John; Sangerman, Jose; Maloney, Andrew J; Katsikis, Georgios; Ou, Rui Wen; Hong, Moo Sun; Ng, Jaclyn; Yuan, Arella; Antov, Daniel; Song, Sally; Chen, Wenyu; Neufeld, Caleb; Wolfrum, Jacqueline M; Barone, Paul W; Sinskey, Anthony J; Springs, Stacy L; Braatz, Richard D
Recombinant adeno-associated virus (rAAV) is a commonly used in vivo gene therapy vector because of its nonpathogenicity, long-term transgene expression, broad tropism, and ability to transduce both dividing and nondividing cells. However, rAAV vector production via transient transfection of mammalian cells typically yields a low fraction of filled-to-total capsids (~1%–30% of total capsids produced). Analysis of our previously developed mechanistic model for rAAV2/5 production attributed these low fill fractions to a poorly coordinated timeline between capsid synthesis and viral DNA replication and the repression of later phase capsid formation by Rep proteins. Here, we extend the model by quantifying the expression dynamics of total Rep proteins and their influence on the key steps of rAAV2/5 production using a multiple dosing transfection of human embryonic kidney 293 (HEK293) cells. We report that the availability of preformed empty capsids and viral DNA copies per cell are not limiting to the capsid-filling reaction. However, optimal expression of Rep proteins (&lt;240 ± 13 ag per cell) enables enrichment of the filled capsid population (&gt;12% of total capsids/cell) upstream. Our analysis suggests increased enrichment of filled capsids via regulating the expression of Rep proteins is possible but at the expense of per cell capsid titer in a triple plasmid transfection. Our study reveals an intrinsic limitation of scaling rAAV2/5 vector genome (vg) production and underscores the need for approaches that allow for regulating the expression of Rep proteins to maximize vg titer per cell upstream.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157695</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Non-invasive estimation of the powder size distribution from a single speckle image</title>
<link>https://hdl.handle.net/1721.1/157694</link>
<description>Non-invasive estimation of the powder size distribution from a single speckle image
Zhang, Qihang; Pandit, Ajinkya; Liu, Zhiguang; Guo, Zhen; Muddu, Shashank; Wei, Yi; Pereg, Deborah; Nazemifard, Neda; Papageorgiou, Charles; Yang, Yihui; Tang, Wenlong; Braatz, Richard D; Myerson, Allan S; Barbastathis, George
Non-invasive characterization of powders may take one of two approaches: imaging and counting individual particles; or relying on scattered light to estimate the particle size distribution (PSD) of the ensemble. The former approach runs into practical difficulties, as the system must conform to the working distance and other restrictions of the imaging optics. The latter approach requires an inverse map from the speckle autocorrelation to the particle sizes. The principle relies on the pupil function determining the basic sidelobe shape, whereas the particle size spread modulates the sidelobe intensity. We recently showed that it is feasible to invert the speckle autocorrelation and obtain the PSD using a neural network, trained efficiently through a physics-informed semi-generative approach. In this work, we eliminate one of the most time-consuming steps of our previous method by engineering the pupil function. By judiciously blocking portions of the pupil, we sacrifice some photons but in return we achieve much enhanced sidelobes and, hence, higher sensitivity to the change of the size distribution. The result is a 60 × reduction in total acquisition and processing time, or 0.25 seconds per frame in our implementation. Almost real-time operation in our system is not only more appealing toward rapid industrial adoption, it also paves the way for quantitative characterization of complex spatial or temporal dynamics in drying, blending, and other chemical and pharmaceutical manufacturing processes.
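The speckle autocorrelation that serves as the inversion input can be computed from a single frame via the Wiener-Khinchin relation (autocorrelation equals the inverse FFT of the power spectrum). A minimal sketch; the trained network mapping this to a PSD is not reproduced here.

```python
import numpy as np

def speckle_autocorrelation(img):
    """Centered, peak-normalized autocorrelation of a speckle image."""
    f = np.fft.fft2(img - img.mean())        # remove DC so sidelobes stand out
    ac = np.fft.ifft2(np.abs(f) ** 2).real   # Wiener-Khinchin theorem
    return np.fft.fftshift(ac / ac[0, 0])    # peak of 1.0 moved to the center
```

The sidelobe shape of this map is set by the pupil function, while the particle size spread modulates sidelobe intensity, which is the sensitivity the pupil engineering in this work enhances.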
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157694</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Population Balance Model-based Dynamic Multiobjective Optimization of Yeast Cell Manufacturing</title>
<link>https://hdl.handle.net/1721.1/157693</link>
<description>Population Balance Model-based Dynamic Multiobjective Optimization of Yeast Cell Manufacturing
Ganko, Krystian; Berliner, Marc D; Rhyu, Jinwook; Wu, Liang; Braatz, Richard D; Leyffer, Sven
Biological systems play a key role in many advanced manufacturing processes, of which many have interesting nonlinear dynamics. We investigate a continuous yeast cell manufacturing process that produces sustained oscillations in outputs under nominal conditions. Using a population balance model to perform dynamic optimization with multiple objectives and observability constraints, we quantify tradeoffs on the Pareto surface for varying the extent of process oscillations that the decision-maker deems tolerable (or desirable). Numerical optimal control design for oscillatory distributed parameter systems is discussed within the context of both dynamic optimization and on-line nonlinear model predictive control strategies.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157693</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>How Do Transformers Model Physics? Investigating the Simple Harmonic Oscillator</title>
<link>https://hdl.handle.net/1721.1/157692</link>
<description>How Do Transformers Model Physics? Investigating the Simple Harmonic Oscillator
Kantamneni, Subhash; Liu, Ziming; Tegmark, Max
How do transformers model physics? Do transformers model systems with interpretable analytical solutions, or do they create an “alien physics” that is difficult for humans to decipher? We have taken a step towards demystifying this larger puzzle by investigating the simple harmonic oscillator (SHO), ẍ + 2γẋ + ω₀²x = 0, one of the most fundamental systems in physics. Our goal was to identify the methods transformers use to model the SHO, and to do so we hypothesized and evaluated possible methods by analyzing the encoding of these methods’ intermediates. We developed four criteria for the use of a method within the simple test bed of linear regression, where our method was y = wx and our intermediate was w: (1) Can the intermediate be predicted from hidden states? (2) Is the intermediate’s encoding quality correlated with the model performance? (3) Can the majority of variance in hidden states be explained by the intermediate? (4) Can we intervene on hidden states to produce predictable outcomes? Armed with these two correlational (1, 2), weak causal (3), and strong causal (4) criteria, we determined that transformers use known numerical methods to model the trajectories of the simple harmonic oscillator, specifically the matrix exponential method. Our analysis framework can conveniently extend to high-dimensional linear systems and nonlinear systems, which we hope will help reveal the “world model” hidden in transformers.
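The matrix exponential method the abstract identifies can be stated in a few lines: write the damped SHO as a first-order linear system s' = A s with state s = (x, v), then propagate exactly with the matrix exponential. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.linalg import expm

def sho_propagate(x0, v0, t, gamma, omega0):
    """Exact state (x, v) of x'' + 2*gamma*x' + omega0**2 * x = 0 at time t,
    obtained as expm(A*t) applied to the initial state."""
    A = np.array([[0.0, 1.0],
                  [-omega0 ** 2, -2.0 * gamma]])
    x, v = expm(A * t) @ np.array([x0, v0])
    return x, v
```

Because expm(A*(t1+t2)) = expm(A*t1) @ expm(A*t2), stepping twice by t/2 reproduces a single step by t, the semigroup property a learned propagator would also have to respect.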
</description>
<pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157692</guid>
<dc:date>2024-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic Expansion and Merging of the Equatorial Ionization Anomaly During the 10–11 May 2024 Super Geomagnetic Storm</title>
<link>https://hdl.handle.net/1721.1/157691</link>
<description>Dynamic Expansion and Merging of the Equatorial Ionization Anomaly During the 10–11 May 2024 Super Geomagnetic Storm
Aa, Ercha; Chen, Yanhong; Luo, Bingxian
Remote Sens. 2024, 16(22), 4290; https://doi.org/10.3390/rs16224290
This study investigates the responses of the equatorial and low-latitude ionosphere in the American–Atlantic longitude sector during the super geomagnetic storm that occurred on 10–11 May 2024. The investigation utilizes multi-instrument datasets, including ground-based observations (GNSS TEC, ionosonde, and Fabry–Perot interferometer) as well as space-borne satellite measurements (GOLD, Swarm, DMSP, and TIMED). Our findings reveal significant day-to-day variations in the storm-time equatorial ionization anomaly (EIA), summarized as follows: (1) During the main phase of the storm, the low- and mid-latitude ionosphere experienced a positive storm, with TEC drastically enhanced by 50–100% within a few hours. The EIA crests exhibited a substantial poleward expansion, reaching as high as ±35° MLAT. This expansion was caused by the enhanced fountain effect driven by penetration electric fields, along with increased ambipolar diffusion due to transient meridional wind surges. (2) During the recovery phase of the storm, the global ionosphere was characterized by a substantial negative storm with a 50–80% depletion in TEC. The EIA crests were notably suppressed and merged into a single equatorial band, which can be attributed to the composition change effect and the influence of disturbance dynamo electric fields. These results illustrate the complex processes of magnetosphere–ionosphere–thermosphere coupling during a superstorm, highlighting the significant impacts of space weather on the global ionosphere.
</description>
<pubDate>Mon, 18 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157691</guid>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Inertia-Driven Oil Transport Inside the Three-Piece Oil Control Ring of Internal Combustion Engines</title>
<link>https://hdl.handle.net/1721.1/157690</link>
<description>Modeling Inertia-Driven Oil Transport Inside the Three-Piece Oil Control Ring of Internal Combustion Engines
Yang, Tsung-Yu; Li, Mo; Tian, Tian
The three-piece oil control ring (TPOCR), traditionally used in light-duty gasoline engines, is becoming a viable option for heavy-duty gas and hydrogen engines due to its ability to control lubricating oil consumption (LOC) under throttled conditions. Understanding the distribution of oil inside the TPOCR groove, as well as the effects of rail gap and drain hole positions, is critical for optimizing TPOCR and groove designs. In this work, a one-dimensional oil distribution model was developed to simulate inertia-driven oil transport in the TPOCR groove. A novel approach was proposed by first dividing the TPOCR into units composed of a pair of expander pitches. Then, the relationship between the oil outflow rate of the unit and its oil mass was established with the help of three-dimensional two-phase computational fluid dynamics (CFD) simulations. This relationship was then used to model one-dimensional oil transport along the circumference of the TPOCR groove. Incorporating the boundary conditions at the rail gaps and drain holes, this simple model can complete computations for 10,000 cycles within a few seconds, allowing for quick evaluation of transient behavior and design iterations. Studies on low-load conditions show that the model, with reasonable adjustment for the boundary conditions, can match the oil distribution patterns observed in visualization experiments. This is the first step toward studying oil transport in the TPOCR groove before involving the effects of gas flows.
</description>
<pubDate>Sat, 16 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157690</guid>
<dc:date>2024-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Unsupervised Canine Emotion Recognition Using Momentum Contrast</title>
<link>https://hdl.handle.net/1721.1/157689</link>
<description>Unsupervised Canine Emotion Recognition Using Momentum Contrast
Bhave, Aarya; Hafner, Alina; Bhave, Anushka; Gloor, Peter A.
We describe a system for identifying dog emotions based on dogs’ facial expressions and body posture. Towards that goal, we built a dataset with 2184 images of ten popular dog breeds, grouped into seven similarly sized primal mammalian emotion categories defined by neuroscientist and psychobiologist Jaak Panksepp as ‘Exploring’, ‘Sadness’, ‘Playing’, ‘Rage’, ‘Fear’, ‘Affectionate’ and ‘Lust’. We modified the contrastive learning framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) to train it on our original dataset and achieved an accuracy of 43.2% against a baseline of 14%. We also trained this model on a second publicly available dataset, which resulted in an accuracy of 48.46% but had a baseline of 25%. We compared our unsupervised approach with a supervised model based on a ResNet50 architecture. This model, when tested on our dataset with the seven Panksepp labels, resulted in an accuracy of 74.32%.
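The contrastive objective underlying MoCo can be illustrated independently of the dog dataset. A minimal InfoNCE sketch with arbitrary vectors and temperature, not the authors' training code:

```python
import numpy as np

def info_nce(query, pos_key, neg_keys, tau=0.07):
    """InfoNCE loss: small when the query embedding matches the positive key
    more closely than any negative key."""
    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)
    q = unit(query)
    keys = [pos_key] + list(neg_keys)              # positive sits at index 0
    logits = np.array([np.dot(q, unit(k)) / tau for k in keys])
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

In MoCo the negative keys come from a large momentum-updated queue rather than the current batch, which is the framework's main departure from plain contrastive learning.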
</description>
<pubDate>Sat, 16 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157689</guid>
<dc:date>2024-11-16T00:00:00Z</dc:date>
</item>
<item>
<title>Renewing Our Focus on Vulnerable Populations Among People Living with HIV</title>
<link>https://hdl.handle.net/1721.1/157688</link>
<description>Renewing Our Focus on Vulnerable Populations Among People Living with HIV
Ayieko, James; Thorp, Marguerite; Ghebremichael, Musie
The global HIV landscape has changed over the past few decades, with great milestones achieved in both HIV treatment and prevention. Access to lifesaving antiretroviral therapy (ART) has markedly expanded, with a total of 30.7 million (27 million–31.9 million) out of 39.9 million (36.1 million–44.6 million) people living with HIV accessing the medication in 2023 [1]. Continued expansion of access to, initiation of, and adherence to treatment is crucial in achieving control of the HIV pandemic, given the strong evidence that treatment is prevention [2]. Despite these marked advances, 28% of people living with HIV (PLHIV) are reported to be virally unsuppressed [1]. Viral non-suppression is associated with increased risk of progression to AIDS and portends poor outcomes for PLHIV [3,4]. Additionally, viral non-suppression increases the risk of onward transmission of HIV, reversing the gains made in combating the pandemic [3]. The risk of viral non-suppression is greater in certain groups. This Special Issue focuses on exploring HIV support, care, and treatment for vulnerable populations, or those at elevated risk of viral non-suppression and poor health outcomes.&#13;
We solicited articles on this topic and received submissions from diverse settings and authors of different backgrounds and training. The interest and importance of this topic are revealed in the diversity of articles that were submitted and the disciplines that showed interest. This Special Issue contains ten articles that advance our understanding of vulnerable populations, challenge the current thinking about vulnerable populations, and propose bold interventions to address the barriers to HIV care engagement throughout the cascade.&#13;
The articles in this Special Issue bring to the fore three critical questions about vulnerable groups: What makes one vulnerable? What are the threats to care engagement for vulnerable people? And what health care system changes are needed to accommodate vulnerable people? These questions must be addressed to improve outcomes among vulnerable groups, especially to design interventions that address their concerns.
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157688</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>A 35-Year Analysis of Vegetation Cover in Rare-Earth Mining Areas Using Landsat Data</title>
<link>https://hdl.handle.net/1721.1/157687</link>
<description>A 35-Year Analysis of Vegetation Cover in Rare-Earth Mining Areas Using Landsat Data
Zheng, Zhubin; Liu, Yuqing; Chen, Na; Liu, Ge; Lei, Shaohua; Xu, Jie; Li, Jianzhong; Ren, Jingli; Huang, Chao
Fractional vegetation cover (FVC) plays a significant role in assessing ecological quality and protection, as well as soil and water conservation. As a typical rare-earth resource county in China, Dingnan County has experienced rapid development due to rare-earth mining, resulting in significant alterations to vegetation cover. To elucidate the spatio-temporal changes in vegetation within Dingnan County over the past 35 years and the effects of natural and human factors on these changes, the spatial and temporal variations in FVC were analyzed using Landsat-TM/OLI multispectral images taken in 1988, 1995, 1997, 2002, 2006, 2013, 2017, and 2023. The findings indicate that (1) vegetation coverage in Dingnan County decreased from 1988 to 2002, followed by a gradual increase; (2) high vegetation cover is predominantly found in forested areas that maintain their natural state, while the central town and mining areas exhibit generally low coverage; (3) there are regional differences in the relationship between vegetation cover and environmental factors in Dingnan County. This research facilitates the alignment of ion-type rare-earth mining with ecological protection, thereby promoting the sustainable development of the mining area and providing scientific guidance for local governments to formulate more effective management and protection strategies for the mining ecosystem. Additionally, this research offers a scientific foundation for mining areas globally to develop sustainable policies and informed decision-making regarding environmental protection and sustainable development.
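The abstract does not spell out the FVC estimator; the dimidiate pixel model is the standard way FVC is derived from Landsat NDVI, sketched here with illustrative soil and full-vegetation endmember values of our choosing.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def fvc(ndvi_value, ndvi_soil=0.05, ndvi_veg=0.85):
    """Dimidiate pixel model: each pixel is a mix of bare soil and full
    vegetation; the mixing fraction is the FVC, clipped to [0, 1]."""
    f = (ndvi_value - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, f))
```

In practice the endmembers are taken from the NDVI histogram of each scene (e.g. the 5th and 95th percentiles), which is what makes the method portable across sensors and years.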
</description>
<pubDate>Wed, 13 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157687</guid>
<dc:date>2024-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Local Search Heuristic for the Two-Echelon Capacitated Vehicle Routing Problem in Educational Decision Support Systems</title>
<link>https://hdl.handle.net/1721.1/157686</link>
<description>Local Search Heuristic for the Two-Echelon Capacitated Vehicle Routing Problem in Educational Decision Support Systems
Cruz, José Pedro Gomes da; Winkenbach, Matthias; Yoshizaki, Hugo Tsugunobu Yoshida
This study focuses on developing a heuristic for Decision Support Systems (DSS) in e-commerce logistics education, specifically addressing the Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP). The 2E-CVRP involves using Urban Transshipment Points (UTPs) to optimize deliveries. To tackle the complexity of the 2E-CVRP, DSS can employ fast and effective techniques for visual problem-solving. Therefore, the objective of this work is to develop a local search heuristic to solve the 2E-CVRP quickly and efficiently for implementation in DSS. The efficiency of the heuristic is assessed through benchmarks from the literature and applied to real-world problems from a Brazilian e-commerce retailer, contributing to advancements in the 2E-CVRP approach and promoting operational efficiency in e-commerce logistics education. The heuristic yielded promising results, solving problems almost instantly, for instances in the literature on average in 1.06 s, with average gaps of 6.3% in relation to the best-known solutions and, for real problems with hundreds of customers, in 1.4 s, with gaps of 8.3%, demonstrating its effectiveness in achieving the study’s objectives.
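The paper's heuristic is not specified in the abstract; the flavor of a route-level local search can be conveyed with a classic 2-opt improvement pass on a single tour, a common building block in such heuristics. A self-contained sketch:

```python
import math

def tour_length(pts, order):
    """Total length of the closed tour visiting pts in the given order."""
    total = 0.0
    for i in range(len(order)):
        a = pts[order[i]]
        b = pts[order[(i + 1) % len(order)]]
        total += math.dist(a, b)
    return total

def two_opt(pts, order):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        n = len(order)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = order[:i] + order[i:j][::-1] + order[j:]
                if tour_length(pts, order) - tour_length(pts, cand) > 1e-9:
                    order, improved = cand, True
    return order
```

A 2E-CVRP heuristic would apply moves like this within and across both echelons, subject to vehicle capacities and the UTP assignment, but the descend-while-improving structure is the same.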
</description>
<pubDate>Wed, 06 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157686</guid>
<dc:date>2024-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Telepresence Robots in the Context of Dementia Caregiving: Caregivers’ and Care Recipients’ Perspectives</title>
<link>https://hdl.handle.net/1721.1/157684</link>
<description>Telepresence Robots in the Context of Dementia Caregiving: Caregivers’ and Care Recipients’ Perspectives
FakhrHosseini, Shabnam; Cerino, Lauren; D’Ambrosio, Lisa; Balmuth, Lexi; Lee, Chaiwoo; Wu, Mengke; Coughlin, Joseph
As a result of a rapidly aging population and the increasing prevalence of dementia among older adults, technological solutions are increasingly being considered to facilitate caregiving. This research investigates the perspectives of 20 caregiving dyads on VGo, a telepresence social robot with features designed to support caregiving. Care recipients (CRs), aged 65 and older, diagnosed with Alzheimer’s disease and related dementias, along with their primary caregivers (CGs), evaluated the robot through an online interview study. The interviews integrated informative videos showcasing VGo’s features and functions. Insights from the interviews revealed diverse expectations, interests, and reservations. The majority of CGs and their CRs perceived the robot’s features as beneficial. In particular, the voice command capability was appreciated as an alternative to using smartphones and as a way to manage home appliances. The community feature, however, did not align well with many participants’ lifestyles, and participants had a number of suggestions to enhance the robot’s notification function. Based on the interview results, the study offers a set of design recommendations for telepresence social robots in home caregiving contexts. This investigation highlights the promise of social robots in caregiving contexts and underscores the need for further improvements to ensure they fit users’ needs.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157684</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Composites with Rare-Earth Element Doped Polycrystalline Particles for Piezoelectric Nanogenerators</title>
<link>https://hdl.handle.net/1721.1/157683</link>
<description>Flexible Composites with Rare-Earth Element Doped Polycrystalline Particles for Piezoelectric Nanogenerators
Fan, Yanzhe; Jia, Zihan; Zhang, Zhuo; Gu, Shengfei; Du, Wenya; Lin, Dabin
Energy harvesting plays an important role in advancing personalized wearables by enabling continuous monitoring, enhancing wearable functionality and facilitating sustainable solutions. We aimed to develop a flexible piezoelectric energy harvesting system based on inorganic piezoelectric materials that convert mechanical energy into electricity to power a wide range of mobile and portable electronic devices. There is significant interest in flexible piezoelectric energy harvesting systems that use inorganic piezoelectric materials due to their exceptional physical features and prospective applications. Herein, we successfully demonstrated a flexible piezoelectric nanogenerator (PENG) designed with co-doped rare-earth element ceramics (RE-PMN-PT) embedded in a PVDF and PDMS composite film, and attained significant output performance while avoiding the electrical poling process. The impact of dielectric characteristics on the electrical output of nanogenerators was investigated, together with the structure of the composites. The Sm/La-PMN-PT particles effectively amplify both the voltage and current output, showcasing their potential to power portable and wearable devices, as demonstrated by their capacity to illuminate LEDs. The maximal output power of 2 mW was correlated with the high voltage (220 V) and current (90 µA) of Sm/La-PMN-PT/PVDF, which demonstrated that the device has the potential for energy harvesting in biomedical applications.
</description>
<pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157683</guid>
<dc:date>2024-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling short visual events through the BOLD moments video fMRI dataset and metadata</title>
<link>https://hdl.handle.net/1721.1/157682</link>
<description>Modeling short visual events through the BOLD moments video fMRI dataset and metadata
Lahner, Benjamin; Dwivedi, Kshitij; Iamshchinina, Polina; Graumann, Monika; Lascelles, Alex; Roig, Gemma; Gifford, Alessandro Thomas; Pan, Bowen; Jin, SouYoung; Murty, N. Apurva Ratan; Kay, Kendrick; Oliva, Aude; Cichy, Radoslaw
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
</description>
<pubDate>Wed, 24 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157682</guid>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</item>
<item>
<title>Computational discovery of microstructured composites with optimal stiffness-toughness trade-offs</title>
<link>https://hdl.handle.net/1721.1/157681</link>
<description>Computational discovery of microstructured composites with optimal stiffness-toughness trade-offs
Li, Beichen; Deng, Bolei; Shou, Wan; Oh, Tae-Hyun; Hu, Yuanming; Luo, Yiyue; Shi, Liang; Matusik, Wojciech
The conflict between stiffness and toughness is a fundamental problem in engineering materials design. However, the systematic discovery of microstructured composites with optimal stiffness-toughness trade-offs has never been demonstrated, hindered by the discrepancies between simulation and reality and the lack of data-efficient exploration of the entire Pareto front. We introduce a generalizable pipeline that integrates physical experiments, numerical simulations, and artificial neural networks to address both challenges. Without any prescribed expert knowledge of material design, our approach implements a nested-loop proposal-validation workflow to bridge the simulation-to-reality gap and find microstructured composites that are stiff and tough with high sample efficiency. Further analysis of Pareto-optimal designs allows us to automatically identify existing toughness enhancement mechanisms, which were previously found through trial and error or biomimicry. On a broader scale, our method provides a blueprint for computational design in various research areas beyond solid mechanics, such as polymer chemistry, fluid dynamics, meteorology, and robotics.
</description>
<pubDate>Fri, 02 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157681</guid>
<dc:date>2024-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive tactile interaction transfer via digitally embroidered smart gloves</title>
<link>https://hdl.handle.net/1721.1/157680</link>
<description>Adaptive tactile interaction transfer via digitally embroidered smart gloves
Luo, Yiyue; Liu, Chao; Lee, Young Joong; DelPreto, Joseph; Wu, Kui; Foshey, Michael; Rus, Daniela; Palacios, Tomás; Li, Yunzhu; Torralba, Antonio; Matusik, Wojciech
Human-machine interfaces for capturing, conveying, and sharing tactile information across time and space hold immense potential for healthcare, augmented and virtual reality, human-robot collaboration, and skill development. To realize this potential, such interfaces should be wearable, unobtrusive, and scalable regarding both resolution and body coverage. Taking a step towards this vision, we present a textile-based wearable human-machine interface with integrated tactile sensors and vibrotactile haptic actuators that are digitally designed and rapidly fabricated. We leverage a digital embroidery machine to seamlessly embed piezoresistive force sensors and arrays of vibrotactile actuators into textiles in a customizable, scalable, and modular manner. We use this process to create gloves that can record, reproduce, and transfer tactile interactions. User studies investigate how people perceive the sensations reproduced by our gloves with integrated vibrotactile haptic actuators. To improve the effectiveness of tactile interaction transfer, we develop a machine-learning pipeline that adaptively models how each individual user reacts to haptic sensations and then optimizes haptic feedback parameters. Our interface showcases adaptive tactile interaction transfer through the implementation of three end-to-end systems: alleviating tactile occlusion, guiding people to perform physical skills, and enabling responsive robot teleoperation.
</description>
<pubDate>Mon, 29 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157680</guid>
<dc:date>2024-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Dual-phase microporous polymer nanofilms by interfacial polymerization for ultrafast molecular separation</title>
<link>https://hdl.handle.net/1721.1/157679</link>
<description>Dual-phase microporous polymer nanofilms by interfacial polymerization for ultrafast molecular separation
Lee, Tae Hoon; Balcik, Marcel; Wu, Wan-Ni; Pinnau, Ingo; Smith, Zachary P
Fine-tuning microporosity in polymers with a scalable method has great potential for energy-efficient molecular separations. Here, we report a dual-phase molecular engineering approach to prepare microporous polymer nanofilms through interfacial polymerization. By integrating two micropore-generating units, a water-soluble Tröger’s base diamine (TBD) and a contorted spirobifluorene (SBF) motif, the resultant TBD-SBF polyamide shows an unprecedentedly high surface area. An ultrathin TBD-SBF membrane (~20 nm) exhibits up to 220 times improved solvent permeance with a moderate molecular weight cutoff (~640 g mol−1) compared to a control membrane prepared by conventional chemistry, and outperforms currently reported polymeric membranes. We also highlight the great potential of the SBF-based microporous polyamides for hydrocarbon separations by exploring the isomeric effects of aqueous phase monomers to manipulate microporosity.
</description>
<pubDate>Fri, 16 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157679</guid>
<dc:date>2024-08-16T00:00:00Z</dc:date>
</item>
<item>
<title>Signatures of wave erosion in Titan’s coasts</title>
<link>https://hdl.handle.net/1721.1/157678</link>
<description>Signatures of wave erosion in Titan’s coasts
Palermo, Rose V; Ashton, Andrew D; Soderblom, Jason M; Birch, Samuel PD; Hayes, Alexander G; Perron, J Taylor
The shorelines of Titan’s hydrocarbon seas trace flooded erosional landforms such as river valleys; however, it is unclear whether coastal erosion has subsequently altered these shorelines. Spacecraft observations and theoretical models suggest that wind may cause waves to form on Titan’s seas, potentially driving coastal erosion, but the observational evidence of waves is indirect, and the processes affecting shoreline evolution on Titan remain unknown. No widely accepted framework exists for using shoreline morphology to quantitatively discern coastal erosion mechanisms, even on Earth, where the dominant mechanisms are known. We combine landscape evolution models with measurements of shoreline shape on Earth to characterize how different coastal erosion mechanisms affect shoreline morphology. Applying this framework to Titan, we find that the shorelines of Titan’s seas are most consistent with flooded landscapes that subsequently have been eroded by waves, rather than a uniform erosional process or no coastal erosion, particularly if wave growth saturates at fetch lengths of tens of kilometers.
</description>
<pubDate>Fri, 21 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157678</guid>
<dc:date>2024-06-21T00:00:00Z</dc:date>
</item>
<item>
<title>NEWTS1.0: Numerical model of coastal Erosion by Waves and Transgressive Scarps</title>
<link>https://hdl.handle.net/1721.1/157677</link>
<description>NEWTS1.0: Numerical model of coastal Erosion by Waves and Transgressive Scarps
Palermo, Rose V; Perron, J Taylor; Soderblom, Jason M; Birch, Samuel PD; Hayes, Alexander G; Ashton, Andrew D
Models of rocky-coast erosion help us understand the physical phenomena that control coastal morphology and evolution, infer the processes shaping coasts in remote environments, and evaluate risk from natural hazards and future climate change. Existing models, however, are highly complex, are computationally expensive, and depend on many input parameters; this limits our ability to explore planform erosion of rocky coasts over long timescales (thousands to millions of years) and over a range of conditions. In this paper, we present a simplified cellular model of coastline evolution in closed basins through uniform erosion and wave-driven erosion. Uniform erosion is modeled as a constant rate of retreat. Wave erosion is modeled as a function of fetch, the distance over which the wind blows to generate waves, and the angle between the incident wave and the shoreline. This reduced-complexity model can be used to evaluate how a detachment-limited coastal landscape reflects climate, sea-level history, material properties, and the relative influence of different erosional processes.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157677</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Hydrodynamic forces on a side-by-side ellipse pair with and without relative motion</title>
<link>https://hdl.handle.net/1721.1/157676</link>
<description>Hydrodynamic forces on a side-by-side ellipse pair with and without relative motion
Rhodes, Preston; van Rees, Wim M.
Motivated by flow interactions in schooling biological swimmers as well as in unmanned underwater vehicle fleets, we investigate the flow past two identical 6 : 1 ellipses using two-dimensional simulations at Reynolds numbers of O(10³). When both ellipses move at the same velocity, overall drag reductions of 10 %–20 % can be achieved in staggered formations, with the strongest drag reductions occurring at the smallest lateral distances. In side-by-side configurations, the drag on both bodies increases by 10 %–20 %. Lift coefficients are repulsive and up to four times larger than the total drag coefficients. During overtaking manoeuvres, increasing the relative speed of the overtaking ellipse predominantly affects the forces on the overtaken ellipse. The mean drag force on the overtaken ellipse increases with increasing speed difference. Mean lift forces during the overtaking manoeuvre are repulsive for both bodies; as the speed difference increases, the repulsive force increases on the overtaken body and decreases on the overtaking body. Overall, these results highlight that lateral forces dominate the hydrodynamic interactions between bodies in formation. Further, the results indicate that future work is needed to investigate how viscous and three-dimensional effects change the lateral forces between side-by-side submerged bodies.
</description>
<pubDate>Wed, 13 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157676</guid>
<dc:date>2024-11-13T00:00:00Z</dc:date>
</item>
<item>
<title>Time-certified Input-constrained NMPC via Koopman Operator</title>
<link>https://hdl.handle.net/1721.1/157675</link>
<description>Time-certified Input-constrained NMPC via Koopman Operator
Wu, Liang; Ganko, Krystian; Braatz, Richard D
Determining solving-time certificates of nonlinear model predictive control (NMPC) implementations is a pressing requirement when deploying NMPC in production environments. Such a certificate guarantees that the NMPC controller returns a solution before the next sampling time. However, NMPC formulations produce nonlinear programs (NLPs) for which it is very difficult to derive solving-time certificates. Our previous work, Wu and Braatz (2023), challenged this limitation with an input-constrained MPC algorithm that has exact iteration complexity, but it was restricted to linear MPC formulations. This work extends the algorithm to solve input-constrained NMPC problems by using the Koopman operator and a condensing MPC technique. We illustrate the algorithm’s performance on a high-dimensional, nonlinear partial differential equation (PDE) control case study, in which we theoretically and numerically certify the solving time to be less than the sampling time.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157675</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Analysis of Poly(ethylene terephthalate) degradation kinetics of evolved IsPETase variants using a surface crowding model</title>
<link>https://hdl.handle.net/1721.1/157674</link>
<description>Analysis of Poly(ethylene terephthalate) degradation kinetics of evolved IsPETase variants using a surface crowding model
Zhong-Johnson, En Ze Linda; Dong, Ziyue; Canova, Christopher T; Destro, Francesco; Cañellas, Marina; Hoffman, Mikaila C; Maréchal, Jeanne; Johnson, Timothy M; Zheng, Maya; Schlau-Cohen, Gabriela S; Lucas, Maria Fátima; Braatz, Richard D; Sprenger, Kayla G; Voigt, Christopher A; Sinskey, Anthony J
Poly(ethylene terephthalate) (PET) is a major plastic polymer utilized in the single-use and textile industries. The discovery of PET-degrading enzymes (PETases) has led to an increased interest in the biological recycling of PET in addition to mechanical recycling. IsPETase from Ideonella sakaiensis is a candidate catalyst, but little is understood about its structure-function relationships with regard to PET degradation. To understand the effects of mutations on IsPETase productivity, we develop a directed evolution assay to identify mutations beneficial to PET film degradation at 30 °C. IsPETase also displays enzyme concentration-dependent inhibition effects, and surface crowding has been proposed as a causal phenomenon. Based on total internal reflectance fluorescence microscopy and adsorption experiments, IsPETase is likely experiencing crowded conditions on PET films. Molecular dynamics simulations of IsPETase variants reveal a decrease in active site flexibility in free enzymes and reduced probability of productive active site formation in substrate-bound enzymes under crowding. Hence, we develop a surface crowding model to analyze the biochemical effects of three hit mutations (T116P, S238N, S290P) that enhanced ambient temperature activity and/or thermostability. We find that T116P decreases susceptibility to crowding, resulting in higher PET degradation product accumulation despite no change in intrinsic catalytic rate. In conclusion, we show that a macromolecular crowding-based biochemical model can be used to analyze the effects of mutations on properties of PETases and that crowding behavior is a major property to be targeted for enzyme engineering for improved PET degradation.
</description>
<pubDate>Fri, 01 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157674</guid>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mechanistic modeling of in vitro transcription incorporating effects of magnesium pyrophosphate crystallization</title>
<link>https://hdl.handle.net/1721.1/157673</link>
<description>Mechanistic modeling of in vitro transcription incorporating effects of magnesium pyrophosphate crystallization
Stover, Nathan Merica; Ganko, Krystian; Braatz, Richard D
The in vitro transcription (IVT) reaction used in the production of messenger RNA vaccines and therapies remains poorly quantitatively understood. Mechanistic modeling of IVT could inform reaction design, scale‐up, and control. In this work, we develop a mechanistic model of IVT to include nucleation and growth of magnesium pyrophosphate crystals and subsequent agglomeration of crystals and DNA. To help generalize this model to different constructs, a novel quantitative description is included for the rate of transcription as a function of target sequence length, DNA concentration, and T7 RNA polymerase concentration. The model explains previously unexplained trends in IVT data and quantitatively predicts the effect of adding the pyrophosphatase enzyme to the reaction system. The model is validated on additional literature data showing an ability to predict transcription rates as a function of RNA sequence length.
</description>
<pubDate>Sun, 01 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157673</guid>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Accounting for the Effects of Probabilistic Uncertainty During Fast Charging of Lithium-ion Batteries</title>
<link>https://hdl.handle.net/1721.1/157672</link>
<description>Accounting for the Effects of Probabilistic Uncertainty During Fast Charging of Lithium-ion Batteries
Kim, Minsu; Schaeffer, Joachim; Berliner, Marc D; Sagnier, Berta Pedret; Findeisen, Rolf; Braatz, Richard D
Batteries are nonlinear dynamical systems that can be modeled by Porous Electrode Theory (PET) models. The aim of optimal fast charging is to reduce the charging time while keeping battery degradation low. Most past studies assume that the ambient temperature is a fixed known value and that all PET model parameters are perfectly known. In real battery operation, however, the ambient temperature and the model parameters are uncertain. To ensure that operational constraints are satisfied at all times in the context of model-based optimal control, uncertainty quantification is required. Here, we analyze optimal fast charging for modest uncertainty in the ambient temperature and 23 model parameters. Uncertainty quantification of the battery model is carried out using non-intrusive polynomial chaos expansion, and the results are verified with Monte Carlo simulations. The method is investigated for a constant current–constant voltage charging strategy, which is standard for fast charging subject to operating below maximum current and charging constraints. Our results demonstrate that uncertainties in the ambient temperature and model parameters lead to violations of the voltage, temperature, and lithium-plating-overpotential constraints, and they identify the subset of key parameters that contribute most to these violations among the overall uncertain parameters. The C-rate and charge constraints are then adjusted so that the probability of violating the degradation acceleration condition is below a pre-specified value. This demonstrates a computationally efficient approach for determining fast-charging protocols that take probabilistic uncertainties into account.
2024 American Control Conference (ACC) July 8-12, 2024. Toronto, Canada
</description>
<pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157672</guid>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More</title>
<link>https://hdl.handle.net/1721.1/157671</link>
<description>Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More
Schaeffer, Joachim; Galuppini, Giacomo; Rhyu, Jinwook; Asinger, Patrick A; Droop, Robin; Findeisen, Rolf; Braatz, Richard D
Batteries are dynamic systems with complicated nonlinear aging, highly dependent on cell design, chemistry, manufacturing, and operational conditions. Prediction of battery cycle life and estimation of aging states are important to accelerate battery R&amp;D and testing, and to further the understanding of how batteries degrade. Beyond testing, battery management systems rely on real-time models and onboard diagnostics and prognostics for safe operation. Estimating the state of health and remaining useful life of a battery is important to optimize performance and allocate resources efficiently.
This tutorial begins with an overview of first-principles, machine learning, and hybrid battery models. Then, a typical pipeline for the development of interpretable machine learning models is explained and showcased for cycle life prediction from laboratory testing data. We highlight the challenges of machine learning models, motivating the incorporation of physics in hybrid modeling approaches, which are needed to decipher the aging trajectory of batteries but require more data and further work on the physics of battery degradation. The tutorial closes with a discussion on generalization and further research directions.
2024 American Control Conference (ACC) July 8-12, 2024. Toronto, Canada
</description>
<pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157671</guid>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</item>
<item>
<title>Clinical Validation of Non-invasive Simulation-Based Determination of Vascular Impedance, Wave Intensity, and Hydraulic Work in Patients Undergoing Transcatheter Aortic Valve Replacement</title>
<link>https://hdl.handle.net/1721.1/157670</link>
<description>Clinical Validation of Non-invasive Simulation-Based Determination of Vascular Impedance, Wave Intensity, and Hydraulic Work in Patients Undergoing Transcatheter Aortic Valve Replacement
Brown, Jonathan Y.; Fernandez, Gabriela V.; De La Torre Hernández, Jose M.; Murphy, Michael; Wessler, Benjamin S.; Edelman, Elazer R.
Purpose: The impact of aortic stenosis (AS) on the left ventricle (LV) extends beyond the pressure drop across the stenotic valve to include the additional serial afterload imposed by the vascular system. Aortic input impedance is the gold standard for comprehensively studying the contribution of the vascular system to total myocardial afterload, but measurement has historically been challenging because of the need for invasive catheterization or specialized equipment to precisely record time-resolved blood pressure and flow signals. The goal of this work was to develop and validate a novel simulation-based method for determining aortic input impedance using only clinically available echocardiographic data and a simple blood pressure measurement. Methods: A simulation-based method to determine vascular impedance was developed using echocardiographic data and a brachial blood pressure measurement. Simulation-based impedance was compared to impedance calculated from echocardiographic flow data and pressure data from a non-invasive central pressure measurement device. Results: In validation analysis comparing patient-specific simulation-based vascular impedance to non-invasively measured impedance, the correlation between methods across a range of vascular parameters varied between R² = 0.40 and 0.99. A tendency was seen toward underestimation of pressure waveforms in point-by-point comparison of measured and simulated waveforms, with an overall mean difference of 4.01 mmHg. Conclusions: Requiring only non-invasive clinical data that are widely available, simulation-based vascular impedance has the potential to allow for easier, more widespread, and larger-scale investigation of the effect of vascular impedance on total LV afterload.
</description>
<pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157670</guid>
<dc:date>2024-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting kinetic information from short-time trajectories: relaxation and disorder of lossy cavity polaritons</title>
<link>https://hdl.handle.net/1721.1/157664</link>
<description>Extracting kinetic information from short-time trajectories: relaxation and disorder of lossy cavity polaritons
Wu, Andrew; Cerrillo, Javier; Cao, Jianshu
The emerging field of molecular cavity polaritons has stimulated a surge of experimental and theoretical activities and presents a unique opportunity to develop the many-body simulation methodology. This paper presents a numerical scheme for the extraction of key kinetic information of lossy cavity polaritons based on the transfer tensor method (TTM). Steady state, relaxation timescales, and oscillatory phenomena can all be deduced directly from a set of transfer tensors without the need for long-time simulation. Moreover, we generalize TTM to disordered systems by sampling dynamical maps and achieve fast convergence to disorder-averaged dynamics using a small set of realizations. Together, these techniques provide a toolbox for characterizing the interplay of cavity loss, disorder, and cooperativity in polariton relaxation and allow us to predict unusual dependences on the initial excitation state, photon decay rate, strength of disorder, and the type of cavity model. Thus, using the example of cavity polaritons, we have demonstrated the significant potential of the TTM for both the efficient computation of long-time polariton dynamics and the extraction of crucial kinetic information about polariton relaxation from a small set of short-time trajectories.
</description>
<pubDate>Mon, 03 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157664</guid>
<dc:date>2024-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>High-throughput quantification of quasistatic, dynamic and spall strength of materials across 10 orders of strain rates</title>
<link>https://hdl.handle.net/1721.1/157663</link>
<description>High-throughput quantification of quasistatic, dynamic and spall strength of materials across 10 orders of strain rates
Eswarappa Prameela, Suhas; Walker, Christopher C; DiMarco, Christopher S; Mallick, Debjoy D; Sun, Xingsheng; Hernandez, Stephanie; Sasaki, Taisuke; Wilkerson, Justin W; Ramesh, KT; Pharr, George M; Weihs, Timothy P
The response of metals and their microstructures under extreme dynamic conditions can be markedly different from that under quasistatic conditions. Traditionally, high strain rates and shock stresses are achieved using cumbersome and expensive methods such as the Kolsky bar or large spall experiments. These methods are low throughput and do not facilitate high-fidelity microstructure–property linkages. In this work, we combine two powerful small-scale testing methods, custom nanoindentation and laser-driven microflyer (LDMF) shock, to measure the dynamic and spall strength of metals. The nanoindentation system is configured to test samples from quasistatic to dynamic strain-rate regimes. The LDMF shock system can test samples through impact loading, triggering spall failure. The model material used for testing is magnesium alloys, which are lightweight, possess high specific strengths, and have historically been challenging to design and strengthen due to their mechanical anisotropy. We adopt two distinct microstructures, solutionized (no precipitates) and peak-aged (with precipitates), to demonstrate interesting upticks in strain-rate sensitivity and evolution of dynamic strength. At high shock-loading rates, we uncover a paradigm in which the spall strength versus strain rate behavior of these materials converges, but the failure mechanisms are markedly different. Peak aging, considered to be a standard method to strengthen metallic alloys, causes catastrophic failure, faring much worse than solutionized alloys. Our high-throughput testing framework not only quantifies strength but also teases out unexplored failure mechanisms at extreme strain rates, providing valuable insights for the rapid design and improvement of materials for extreme environments.
</description>
<pubDate>Tue, 30 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157663</guid>
<dc:date>2024-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>The Language Network Reliably “Tracks” Naturalistic Meaningful Nonverbal Stimuli</title>
<link>https://hdl.handle.net/1721.1/157662</link>
<description>The Language Network Reliably “Tracks” Naturalistic Meaningful Nonverbal Stimuli
Sueoka, Yotaro; Paunov, Alexander; Tanner, Alyx; Blank, Idan A.; Ivanova, Anna; Fedorenko, Evelina
The language network, composed of brain regions in the left frontal and temporal cortex, responds robustly and reliably during language comprehension but shows little or no response during many nonlinguistic cognitive tasks (e.g., Fedorenko &amp; Blank, 2020). However, one domain whose relationship with language remains debated is semantics—our conceptual knowledge of the world. Given that the language network responds strongly to meaningful linguistic stimuli, could some of this response be driven by the presence of rich conceptual representations encoded in linguistic inputs? In this study, we used a naturalistic cognition paradigm to test whether the cognitive and neural resources that are responsible for language processing are also recruited for processing semantically rich nonverbal stimuli. To do so, we measured BOLD responses to a set of ∼5-minute-long video and audio clips that consisted of meaningful event sequences but did not contain any linguistic content. We then used the intersubject correlation (ISC) approach (Hasson et al., 2004) to examine the extent to which the language network “tracks” these stimuli, that is, exhibits stimulus-related variation. Across all the regions of the language network, meaningful nonverbal stimuli elicited reliable ISCs. These ISCs were higher than the ISCs elicited by semantically impoverished nonverbal stimuli (e.g., a music clip), but substantially lower than the ISCs elicited by linguistic stimuli. Our results complement earlier findings from controlled experiments (e.g., Ivanova et al., 2021) in providing further evidence that the language network shows some sensitivity to semantic content in nonverbal stimuli.
</description>
<pubDate>Mon, 03 Jun 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157662</guid>
<dc:date>2024-06-03T00:00:00Z</dc:date>
</item>
<item>
<title>Impinging jet mixers: A review of their mixing characteristics, performance considerations, and applications</title>
<link>https://hdl.handle.net/1721.1/157661</link>
<description>Impinging jet mixers: A review of their mixing characteristics, performance considerations, and applications
Devos, Cedric; Mukherjee, Saikat; Inguva, Pavan; Singh, Shalini; Wei, Yi; Mondal, Sandip; Yu, Huiwen; Barbastathis, George; Stelzer, Torsten; Braatz, Richard D; Myerson, Allan S
Optimal control over fast chemical processes hinges on achieving rapid and effective mixing. Impinging jet mixers are a unique class of passive mixing devices renowned for their exceptional ability to achieve rapid mixing at micro-length scales, whilst offering the possibility of high throughput. Comprising two collinear jets flowing in opposite directions and colliding with each other within a small (usually confined) volume, these devices effectively intensify various mixing-controlled processes in a reproducible manner. Impinging jet mixers find extensive use in both the chemical and pharmaceutical industries for a plethora of applications, such as reaction injection molding and precipitation processes. This review provides an overview of research related to impinging jet mixers, with an emphasis on the mixing characteristics and the influence of design and process parameters on the mixing performance. Lastly, specific applications for which these devices are exceptionally suited are discussed.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157661</guid>
</item>
<item>
<title>Real-time estimation of bound water concentration during lyophilization with temperature-based state observers</title>
<link>https://hdl.handle.net/1721.1/157660</link>
<description>Real-time estimation of bound water concentration during lyophilization with temperature-based state observers
Srisuma, Prakitr; Barbastathis, George; Braatz, Richard D
Lyophilization (also known as freeze drying) has been shown to provide long-term stability for many crucial biotherapeutics, e.g., mRNA vaccines for COVID-19, allowing for higher storage temperatures. The final stage of lyophilization, namely secondary drying, entails bound water removal via desorption, in which accurate prediction of bound water concentration is vital to ensuring the quality of the lyophilized product. This article proposes a novel technique for real-time estimation of the bound water concentration during secondary drying in lyophilization. A state observer is employed, which combines temperature measurement and mechanistic understanding of heat transfer and desorption kinetics, without requiring any online concentration measurement. Results from both simulations and experimental data show that the observer can accurately estimate the concentration of bound water in real time for all possible concentration levels, operating conditions, and measurement noise. This framework can also be applied for monitoring and control of the residual moisture in other desorption-related processes.
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157660</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Gaussian process-based online health monitoring and fault analysis of lithium-ion battery systems from field data</title>
<link>https://hdl.handle.net/1721.1/157659</link>
<description>Gaussian process-based online health monitoring and fault analysis of lithium-ion battery systems from field data
Schaeffer, Joachim; Lenz, Eric; Gulla, Duncan; Bazant, Martin Z; Braatz, Richard D; Findeisen, Rolf
Health monitoring, fault analysis, and detection methods are important to operate battery systems safely. We apply Gaussian process resistance models on lithium-iron-phosphate (LFP) battery field data to separate the time-dependent and operating-point-dependent resistances. The dataset contains 28 battery systems returned to the manufacturer for warranty, each with eight cells in series, totaling 224 cells and 133 million data rows. We develop probabilistic fault detection rules using recursive spatiotemporal Gaussian processes. These processes scale linearly with the number of data points, allowing online monitoring. The fault analysis underlines that often, only a single cell shows abnormal behavior or a knee point, consistent with weakest-link failure for cells connected in series, amplified by local resistive heating. The results further the understanding of how battery packs degrade and fail in the field and demonstrate the potential of online monitoring. We open source the code and publish the dataset with this article.
</description>
<pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157659</guid>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Environmental damping and vibrational coupling of confined fluids within isolated carbon nanotubes</title>
<link>https://hdl.handle.net/1721.1/157658</link>
<description>Environmental damping and vibrational coupling of confined fluids within isolated carbon nanotubes
Tu, Yu-Ming; Kuehne, Matthias; Misra, Rahul Prasanna; Ritt, Cody L; Oliaei, Hananeh; Faucher, Samuel; Li, Haokun; Xu, Xintong; Penn, Aubrey; Yang, Sungyun; Yang, Jing Fan; Sendgikoski, Kyle; Chakraverty, Joshika; Cumings, John; Majumdar, Arun; Aluru, Narayana R; Hachtel, Jordan A; Blankschtein, Daniel; Strano, Michael S
Because of their large surface areas, nanotubes and nanowires demonstrate exquisite mechanical coupling to their surroundings, promising advanced sensors and nanomechanical devices. However, this environmental sensitivity has resulted in several ambiguous observations of vibrational coupling across various experiments. Herein, we demonstrate a temperature-dependent Radial Breathing Mode (RBM) frequency in free-standing, electron-diffraction-assigned Double-Walled Carbon Nanotubes (DWNTs) that shows an unexpected and thermally reversible frequency downshift of 10 to 15%, for systems isolated in vacuum. An analysis based on a harmonic oscillator model assigns the distinctive frequency cusp, produced over 93 scans of 3 distinct DWNTs, along with the hyperbolic trajectory, to a reversible increase in damping from graphitic ribbons on the exterior surface. Strain-dependent coupling from self-tensioned, suspended DWNTs maintains the ratio of spring-to-damping frequencies, producing a stable saturation of RBM in the low-tension limit. In contrast, when the interior of DWNTs is subjected to a water-filling process, the RBM thermal trajectory is altered to that of a Langmuir isobar and elliptical trajectories, allowing measurement of the enthalpy of confined fluid phase change. These mechanisms and quantitative theory provide new insights into the environmental coupling of nanomechanical systems and the implications for devices and nanofluidic conduits.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157658</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Precise Issuance of Meituan Merchants’ Coupons with Machine Learning</title>
<link>https://hdl.handle.net/1721.1/157656</link>
<description>Precise Issuance of Meituan Merchants’ Coupons with Machine Learning
Zhang, Xue; Qiu, Jie; Li, Bo
With the popularity of the mobile Internet, the “Online-to-Offline” (O2O) business model has become widespread. Issuing coupons to attract new customer registrations and keep existing customers active is an important marketing tool for O2O companies. However, the random distribution of coupons can be annoying to non-target customers, and for merchants, indiscriminate coupon issuance not only increases promotion costs but also harms brand reputation. The purpose of this study is to analyze transaction data and build a model that predicts coupon redemption, enabling merchants to issue coupons precisely. We use machine learning to analyze the consumption data and extract features from five categories: coupons, merchants, consumers, consumer-merchant interactions, and other categories. A total of 44 features are extracted and the XGBoost (eXtreme Gradient Boosting) model is adopted. Verification shows that applying the XGBoost model's predictions can increase merchants' net profits by nearly 50%.
MLPRAE 2024, August 07–09, 2024, Singapore, Singapore
</description>
<pubDate>Wed, 07 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157656</guid>
<dc:date>2024-08-07T00:00:00Z</dc:date>
</item>
<item>
<title>From Transparency to Accountability and Back</title>
<link>https://hdl.handle.net/1721.1/157655</link>
<description>From Transparency to Accountability and Back
Cen, Sarah; Alur, Rohan
Artificial intelligence (AI) is increasingly intervening in our lives, raising widespread concern about its unintended and undeclared side effects. These developments have brought attention to the problem of AI auditing: the systematic evaluation and analysis of an AI system, its development, and its behavior relative to a set of predetermined criteria. Auditing can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing. It plays a critical role in providing assurances to various AI stakeholders, from developers to end users. Audits may, for instance, be used to verify that an algorithm complies with the law, is consistent with industry standards, and meets the developer’s claimed specifications. However, AI developers and companies will rarely grant auditors unfettered access to their systems.
In this work, we examine a key consideration in AI auditing: what type of access to an AI system is needed to perform a meaningful audit? Addressing this question has direct policy relevance, as it can inform AI audit guidelines and requirements. We begin by discussing the factors that auditors balance when determining the appropriate type of access, and unpack the benefits and drawbacks of four types of access. We conclude that, at minimum, black-box access—providing query access to a model without exposing its internal implementation—should be granted to auditors. In particular, we argue that black-box access effectively balances concerns related to proprietary technology, data privacy, audit standardization, and audit efficiency. We then suggest a framework for determining how much further access (on top of black-box access) to provide to auditors. We show that auditing can be cast as a natural hypothesis test and argue that this framing provides clear and interpretable guidance on the implementation of AI audits. In particular, we draw parallels between aspects of hypothesis testing and those of legal procedure, such as legal presumption and burden of proof. As a result, hypothesis testing provides an approach to AI auditing that is both interpretable and effective, offering a potential path forward despite the challenges posed by AI’s opacity.
EAAMO ’24, October 29–31, 2024, San Luis Potosi, Mexico
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157655</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Racial Steering by Large Language Models: A Prospective Audit of GPT-4 on Housing Recommendations</title>
<link>https://hdl.handle.net/1721.1/157628</link>
<description>Racial Steering by Large Language Models: A Prospective Audit of GPT-4 on Housing Recommendations
Liu, Eric; So, Wonyoung; Hosoi, Peko; D'Ignazio, Catherine
The integration of Large Language Models (LLMs) into a wide range of rental and real estate platforms could exacerbate historical inequalities in housing, particularly given that LLMs have exhibited gender, racial, ethnic, nationality, and language-based biases in other contexts. Examples of use cases already exist, with real estate listing platforms having launched ChatGPT plugins in 2023. In response to the critical need to assess the ways that LLMs may contribute to housing discrimination, we analyze GPT-4 housing recommendations in response to N = 168,000 prompts for renting and buying in the ten largest majority-minority cities in the US with prompts varying by demographic characteristics like sexuality, race, gender, family status, and source of income, many of which are protected under federal, state, and local fair housing laws. We find evidence of racial steering, default whiteness, and steering of minority homeseekers toward neighborhoods with lower opportunity indices in GPT-4’s housing recommendations to prospective buyers or renters, all of which could have the effect of exacerbating segregation in already segregated cities. Finally, we discuss potential legal implications on how LLMs could be liable under fair housing laws and end with policy recommendations regarding the importance of auditing, understanding, and mitigating risks from AI systems before they are put to use.
EAAMO ’24, October 29–31, 2024, San Luis Potosi, Mexico
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157628</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Automatic Local Inverse Calculation for Change of Variables</title>
<link>https://hdl.handle.net/1721.1/157627</link>
<description>Automatic Local Inverse Calculation for Change of Variables
Rojas Collins, Elias
Inversion is a fundamental operation that arises frequently in probabilistic inference and computer graphics. For example, inversion is used to decrease variance and to enable differentiation in variational inference (e.g., reparameterization trick) and in differentiable rendering (e.g., to integrate over object boundaries). Existing approaches to inversion limit the class of functions inverted, for example, to affine functions, or require a user-specified inverse. We study when a local inverse—an inverse that is valid in a neighborhood of a point—exists. We provide an algorithm to approximate the local inverse and give the convergence rate of the solver. We present LIN, a system that automatically computes the local inverse of a function using a fixed-point solver. We implement LIN in Python and use it to automatically compute the local inverse of affine, polar, and hyperbolic changes of variables arising in image stylization.
SPLASH Companion ’24, October 20–25, 2024, Pasadena, CA, USA
</description>
<pubDate>Sun, 20 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157627</guid>
<dc:date>2024-10-20T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Socio-Temporal Graphs for Multi-Agent Trajectory Prediction</title>
<link>https://hdl.handle.net/1721.1/157626</link>
<description>Learning Socio-Temporal Graphs for Multi-Agent Trajectory Prediction
Li, Yuke; Chen, Lixiong; Chen, Guangyi; Chan, Ching-Yao; Zhang, Kun; Anzellotti, Stefano; Wei, Donglai
In order to predict a pedestrian's trajectory in a crowd accurately, one has to take into account her/his underlying socio-temporal interactions with other pedestrians consistently. Unlike existing work that represents the relevant information separately, partially, or implicitly, we propose a complete representation for it to be fully and explicitly captured and analyzed. In particular, we introduce a Directed Acyclic Graph-based structure, which we term Socio-Temporal Graph (STG), to explicitly capture pair-wise socio-temporal interactions among a group of people across both space and time. Our model is built on a time-varying generative process, whose latent variables determine the structure of the STGs. We design an attention-based model named STGformer that affords an end-to-end pipeline to learn the structure of the STGs for trajectory prediction. Our solution achieves overall state-of-the-art prediction accuracy in two large-scale benchmark datasets. Our analysis shows that a person's past trajectory is critical for predicting another person's future path. Our model learns this relationship with a strong notion of socio-temporal localities. Statistics show that utilizing this information explicitly for prediction yields a noticeable performance gain with respect to the trajectory-only approaches.
MM’24, October 28 - November 1, 2024, Melbourne, Australia
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157626</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Adaptive In-Vehicle Virtual Reality for Reducing Motion Sickness: Manipulating Passenger Posture During Driving Events</title>
<link>https://hdl.handle.net/1721.1/157625</link>
<description>Adaptive In-Vehicle Virtual Reality for Reducing Motion Sickness: Manipulating Passenger Posture During Driving Events
Elsharkawy, Ahmed; Ataya, Aya; Yeo, Dohyeon; Seong, Minwoo; Hwang, Seokhyun; DelPreto, Joseph; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
The rise of autonomous vehicles (AVs) has promoted the adoption of in-vehicle virtual reality (VR) for creating immersive experiences. However, these experiences can trigger motion sickness (MS) due to visual-vestibular mismatches. Traditional techniques, such as visual matching and scene manipulation, address MS but often neglect the impact of body posture changes. This study examines the effects of interactive VR tasks on passenger body posture during MS-inducing events, including turns and vertical displacements. Our findings reveal significant variations in user body posture across conditions with event-based interactive VR tasks, resulting in a reduction of MS symptoms. Specifically, participants engaged in interactive VR tasks showed improved posture alignment and body stability. These insights offer practical guidelines for developing adaptive VR content that proactively manages posture to alleviate MS, thereby enhancing passenger comfort in in-vehicle VR applications.
UbiComp Companion ’24, October 5–9, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157625</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Proposal of a Framework for Enhancing Teleoperation Experience with Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality</title>
<link>https://hdl.handle.net/1721.1/157624</link>
<description>Proposal of a Framework for Enhancing Teleoperation Experience with Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality
Hwang, Seokhyun; Kang, Seongjun; Oh, Jeongseok; Park, Jeongju; Shin, Semoo; Luo, Yiyue; DelPreto, Joseph; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
Teleoperation, the remote manual control of robots, is primarily used in high-precision and safety-critical environments such as surgery, space exploration, and deep-sea exploration. Despite being a widely utilized technology, teleoperation relies on human cognitive abilities, leading to significant cognitive load for operators. To address this challenge, we propose a concept of a VR teleoperation haptic system that combines biomechanical simulation and electrical muscle stimulation to provide force feedback in a lightweight, wearable form by mimicking natural force generation without the need for external actuators. Our system is divided into two main components: the physical simulation part, which calculates the joint torques to replicate forces from the manipulator, and the electrical stimulation part, which translates torques into muscle stimulations. Through this integration, we expect our system to bridge the gulf of execution and evaluation, reducing cognitive load and enhancing teleoperation performance. This paper aims to discuss the detailed framework of our system and potential future research directions.
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157624</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring Potential Application Areas of Artificial Intelligence-Infused System for Engagement Recognition: Insights from Special Education Experts</title>
<link>https://hdl.handle.net/1721.1/157623</link>
<description>Exploring Potential Application Areas of Artificial Intelligence-Infused System for Engagement Recognition: Insights from Special Education Experts
Kim, Won; Seong, Minwoo; DelPreto, Joseph; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
Active engagement of children with autism spectrum disorder (ASD) in educational and social activities plays a crucial role in enhancing their cognitive, motor, and social development, offering opportunities to improve learning abilities, physical coordination, and social interactions. Indirect methods leveraging sensors and artificial intelligence (AI) have exhibited potential for improving engagement prediction but have primarily been confined to specific fields, limiting the generalizability of ASD studies. This gap, compounded by small ASD sample sizes, presents a significant challenge as the ASD population increases annually, highlighting the need for practical and broadly applicable research solutions, especially for general learning. In this work, we conducted expert interviews to explore potential application areas of AI-infused systems that report three levels of engagement status for children with ASD, ranging from "not engaged and out of control" to "highly engaged." Interviews with special educators revealed five key application areas for AI-driven engagement recognition: social skills training, stereotyped behavior modification, support for leisure activities, effective tutoring, and independent daily living skills. These findings highlight the potential of adaptive AI interventions to improve educational and daily outcomes, advocating for expanded applications for children with ASD.
UbiComp Companion ’24, October 5–9, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157623</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>LegSense: Inducing Walking Sensation in Seated VR by Providing Movement Illusion via Electrical Muscle Stimulation</title>
<link>https://hdl.handle.net/1721.1/157622</link>
<description>LegSense: Inducing Walking Sensation in Seated VR by Providing Movement Illusion via Electrical Muscle Stimulation
Um, Juwon; Jeon, Eunki; Kang, Yumin; Kang, Seongjun; Elsharkawy, Ahmed; DelPreto, Joseph; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
Providing convincing proprioceptive cues is essential for immersive virtual reality (VR) navigation. However, this is challenging for seated users with restricted mobility. To address this gap, this study proposes LegSense, a method designed to induce the walking sensation in VR via electrical muscle stimulation (EMS). This method activates the leg muscle senses in sync with the gait cycle without requiring physical motion to enhance users' immersion. We evaluated the efficacy of LegSense through a user study and confirmed its potential in terms of walking sensation, embodiment, and presence compared to other static conditions (baseline and vibro-tactile). Additionally, participant interviews confirmed that LegSense effectively creates a leg movement illusion, suggesting its potential applications in diverse virtual scenarios to enhance VR experiences for seated users.
UbiComp Companion ’24, October 5–9, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157622</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Intelligent Seat: Tactile Signal-Based 3D Sitting Pose Inference</title>
<link>https://hdl.handle.net/1721.1/157621</link>
<description>Intelligent Seat: Tactile Signal-Based 3D Sitting Pose Inference
Seong, Minwoo; Kim, Gwangbin; Lee, Jaehee; DelPreto, Joseph; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
Owing to people spending a large portion of their day sitting while working, commuting, or relaxing, monitoring their sitting posture is crucial for the development of adaptive interventions that respond to the user's pose, state, and behavior. This is because posture is closely linked to actions, health, attention, and engagement levels. The existing systems for posture estimation primarily use computer vision-based measurements or body-attached sensors; however, they are plagued by challenges such as privacy concerns, occlusion issues, and user discomfort. To address these drawbacks, this study proposed a posture-inference system that uses high-density piezoresistive sensors for joint reconstruction. Tactile pressure data were collected from six individuals, each performing seven different postures 20 times. The proposed system achieved an average L2 distance of 20.2 cm in the joint position reconstruction with a posture classification accuracy of 96.3%. Future research will focus on the development of a system capable of providing real-time feedback to help users maintain the correct sitting posture.
UbiComp Companion ’24, October 5–9, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157621</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>9th International Workshop on Mental Health and Well-being: New Research Directions</title>
<link>https://hdl.handle.net/1721.1/157620</link>
<description>9th International Workshop on Mental Health and Well-being: New Research Directions
Adler, Daniel; Xu, Xuhai; Salekin, Asif; Mishra, Varun; Kwon, Hyeokhyen; Sano, Akane; Abdullah, Saeed; Bardram, Jakob; Zhao, Yiran; Kalanadhabhatta, Manasa; Zhang, Han; Murnane, Elizabeth; Choudhury, Tanzeem; Musolesi, Mirco; Rahman, Tauhidur; King, Zachary; Krell, Rony; D'Alfonso, Simon
Mental health and well-being influence overall health: suffering from a mental illness can create severe impairment and reduce quality of life. Ubiquitous computing technologies are beginning to play a central role in collecting clinically relevant behavioral and physiological information on mental health that can be used to detect symptoms early on, deliver preventative interventions, and manage symptoms throughout the course of illness. Despite this potential, designing and translating ubiquitous technologies into mental healthcare is a complex process, and existing technologies have faced numerous challenges to effective implementation. The goal of this workshop is to bring together researchers, practitioners, and industry professionals to identify, articulate, and address the challenges of designing and implementing ubiquitous computing technologies in mental healthcare. Given these challenges, we are adding a specific call for papers that inspire new research directions, with initial findings that are valuable to the community but are not yet fully publishable or finished contributions. Following the success of this workshop over the last eight years, we aim to continue supporting the UbiComp community in the conceptualization, translation, and implementation of novel mental health sensing and intervention technologies.
UbiComp Companion ’24, October 5–9, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157620</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Mind the Hazard: Modeling and Interpreting Comfort with Personalized Sensing</title>
<link>https://hdl.handle.net/1721.1/157619</link>
<description>Mind the Hazard: Modeling and Interpreting Comfort with Personalized Sensing
Zhang, Yufei; Favero, Matteo; Chwalek, Patrick; Zhong, Sailin; Lalanne, Denis; Paradiso, Joseph A.; Miller, Clayton; Sonta, Andrew
Recent advances in personalized sensing and comfort feedback have spurred the development of data-driven comfort models tailored to individual needs. However, because current models treat sequential comfort feedback independently, they are subject to unstable predictions and limited interpretability, hindering their deployment in building management. This study introduces a dynamic modeling framework that utilizes a Neural Ordinary Differential Equations-based Continuous-time Markov Chain to model the transitions in comfort states over time. Our modeling approach, developed through a field study utilizing smart glasses and mobile app feedback, tracks occupants' comfort transitions across daily activities and contexts. The results demonstrate that this model not only predicts comfort states more accurately and stably than conventional classification models but also uniquely provides a representation of how the hazards of state transitions are influenced by changing ambient and contextual conditions. This approach, therefore, offers a new perspective on personalized building control, where predictions of comfort transition hazards can preemptively suggest building management interventions to avoid occupants experiencing discomfort. In addition, insights into how environmental and contextual characteristics relate to these hazards can guide holistic management strategies that dynamically balance comfort with energy targets in response to the occupants' activities and contexts.
BuildSys ’24, November 7–8, 2024, Hangzhou, China
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157619</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Examining the adoption of electromobility concepts across social contexts for energy transition</title>
<link>https://hdl.handle.net/1721.1/157618</link>
<description>Examining the adoption of electromobility concepts across social contexts for energy transition
Kühlke, Julia; Lechowicz, Adam; Fabikun, Oluwole; Bashir, Noman; Souza, Abel; Shenoy, Prashant; Lehnhoff, Sebastian
Mobility decisions shape not only urban traffic patterns and planning but also associated effects such as greenhouse gas (GHG) emissions. Although e-bike sharing is not a new concept, it has made significant technological strides in recent years owing to ongoing digitalization, particularly toward decarbonization. Past studies have shown that e-bike sharing has potential as a fast, mobile, and environmentally friendly alternative to cars and public transport. Although e-bikes represent a viable alternative to traditional means of transportation, there is a lack of quantitative understanding of the impact and acceptance of e-bikes across social contexts, as well as of their adoption as a sharing concept. In this paper, we employ the Unified Theory of Acceptance and Use of Technology (UTAUT) model as an analytical framework to discern the use and acceptance of e-bike sharing as an emerging technological concept across different cities and social contexts. Our findings reveal that the e-bike sharing system's utilization is skewed toward a small percentage of "frequent users", and overall usage is biased toward younger, more-educated, and higher-income populations who live in bike-friendly areas. Our work contributes to the feasibility of embedding the e-bike sharing concept within the scope of the energy transition.
BuildSys ’24, November 7–8, 2024, Hangzhou, China
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157618</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>A Mixed Continuous-Discrete Approach to Fast Charging of Li-ion Batteries While Maximizing Lifetime</title>
<link>https://hdl.handle.net/1721.1/157617</link>
<description>A Mixed Continuous-Discrete Approach to Fast Charging of Li-ion Batteries While Maximizing Lifetime
Berliner, Marc D; Cogswell, Daniel A; Bazant, Martin Z; Braatz, Richard D
Fast charging studies for lithium-ion batteries aim to minimize charging time while maximizing battery lifetime. Real-time optimal control problems are typically solved using empirical or simplified physical models with constraint-based model predictive control (MPC). In this article, we derive physics-based operating modes based on degradative governing equations, which are used to ensure safe use and minimal degradation during long-term cycling. The fast-charging protocols are efficiently and deterministically simulated using a mixed continuous-discrete (aka hybrid) approach to fast charging. This simultaneously solves the battery system of equations and the constraint-based control problem. The approach is evaluated using a Porous Electrode Theory-based model that includes solid-electrolyte interface (SEI) capacity fade. Three physics-based charging protocols are compared to a conventional constant current-constant voltage (CC-CV) protocol. Given identical levels of capacity fade after 500 cycles, the physics-based protocols uniformly reach a greater charge capacity compared to CC-CV after charging for 10 and 15 minutes. The computational cost of simulating physics-based charging protocols is only about 30% greater than the CC-CV method. The fast charging framework is easily extendable to other battery models, irrespective of model complexity.
</description>
<pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157617</guid>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Pico-Scale Science for Pedestrian-Scale Solutions (PSS4PSS): A Computational Toolbox Leveraging Molecular Simulation for Pedestrian Dynamics</title>
<link>https://hdl.handle.net/1721.1/157615</link>
<description>Pico-Scale Science for Pedestrian-Scale Solutions (PSS4PSS): A Computational Toolbox Leveraging Molecular Simulation for Pedestrian Dynamics
Chen, Samuel; Collins, Emerson; Cheng, Vincent; Kramer, Kelby; Wang, Gerald
Efficient and accurate simulations of pedestrian dynamics are critical for the smart cities of the future. In this work, we present a computational toolbox that accelerates such simulations relative to a popularly used pedestrian simulation tool by leveraging computational frameworks initially developed for molecular simulation. We make the argument that the field of pedestrian dynamics could benefit to a significant extent from a serendipitous interdisciplinary synergy with the molecular-simulation community. We provide arguments and representative examples in support of this premise, demonstrating that molecular simulation tools can be repurposed to solve precisely the same governing equations as traditional pedestrian-dynamics simulation tools, yielding the same results in significantly reduced computational time. We also describe a computational tool that we have developed that streamlines the conversion of indoor maps into boundary conditions for pedestrian simulations.
Buildsys ’24, November 7–8, 2024, Hangzhou, China
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157615</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization</title>
<link>https://hdl.handle.net/1721.1/157614</link>
<description>Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization
Majumder, Navonil; Hung, Chia-Yu; Ghosal, Deepanway; Hsu, Wei-Ning; Mihalcea, Rada; Poria, Soujanya
Generative multimodal content is increasingly prevalent in the content creation arena, as it has the potential to allow artists and media personnel to create pre-production mockups by quickly bringing their ideas to life. The generation of audio from text prompts is an important aspect of such processes in the music and film industry. Many of the recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on large datasets of prompt-audio pairs. These models do not explicitly focus on the presence of concepts or events, or their temporal ordering, in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve performance in the presence of limited data. As such, in this work, using an existing text-to-audio model, Tango, we synthetically create a preference dataset where each prompt has a winner audio output and some loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango text-to-audio model using the diffusion-DPO (direct preference optimization) loss on our preference dataset and show that it leads to improved audio output over Tango and AudioLDM2, in terms of both automatic- and manual-evaluation metrics.
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157614</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning</title>
<link>https://hdl.handle.net/1721.1/157613</link>
<description>FSL-QuickBoost: Minimal-Cost Ensemble for Few-Shot Learning
Bai, Yunwei; Cai, Bill Yang; Tan, Ying Kiat; Zheng, Zangwei; Chen, Shiming; Chen, Tsuhan
Few-shot learning (FSL) usually trains models on data from one set of classes, but tests them on data from a different set of classes, providing a few labeled support samples of the unseen classes as a reference for the trained model. Due to the lack of target-relevant training data, there is usually high generalization error with respect to the test classes. In this work, we conduct empirical explorations and propose an ensemble method (namely QuickBoost) that is efficient and effective for improving the generalization of FSL. Specifically, QuickBoost includes an alternative-architecture pretrained encoder with a one-vs-all binary classifier (namely FSL-Forest) based on the random forest algorithm, and is ensembled with off-the-shelf FSL models via logit-level averaging. Experiments on three benchmarks demonstrate that our method achieves state-of-the-art performance with good efficiency. Code is available at https://github.com/WendyBaiYunwei/FSL-QuickBoost.
MM ’24, October 28-November 1, 2024, Melbourne, VIC, Australia
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157613</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>The Future of Urban Accessibility: The Role of AI</title>
<link>https://hdl.handle.net/1721.1/157612</link>
<description>The Future of Urban Accessibility: The Role of AI
Froehlich, Jon; Li, Chu; Hosseini, Maryam; Miranda, Fabio; Sevtsuk, Andres; Eisenberg, Yochai
We have entered a new era of computing—one where AI permeates every aspect of society from education to healthcare. In this workshop, we examine the emerging role of AI in the design of equitable and accessible cities, transportation systems, and interactive tools for mapping and navigation. We will solicit short papers around key Urban AI + disability themes, including autonomous vehicles, intelligent wheelchairs, assistive human-robotic interaction, assessing and navigating pedestrian pathways, indoor accessibility, and overarching challenges related to ethics, bias, and data privacy and security. We invite both traditional HCI and accessibility researchers as well as scholars and practitioners from other disciplines relevant to this workshop, including disability studies, gerontology, social work, community psychology, and law. Our overarching goal is to identify open challenges, share current work across disciplines, and spur new collaborations related to AI and urban accessibility.
ASSETS ’24, October 27–30, 2024, St. John’s, NL, Canada
</description>
<pubDate>Sun, 27 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157612</guid>
<dc:date>2024-10-27T00:00:00Z</dc:date>
</item>
<item>
<title>Bluefish: Composing Diagrams with Declarative Relations</title>
<link>https://hdl.handle.net/1721.1/157611</link>
<description>Bluefish: Composing Diagrams with Declarative Relations
Pollock, Josh; Mei, Catherine; Huang, Grace; Evans, Elliot; Jackson, Daniel; Satyanarayan, Arvind
Diagrams are essential tools for problem-solving and communication as they externalize conceptual structures using spatial relationships. But when picking a diagramming framework, users are faced with a dilemma. They can either use a highly expressive but low-level toolkit, whose API does not match their domain-specific concepts, or select a high-level typology, which offers a recognizable vocabulary but supports a limited range of diagrams. To address this gap, we introduce Bluefish: a diagramming framework inspired by component-based user interface (UI) libraries. Bluefish lets users create diagrams using relations: declarative, composable, and extensible diagram fragments that relax the concept of a UI component. Unlike a component, a relation does not have sole ownership over its children nor does it need to fully specify their layout. To render diagrams, Bluefish extends a traditional tree-based scenegraph to a compound graph that captures both hierarchical and adjacent relationships between nodes. To evaluate our system, we construct a diverse example gallery covering many domains including mathematics, physics, computer science, and even cooking. We show that Bluefish’s relations are effective declarative primitives for diagrams. Bluefish is open source, and we aim to shape it into both a usable tool and a research platform.
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157611</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>PortaChrome: A Portable Contact Light Source for Integrated Re-Programmable Multi-Color Textures</title>
<link>https://hdl.handle.net/1721.1/157610</link>
<description>PortaChrome: A Portable Contact Light Source for Integrated Re-Programmable Multi-Color Textures
Zhu, Yunyi; Honnet, Cedric; Kang, Yixiao; Zhu, Junyi; Zheng, Angelina; Heinz, Kyle; Tang, Grace; Musk, Luca; Wessely, Michael; Mueller, Stefanie
In this paper, we present PortaChrome, a portable light source that can be attached to everyday objects to reprogram the color and texture of surfaces that come in contact with them. When PortaChrome makes contact with objects previously coated with photochromic dye, the UV and RGB LEDs inside PortaChrome create multi-color textures on the objects. In contrast to prior work, which used projectors for the color-change, PortaChrome has a thin and flexible form factor, which allows the color-change process to be integrated into everyday user interaction. Because of the close distance between the light source and the photochromic object, PortaChrome creates color textures in less than 4 minutes on average, which is 8 times faster than prior work. We demonstrate PortaChrome with four application examples, including data visualizations on textiles and dynamic designs on wearables.
UIST ’24, October 13–16, 2024, Pittsburgh, PA, USA
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157610</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material 3D Printing</title>
<link>https://hdl.handle.net/1721.1/157609</link>
<description>Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material 3D Printing
Ozdemir, Mehmet; AlAlawi, Marwa; Dogan, Mustafa Doga; Martinez Castro, Jose; Mueller, Stefanie; Doubrovski, Zjenja
We present Speed-Modulated Ironing, a new fabrication method for programming visual and tactile properties in single-material 3D printing. We use one nozzle to 3D print and a second nozzle to reheat printed areas at varying speeds, controlling the material’s temperature-response. The rapid adjustments of speed allow for fine-grained reheating, enabling high-resolution color and texture variations. We implemented our method in a tool that allows users to assign desired properties to 3D models and creates corresponding 3D printing instructions. We demonstrate our method with three temperature-responsive materials: a foaming filament, a filament with wood fibers, and a filament with cork particles. These filaments respond to temperature by changing color, roughness, transparency, and gloss. Our technical evaluation reveals the capabilities of our method in achieving sufficient resolution and color shade range that allows surface details such as small text, photos, and QR codes on 3D-printed objects. Finally, we provide application examples demonstrating the new design capabilities enabled by Speed-Modulated Ironing.
UIST ’24, October 13–16, 2024, Pittsburgh, PA, USA
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157609</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>MouthIO: Fabricating Customizable Oral User Interfaces with Integrated Sensing and Actuation</title>
<link>https://hdl.handle.net/1721.1/157608</link>
<description>MouthIO: Fabricating Customizable Oral User Interfaces with Integrated Sensing and Actuation
Jiang, Yijing; Kleinau, Julia; Eckroth, Till Max; Hoggan, Eve; Mueller, Stefanie; Wessely, Michael
This paper introduces MouthIO, the first customizable intraoral user interface that can be equipped with various sensors and output components. MouthIO consists of an SLA-printed brace that houses a flexible PCB within a bite-proof enclosure positioned between the molar teeth and inner cheeks. Our MouthIO design and fabrication technique enables makers to customize the oral user interfaces in both form and function at low cost. All parts in contact with the oral cavity are made of bio-compatible materials to ensure safety, while the design takes into account both comfort and portability. We demonstrate MouthIO through three application examples ranging from beverage consumption monitoring, health monitoring, to assistive technology. Results from our full-day user study indicate high wearability and social acceptance levels, while our technical evaluation demonstrates the device’s ability to withstand adult bite forces.
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157608</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>VizAbility: Enhancing Chart Accessibility with LLM-based Conversational Interaction</title>
<link>https://hdl.handle.net/1721.1/157607</link>
<description>VizAbility: Enhancing Chart Accessibility with LLM-based Conversational Interaction
Gorniak, Joshua; Kim, Yoon; Wei, Donglai; Kim, Nam Wook
Traditional accessibility methods like alternative text and data tables typically underrepresent data visualization’s full potential. Keyboard-based chart navigation has emerged as a potential solution, yet efficient data exploration remains challenging. We present VizAbility, a novel system that enriches chart content navigation with conversational interaction, enabling users to use natural language for querying visual data trends. VizAbility adapts to the user’s navigation context for improved response accuracy and facilitates verbal command-based chart navigation. Furthermore, it can address queries for contextual information, designed to address the needs of visually impaired users. We designed a large language model (LLM)-based pipeline to address these user queries, leveraging chart data &amp; encoding, user context, and external web knowledge. We conducted both qualitative and quantitative studies to evaluate VizAbility’s multimodal approach. We discuss further opportunities based on the results, including improved benchmark testing, incorporation of vision models, and integration with visualization workflows.
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157607</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>WasteBanned: Supporting Zero Waste Fashion Design Through Linked Edits</title>
<link>https://hdl.handle.net/1721.1/157606</link>
<description>WasteBanned: Supporting Zero Waste Fashion Design Through Linked Edits
Zhang, Ruowang; Mueller, Stefanie; Bernstein, Gilbert; Schulz, Adriana; Leake, Mackenzie
The commonly used cut-and-sew garment construction process, in which 2D fabric panels are cut from sheets of fabric and assembled into 3D garments, contributes to widespread textile waste in the fashion industry. There is often a significant divide between the design of the garment and the layout of the panels. One opportunity for bridging this gap is the emerging study and practice of zero waste fashion design, which involves creating clothing designs with maximum layout efficiency. Enforcing the strict constraints of zero waste sewing is challenging, as edits to one region of the garment necessarily affect neighboring panels. Based on our formative work to understand this emerging area within fashion design, we present WasteBanned, a tool that combines CAM and CAD to help users prioritize efficient material usage, work within these zero waste constraints, and edit existing zero waste garment patterns. Our user evaluation indicates that our tool helps fashion designers edit zero waste patterns to fit different bodies and add stylistic variation, while creating highly efficient fabric layouts.
UIST ’24, October 13–16, 2024, Pittsburgh, PA, USA
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157606</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Structural Forces in Ionic Liquids: The Role of Ionic Size Asymmetry</title>
<link>https://hdl.handle.net/1721.1/157605</link>
<description>Structural Forces in Ionic Liquids: The Role of Ionic Size Asymmetry
de Souza, J Pedro; Pivnic, Karina; Bazant, Martin Z; Urbakh, Michael; Kornyshev, Alexei A
Ionic liquids (ILs) are charged fluids composed of anions and cations of different size and shape. The ordering of charge and density in ILs confined between charged interfaces underlies numerous applications of IL electrolytes. Here, we analyze the screening behavior and the resulting structural forces of a representative IL confined between two charge-varied plates. Using both molecular dynamics simulations and a continuum theory, we contrast the screening features of a more-realistic asymmetric system and a less-realistic symmetric one. The ionic size asymmetry plays a nontrivial role in charge screening, affecting both the ionic density profiles and the distance dependence of the disjoining pressure. Ionic systems with size asymmetry are more strongly coupled, and this manifests itself both in their response to electrode polarization and in spontaneous structure formation at the interface. Analytical expressions for decay lengths of the disjoining pressure are obtained in agreement with the pressure profiles computed from molecular dynamics simulations.
</description>
<pubDate>Thu, 17 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157605</guid>
<dc:date>2022-02-17T00:00:00Z</dc:date>
</item>
<item>
<title>Dip coating of bidisperse particulate suspensions</title>
<link>https://hdl.handle.net/1721.1/157604</link>
<description>Dip coating of bidisperse particulate suspensions
Jeong, Deok-Hoon; Lee, Michael Ka Ho; Thiévenaz, Virgile; Bazant, Martin Z; Sauret, Alban
Dip coating consists of withdrawing a substrate from a bath to coat it with a thin liquid layer. This process is well understood for homogeneous fluids, but heterogeneities, such as particles dispersed in liquid, lead to more complex situations. Indeed, particles introduce a new length scale, their size, in addition to the thickness of the coating film. Recent studies have shown that, at first order, the thickness of the coating film for monodisperse particles can be captured by an effective capillary number based on the viscosity of the suspension, provided that the film is thicker than the particle diameter. However, suspensions involved in most practical applications are polydisperse, characterized by a wide range of particle sizes, introducing additional length scales. In this study, we investigate the dip coating of suspensions having a bimodal size distribution of particles. We show that the effective viscosity approach is still valid in the regime where the coating film is thicker than the diameter of the largest particles, although bidisperse suspensions are less viscous than monodisperse suspensions of the same solid fraction. We also characterize the intermediate regime that consists of a heterogeneous coating layer and where the composition of the film is different from the composition of the bath. A model to predict the probability of entraining the particles in the liquid film depending on their sizes is proposed and captures our measurements. In this regime, corresponding to a specific range of withdrawal velocities, capillarity filters the large particles out of the film.
</description>
<pubDate>Sun, 10 Apr 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157604</guid>
<dc:date>2022-04-10T00:00:00Z</dc:date>
</item>
<item>
<title>Rectified and Salt Concentration Dependent Wetting of Hydrophobic Nanopores</title>
<link>https://hdl.handle.net/1721.1/157602</link>
<description>Rectified and Salt Concentration Dependent Wetting of Hydrophobic Nanopores
Polster, Jake W; Aydin, Fikret; de Souza, J Pedro; Bazant, Martin Z; Pham, Tuan Anh; Siwy, Zuzanna S
Nanopores lined with hydrophobic groups function as switches for water and all dissolved species, such that transport is allowed only when applying a sufficiently high transmembrane pressure difference or voltage. Here we show a hydrophobic nanopore system whose wetting and ability to transport water and ions is rectified and can be controlled with salt concentration. The nanopore we study contains a junction between a hydrophobic zone and a positively charged hydrophilic zone. The nanopore is closed for transport at low salt concentrations and exhibits finite current only when the concentration reaches a threshold value that is dependent on the pore opening diameter, voltage polarity and magnitude, and type of electrolyte. The smallest nanopore studied here had a 4 nm diameter and did not open for transport in any concentration of KCl or KI examined. A 12 nm nanopore was closed for all KCl solutions but conducted current in KI at concentrations above 100 mM for negative voltages and opened for both voltage polarities at 500 mM KI. Nanopores with a hydrophobic/hydrophilic junction can thus function as diodes, such that one can identify a range of salt concentrations where the pores transport water and ions for only one voltage polarity. Molecular dynamics simulations together with continuum models provided a multiscale explanation of the observed phenomena and linked the salt concentration dependence of wetting with an electrowetting model. Results presented are crucial for designing next-generation chemical and ionic separation devices as well as understanding fundamental properties of hydrophobic interfaces under nanoconfinement.
</description>
<pubDate>Wed, 06 Jul 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157602</guid>
<dc:date>2022-07-06T00:00:00Z</dc:date>
</item>
<item>
<title>VI-VS: calibrated identification of feature dependencies in single-cell multiomics</title>
<link>https://hdl.handle.net/1721.1/157562</link>
<description>VI-VS: calibrated identification of feature dependencies in single-cell multiomics
Boyeau, Pierre; Bates, Stephen; Ergen, Can; Jordan, Michael I.; Yosef, Nir
Unveiling functional relationships between various molecular cell phenotypes from data using machine learning models is a key promise of multiomics. Existing methods either use flexible but hard-to-interpret models or simpler, misspecified models. VI-VS (Variational Inference for Variable Selection) balances flexibility and interpretability to identify relevant feature relationships in multiomic data. It uses deep generative models to identify conditionally dependent features, with false discovery rate control. VI-VS is available as an open-source Python package, providing a robust solution to identify features more likely representing genuine causal relationships.
</description>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157562</guid>
<dc:date>2024-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of the B_s^0 → J/ψ K_S^0 effective lifetime from proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157561</link>
<description>Measurement of the B_s^0 → J/ψ K_S^0 effective lifetime from proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
The effective lifetime of the B_s^0 meson in the decay B_s^0 → J/ψ K_S^0 is measured using data collected during 2016–2018 with the CMS detector in √s = 13 TeV proton-proton collisions at the LHC, corresponding to an integrated luminosity of 140 fb⁻¹. The effective lifetime is determined by performing a two-dimensional unbinned maximum likelihood fit to the B_s^0 meson invariant mass and proper decay time distributions. The resulting value of 1.59 ± 0.07 (stat) ± 0.03 (syst) ps is the most precise measurement to date and is in good agreement with the expected value.
</description>
<pubDate>Thu, 31 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157561</guid>
<dc:date>2024-10-31T00:00:00Z</dc:date>
</item>
<item>
<title>Exploring the impact of COVID-19 on the grammar of schools in project-based learning contexts</title>
<link>https://hdl.handle.net/1721.1/157560</link>
<description>Exploring the impact of COVID-19 on the grammar of schools in project-based learning contexts
Woods, Peter J.; Anderson, Emma; Hira, Avneet
While scholars and public figures have positioned the COVID-19 pandemic as an opportunity for school reform, the response to this potential for change by teachers remains underexplored. In turn, we attend to the following research question: how do teachers at project-based learning high schools conceptualize the changes to education that have occurred in response to the COVID-19 pandemic? In analyzing temporally dispersed interviews with eight teachers from four different schools in the United States between 2020 and 2022, we found that participants recognized changes in the pedagogies, curricula, assessments, and structures in their school systems. In particular, teachers conceptualized these educational shifts through the lenses of technological change, a push for student-centered practices, and an embrace of real world applications of learning. However, they also described a reversal of these changes once in person schooling returned, illustrating an inability of the pandemic to affect the “grammar of schools” (Tyack &amp; Tobin, 1994).
</description>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157560</guid>
<dc:date>2024-11-15T00:00:00Z</dc:date>
</item>
<item>
<title>Local geometry of NAE-SAT solutions in the condensation regime</title>
<link>https://hdl.handle.net/1721.1/157559</link>
<description>Local geometry of NAE-SAT solutions in the condensation regime
Sly, Allan; Sohn, Youngtak
The local behavior of typical solutions of random constraint satisfaction problems (CSPs) describes many important phenomena including clustering thresholds, decay of correlations, and the behavior of message passing algorithms. When the constraint density is low, studying the planted model is a powerful technique for determining this local behavior, which in many examples has a simple Markovian structure. The work of Coja-Oghlan, Kapetanopoulos, Müller (Comb Prob Comput 29:346-422, 2020) showed that for a wide class of models, this description applies up to the so-called condensation threshold. Understanding the local behavior after the condensation threshold is more complex due to long-range correlations. In this work, we revisit the random regular NAE-SAT model in the condensation regime and determine the local weak limit which describes a random solution around a typical variable. This limit exhibits a complicated non-Markovian structure arising from the space of solutions being dominated by a small number of large clusters. This is the first description of the local weak limit in the condensation regime for any sparse random CSP in the one-step replica symmetry breaking (1RSB) class. Our result is non-asymptotic and characterizes the tight fluctuation O(n^{-1/2}) around the limit. Our proof is based on coupling the local neighborhoods of an infinite spin system, which encodes the structure of the clusters, to a broadcast model on trees whose channel is given by the 1RSB belief-propagation fixed point. We believe that our proof technique has broad applicability to random CSPs in the 1RSB class.
</description>
<pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157559</guid>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</item>
<item>
<title>Excess Mortality and its Determinants During the COVID-19 Pandemic in 21 Countries: An Ecological Study from the C-MOR Project, 2020 and 2021</title>
<link>https://hdl.handle.net/1721.1/157558</link>
<description>Excess Mortality and its Determinants During the COVID-19 Pandemic in 21 Countries: An Ecological Study from the C-MOR Project, 2020 and 2021
Rahmanian Haghighi, Mohammad Reza; Pallari, Chryso T.; Achilleos, Souzana; Quattrocchi, Annalisa; Gabel, John; Artemiou, Andreas; Athanasiadou, Maria; Papatheodorou, Stefania; Liu, Tianyu; Cernuda Martínez, José Antonio; Denissov, Gleb; Łyszczarz, Błażej; Huang, Qian; Athanasakis, Kostas; Bennett, Catherine M.
Introduction: The COVID-19 pandemic overwhelmed health systems, resulting in a surge in excess deaths. This study clustered countries based on excess mortality to understand their response to the pandemic and the influence of various factors on excess mortality within each cluster. Materials and Methods: This ecological study is part of the COVID-19 MORtality (C-MOR) Consortium. Mortality data were gathered from 21 countries and were previously used to calculate weekly all-cause excess mortality. Thirty exposure variables were considered in five categories as factors potentially associated with excess mortality: population factors, health care resources, socioeconomic factors, air pollution, and COVID-19 policy. A Latent Class Linear Mixed Model (LCMM) was used to cluster countries based on their response trajectory, and a Generalized Linear Mixture Model (GLMM) was run separately for each cluster. Results: Using the LCMM, two clusters were identified. Among the 21 countries, Brazil, the USA, Georgia, and Poland were assigned to a separate cluster, with a mean excess-mortality z-score in 2020 and 2021 of around 4.4, compared to 1.5 for all other countries, which were assigned to the second cluster. In both clusters the population incidence of COVID-19 had the greatest positive relationship with excess mortality, while interactions between the incidence of COVID-19, fully vaccinated people, and the stringency index were negatively associated with excess mortality. Moreover, governmental variables (government revenue and government effectiveness) were the most protective against excess mortality. Conclusion: This study highlighted that clustering countries based on excess mortality can provide insights for a broader understanding of countries' responses to the pandemic and their effectiveness.
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157558</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Van der Waals magnetic materials for current-induced control toward spintronic applications</title>
<link>https://hdl.handle.net/1721.1/157557</link>
<description>Van der Waals magnetic materials for current-induced control toward spintronic applications
Ryu, Jeongchun; Kajale, Shivam N.; Sarkar, Deblina
Spintronics, leveraging electron spin for information processing, promises substantial advancements in energy-efficient computing. Van der Waals (vdW) magnetic materials, with their unique-layered structures and exceptional magnetic properties, have emerged as pivotal components in this field. This report explores the current-based control of vdW magnets, focusing on the spin–orbit torque (SOT) mechanism, which is crucial for spintronic applications. Key studies on Fe3GaTe2/Pt and Fe3GaTe2/WTe2 heterostructures are highlighted, demonstrating efficient SOT switching at room temperature. The advantages of vdW magnets for SOT switching, including high spin-torque efficiencies and superior interface quality, are discussed. The report also examines future directions, such as wafer-scale growth techniques, materials design for enhanced Curie temperatures (Tc), and the development of magneto tunnel junctions using all-vdW materials. These advancements underscore the potential of vdW magnetic materials in developing scalable, high-performance spintronic devices, paving the way for significant breakthroughs in energy-efficient computing.
</description>
<pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157557</guid>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</item>
<item>
<title>Augmented Object Intelligence with XR-Objects</title>
<link>https://hdl.handle.net/1721.1/157556</link>
<description>Augmented Object Intelligence with XR-Objects
Dogan, Mustafa Doga; Gonzalez, Eric; Ahuja, Karan; Du, Ruofei; Colaço, Andrea; Lee, Johnny; Gonzalez-Franco, Mar; Kim, David
Seamless integration of physical objects as interactive digital entities remains a challenge for spatial computing. This paper explores Augmented Object Intelligence (AOI) in the context of XR, an interaction paradigm that aims to blur the lines between digital and physical by equipping real-world objects with the ability to interact as if they were digital, where every object has the potential to serve as a portal to digital functionalities. Our approach utilizes real-time object segmentation and classification, combined with the power of Multimodal Large Language Models (MLLMs), to facilitate these interactions without the need for object pre-registration. We implement the AOI concept in the form of XR-Objects, an open-source prototype system that provides a platform for users to engage with their physical environment in contextually relevant ways using object-based context menus. This system enables analog objects to not only convey information but also to initiate digital actions, such as querying for details or executing tasks. Our contributions are threefold: (1) we define the AOI concept and detail its advantages over traditional AI assistants, (2) detail the XR-Objects system's open-source design and implementation, and (3) show its versatility through various use cases and a user study.
UIST ’24, October 13–16, 2024, Pittsburgh, PA, USA
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157556</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>X-Hair: 3D Printing Hair-like Structures with Multi-form, Multi-property and Multi-function</title>
<link>https://hdl.handle.net/1721.1/157555</link>
<description>X-Hair: 3D Printing Hair-like Structures with Multi-form, Multi-property and Multi-function
Wang, Guanyun; Ji, Junzhe; Xu, Yunkai; Ren, Lei; Wu, Xiaoyang; Zheng, Chunyuan; Zhou, Xiaojing; Tang, Xin; Feng, Boyu; Sun, Lingyun; Tao, Ye; Li, Jiaji
In this paper, we present X-Hair, a method that enables 3D-printed hair with various forms, properties, and functions. We developed a two-step suspend printing strategy to fabricate hair-like structures in different forms (e.g. fluff, bristle, barb) by adjusting parameters including Extrusion Length Ratio and Total Length. Moreover, a design tool is also established for users to customize hair-like structures with various properties (e.g. pointy, stiff, soft) on imported 3D models, which virtually shows the results for previewing and generates G-code files for 3D printing. We demonstrate the design space of X-Hair and evaluate their properties under different parameters. Through a series of applications with hair-like structures, we validate X-Hair's practical usage of biomimicry, decoration, heat preservation, adhesion, and haptic interaction.
UIST ’24, October 13–16, 2024, Pittsburgh, PA, USA
</description>
<pubDate>Sun, 13 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157555</guid>
<dc:date>2024-10-13T00:00:00Z</dc:date>
</item>
<item>
<title>Optimal Slicing and Scheduling with Service Guarantees in Multi-Hop Wireless Networks</title>
<link>https://hdl.handle.net/1721.1/157554</link>
<description>Optimal Slicing and Scheduling with Service Guarantees in Multi-Hop Wireless Networks
Jones, Nicholas; Modiano, Eytan
We analyze the problem of scheduling in wireless networks to meet end-to-end service guarantees. Using network slicing to decouple the queueing dynamics between flows, we show that the network's ability to meet hard throughput and deadline requirements is largely influenced by the scheduling policy. We characterize the feasible throughput/deadline region for a flow under a fixed route and set of slices, and find throughput- and deadline-optimal policies for a solitary flow. We formulate the feasibility problem for multiple flows in a general topology, and show its equivalence to finding a bounded-cost cycle on an exponentially large graph, which the best-known algorithm cannot solve in polynomial time. Using a novel concept called delay deficit, we develop a sufficient condition for meeting deadlines as a function of inter-scheduling times, and show that regular schedules are optimal for satisfying this condition. Motivated by this, we design a polynomial-time algorithm that returns an (almost) regular schedule, optimized to meet service guarantees for all flows.
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157554</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization</title>
<link>https://hdl.handle.net/1721.1/157553</link>
<description>Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization
Wigmore, Jerrod; Shrader, Brooke; Modiano, Eytan
Deep Reinforcement Learning (DRL) offers a powerful approach to training neural network control policies for stochastic queuing networks (SQN). However, traditional DRL methods rely on offline simulations or static datasets, limiting their real-world application in SQN control. This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative, where an intelligent agent interacts directly with a real environment and learns an optimal control policy from these online interactions. SQNs present a challenge for ODRLC due to the unbounded nature of the queues within the network resulting in an unbounded state-space. An unbounded state-space is particularly challenging for neural network policies as neural networks are notoriously poor at extrapolating to unseen states. To address this challenge, we propose an intervention-assisted framework that leverages strategic interventions from known stable policies to ensure the queue sizes remain bounded. This framework combines the learning power of neural networks with the guaranteed stability of classical control policies for SQNs. We introduce a method to design these intervention-assisted policies to ensure strong stability of the network. Furthermore, we extend foundational DRL theorems for intervention-assisted policies and develop two practical algorithms specifically for ODRLC of SQNs. Finally, we demonstrate through experiments that our proposed algorithms outperform both classical control approaches and prior ODRLC algorithms.
MOBIHOC '24, October 14-17, 2024, Athens, Greece
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157553</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>The flexible urban grid: adaptation, expansion and evolution in Philadelphia's city block morphology</title>
<link>https://hdl.handle.net/1721.1/157552</link>
<description>The flexible urban grid: adaptation, expansion and evolution in Philadelphia's city block morphology
Ryan, Brent D.; Wang, Elaine
This study examines the evolution of Philadelphia's city block morphology between 1683, when the city was planned by William Penn, and 1900, when urban expansion abandoned the grid. The study uses both quantitative and qualitative assessment. The city grid underwent evolution during this time that resolved deficiencies of the original Penn plan, improving circulation and maximizing block area for rowhouse development. The Penn grid had large rectilinear blocks with irregular dimensions; it experienced two types of evolution. The first was adaptation through infill, as large 1683 blocks were subdivided by secondary through streets and tertiary streets. The second was adaptation through expansion of the grid, first an irregular, 'unplanned' grid, and later a regular, 'planned' grid. Both expansions reduced 1683 block depths to permit additional east-west circulation and to increase developable block frontage. Mean block depths of 666 ft in the Penn grid were reduced to 383 ft in the adapted grid, to 328 ft (south) and 393 ft (north) in the unplanned expansion grid, and to 422 ft (south) and 534 ft (north) in the planned expansion grid. In the expansion grid, tertiary streets and rowhouse dimensions and heights were integrated with quaternary streets (pedestrian alleys), permitting high levels of housing density and diversity.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157552</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning from Decommunization: What Eastern Europe Can Teach Us about Slavery-Related Urban Fallism in the United Kingdom and United States</title>
<link>https://hdl.handle.net/1721.1/157551</link>
<description>Learning from Decommunization: What Eastern Europe Can Teach Us about Slavery-Related Urban Fallism in the United Kingdom and United States
Vlasenko, Yegor; Ryan, Brent D.
This study investigates the spatial effects of the ongoing "decommunization" campaign in Ukraine, a state-led attack on Soviet symbols and ideology in the urban space of the capital, Kyiv. We examine decommunization through the lens of an extensive legacy of architectural, urban design, and monumental art projects erected for the celebration of the 1500th anniversary of the city of Kyiv held in 1982. We focus on four ideological narratives and examine the outcomes of decommunization on four monuments. We find that decommunization's effect is limited; Communist symbolism has been annotated with Ukrainian identity symbols or neglected, not demolished. We conclude that decommunization has focused on the comparatively superficial qualities of toponymy and Lenin symbols, that the legacy of Soviet identity in Kyiv's cityscape is much deeper and has proved surprisingly persistent, and that the historiography of the newly independent nation of Ukraine is still in a process of reformation and revision.
</description>
<pubDate>Fri, 21 Jan 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157551</guid>
<dc:date>2022-01-21T00:00:00Z</dc:date>
</item>
<item>
<title>Contradictions and compromises in sustainability planning: The case of the sub-Arctic city of Yakutsk, Russia</title>
<link>https://hdl.handle.net/1721.1/157550</link>
<description>Contradictions and compromises in sustainability planning: The case of the sub-Arctic city of Yakutsk, Russia
Durova, Aleksandra; Ryan, Brent D.
Sustainability assessment frameworks often fall short of elucidating context-specific conflicts inherent in planning practice and its contribution to diverse sustainability priorities. This study explores the integration of priorities and principles associated with sustainability in the spatial planning of the sub-Arctic city of Yakutsk. It also investigates how conflicting priorities manifest in the city's development. The research involves exploratory interviews with planning stakeholders, an analysis of General Plan iterations, and profiling of two expanding residential areas. Contrasting cases of residential growth untangle tensions between environmental, development, and social dimensions, emphasizing the prioritization of specific aspects over others. The study underscores that these tensions are intricately linked to historical, political, planning, and governance contexts and reflect the complexities of urban development politics. Despite planning documents encompassing a range of principles associated with sustainable planning, current practices prioritize specific dimensions but contradict others. A targeted emphasis on specific sustainability aspects may obscure interests in particular development types and equity compromises. This study raises concerns about the effectiveness of normative evaluations of sustainable planning, overlooking conflicting dimensions evident in practice. It calls for a more in-depth examination of how principles are valued, prioritized, and compromised in specific contexts.
</description>
<pubDate>Fri, 29 Mar 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157550</guid>
<dc:date>2024-03-29T00:00:00Z</dc:date>
</item>
<item>
<title>Demolition after decline: Understanding and explaining demolition patterns in US and German shrinking cities</title>
<link>https://hdl.handle.net/1721.1/157549</link>
<description>Demolition after decline: Understanding and explaining demolition patterns in US and German shrinking cities
Gao, Shuqi; Jansen, Hendrik; Ryan, Brent D.
Demolition is one of shrinking cities' most important strategies to deal with vacant and abandoned properties, but processes and outcomes vary between nations. How do demolition patterns differ or agree between shrinking cities in different nations, and what explains agreement or difference? This study analyzes demolition patterns in two mid-sized, isolated shrinking cities, the U.S. city of Flint and the German city of Dessau, between 2002 and 2016. We found significantly different patterns of demolition in the two cities. Demolition is more concentrated in Dessau, and more diffuse in Flint. We explain this difference in demolition patterns through three factors: housing tenure, social and physical structure, and demolition policy. Compared with Flint, Dessau has a much higher level of rental housing that concentrates in its urban center, facilitating tenant relocation into analogous units and permitting concentrated demolition. Flint's urban structure is homogenous with repetitive blocks of privately owned single-family housing, presenting a barrier for public intervention in vacancy. Dessau's demolition is financed by federal policy with explicit spatial intentions, whereas Flint's demolition is complaint-driven without substantial spatial consideration. The study findings indicate that demolition patterns are embedded in structural, historical, and national factors.
</description>
<pubDate>Thu, 12 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157549</guid>
<dc:date>2023-01-12T00:00:00Z</dc:date>
</item>
<item>
<title>Hermes: Boosting the Performance of Machine-Learning-Based Intrusion Detection System through Geometric Feature Learning</title>
<link>https://hdl.handle.net/1721.1/157548</link>
<description>Hermes: Boosting the Performance of Machine-Learning-Based Intrusion Detection System through Geometric Feature Learning
Zhang, Chaoyu; Shi, Shanghao; Wang, Ning; Xu, Xiangxiang; Li, Shaoyu; Zheng, Lizhong; Marchany, Randy; Gardner, Mark; Hou, Y. Thomas; Lou, Wenjing
Anomaly-Based Intrusion Detection Systems (IDSs) have been extensively researched for their ability to detect zero-day attacks. These systems establish a baseline of normal behavior using benign traffic data and flag deviations from this norm as potential threats. They generally experience higher false alarm rates than signature-based IDSs. Unlike image data, where the observed features provide immediate utility, raw network traffic necessitates additional processing for effective detection. It is challenging to learn useful patterns directly from raw traffic data or simple traffic statistics (e.g., connection duration, packet inter-arrival time) as the complex relationships are difficult to distinguish. Therefore, some feature engineering becomes imperative to extract and transform raw data into new feature representations that can directly improve the detection capability and reduce the false positive rate. We propose a geometric feature learning method to optimize the feature extraction process. We employ contrastive feature learning to learn a feature space where normal traffic instances reside in a compact cluster. We further utilize H-Score feature learning to maximize the compactness of the cluster representing the normal behavior, enhancing the subsequent anomaly detection performance. Our evaluations using the NSL-KDD and N-BaIoT datasets demonstrate that the proposed IDS powered by feature learning can consistently outperform state-of-the-art anomaly-based IDS methods by significantly lowering the false positive rate. Furthermore, we deploy the proposed IDS on a Raspberry Pi 4 and demonstrate its applicability on resource-constrained Internet of Things (IoT) devices, highlighting its versatility for diverse application scenarios.
MobiHoc '24: Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing Athens Greece October 14 - 17, 2024
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157548</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models</title>
<link>https://hdl.handle.net/1721.1/157546</link>
<description>MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models
Thao, Phan Nguyen Minh; Dao, Cong-Tinh; Wu, Chenwei; Wang, Jian-Zhe; Liu, Shun; Ding, Jun-En; Restrepo, David; Liu, Feng; Hung, Fang-Ming; Peng, Wen-Chih
Electronic health records (EHRs) are multimodal by nature, consisting of structured tabular features like lab tests and unstructured clinical notes. In real-life clinical practice, doctors use complementary multimodal EHR data sources to get a clearer picture of patients' health and support clinical decision-making. However, most EHR predictive models do not reflect these procedures, as they either focus on a single modality or overlook the inter-modality interactions/redundancy. In this work, we propose MEDFuse, a Multimodal EHR Data Fusion framework that incorporates masked lab-test modeling and large language models (LLMs) to effectively integrate structured and unstructured medical data. MEDFuse leverages multimodal embeddings extracted from two sources: LLMs fine-tuned on free clinical text and masked tabular transformers trained on structured lab test results. We design a disentangled transformer module, optimized by a mutual information loss to 1) decouple modality-specific and modality-shared information and 2) extract useful joint representation from the noise and redundancy present in clinical notes. Through comprehensive validation on the public MIMIC-III dataset and the in-house FEMH dataset, MEDFuse demonstrates great potential in advancing clinical predictions, achieving over 90% F1 score in the 10-disease multi-label classification task.
CIKM ’24, October 21–25, 2024, Boise, ID, USA
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157546</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Improvements to Quantum Interior Point Method for Linear Optimization</title>
<link>https://hdl.handle.net/1721.1/157545</link>
<description>Improvements to Quantum Interior Point Method for Linear Optimization
Mohammadisiahroudi, Mohammadhossein; Wu, Zeguan; Augustino, Brandon; Carr, Arielle; Terlaky, Tamás
Quantum linear system algorithms (QLSA) have the potential to speed up Interior Point Methods (IPM). However, a major bottleneck is the inexactness of quantum tomography when extracting classical solutions from quantum states. In addition, QLSAs are sensitive to the condition number, and this sensitivity is exacerbated when the Newton systems arising in IPMs converge to a singular matrix. Recently, an Inexact Feasible Quantum IPM (IF-QIPM) has been developed that addresses the inexactness of QLSAs. However, this method requires a large number of gates and qubits to be implemented. Here, we propose a new IF-QIPM using the normal equation system, which requires fewer gates and qubits. To mitigate the sensitivity to the condition number and other input data-related parameters, we use preconditioning coupled with iterative refinement to obtain better complexity. Finally, we demonstrate the effectiveness of our approach on IBM Qiskit simulators.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157545</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperation and Fairness in Multi-Agent Reinforcement Learning</title>
<link>https://hdl.handle.net/1721.1/157544</link>
<description>Cooperation and Fairness in Multi-Agent Reinforcement Learning
Aloor, Jasmine; Nayak, Siddharth Nagar; Dolan, Sydney; Balakrishnan, Hamsa
Multi-agent systems are trained to maximize shared cost objectives, which typically reflect system-level efficiency. However, in the resource-constrained environments of mobility and transportation systems, efficiency may be achieved at the expense of fairness: certain agents may incur significantly greater costs or lower rewards compared to others. Tasks could be distributed inequitably, leading to some agents receiving an unfair advantage while others incur disproportionately high costs. It is, therefore, important to consider the tradeoffs between efficiency and fairness in such settings. We consider the problem of fair multi-agent navigation for a group of decentralized agents using multi-agent reinforcement learning (MARL). We consider the reciprocal of the coefficient of variation of the distances traveled by different agents as a measure of fairness and investigate whether agents can learn to be fair without significantly sacrificing efficiency (i.e., increasing the total distance traveled). We find that by training agents using min-max fair distance goal assignments along with a reward term that incentivizes fairness as they move towards their goals, the agents (1) learn a fair assignment of goals and (2) achieve almost perfect goal coverage in navigation scenarios using only local observations. For goal coverage scenarios, we find that, on average, the proposed model yields a 14% improvement in efficiency and a 5% improvement in fairness over a baseline model that is trained using random assignments. Furthermore, an average of 21% improvement in fairness can be achieved by the proposed model as compared to a model trained on optimally efficient assignments; this increase in fairness comes at the expense of only a 7% decrease in efficiency. Finally, we extend our method to environments in which agents must complete coverage tasks in prescribed formations and show that it is possible to do so without tailoring the models to specific formation shapes.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157544</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Tiny Pointers</title>
<link>https://hdl.handle.net/1721.1/157543</link>
<description>Tiny Pointers
Bender, Michael; Conway, Alex; Farach-Colton, Martin; Kuszmaul, William; Tagliavini, Guido
This paper introduces a new data-structural object that we call the tiny pointer. In many applications, traditional log n-bit pointers can be replaced with o(log n)-bit tiny pointers at the cost of only a constant-factor time overhead. We develop a comprehensive theory of tiny pointers, and give optimal constructions for both fixed-size tiny pointers (i.e., settings in which all of the tiny pointers must be the same size) and variable-size tiny pointers (i.e., settings in which the average tiny-pointer size must be small, but some tiny pointers can be larger). If a tiny pointer references an element in an array filled to load factor 1 − 1/k, then the optimal tiny-pointer size is Θ(log log log n + log k) bits in the fixed-size case, and Θ(log k) expected bits in the variable-size case. Our tiny-pointer constructions also require us to revisit several classic problems having to do with balls and bins; these results may be of independent interest.  Using tiny pointers, we revisit five classic data-structure problems: the data-retrieval problem, succinct dynamic binary search trees, space-efficient stable dictionaries, space-efficient dictionaries with variable-size keys, and the internal-memory stash problem. These are all well-studied problems, and in each case tiny pointers allow us to take a natural space-inefficient solution that uses pointers and make it space-efficient for free.
</description>
<pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157543</guid>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</item>
<item>
<title>Cooperative Advisory Residual Policies for Congestion Mitigation</title>
<link>https://hdl.handle.net/1721.1/157542</link>
<description>Cooperative Advisory Residual Policies for Congestion Mitigation
Hasan, Aamir; Chakraborty, Neeloy; Chen, Haonan; Cho, Jung-Hoon; Wu, Cathy; Driggs-Campbell, Katherine
Fleets of autonomous vehicles can mitigate traffic congestion through simple actions, thus improving many socioeconomic factors such as commute time and gas costs. However, these approaches are limited in practice as they assume precise control over autonomous vehicle fleets, incur extensive installation costs for a centralized sensor ecosystem, and also fail to account for uncertainty in driver behavior. To this end, we develop a class of learned residual policies that can be used in cooperative advisory systems and only require the use of a single vehicle with a human driver. Our policies advise drivers to behave in ways that mitigate traffic congestion while accounting for diverse driver behaviors, particularly drivers' reactions to instructions, to provide an improved user experience. To realize such policies, we introduce an improved reward function that explicitly addresses congestion mitigation and driver attitudes to advice. We show that our residual policies can be personalized by conditioning them on an inferred driver trait that is learned in an unsupervised manner with a variational autoencoder. Our policies are trained in simulation with our novel instruction adherence driver model, and evaluated in simulation and through a user study (N=16) to capture the sentiments of human drivers. Our results show that our approaches successfully mitigate congestion while adapting to different driver behaviors, with up to 20% and 40% improvement as measured by a combination metric of speed and deviations in speed across time over baselines in our simulation tests and user study, respectively. Our user study further shows that our policies are human-compatible and personalize to drivers.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157542</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Quantum Anomalous Hall Phases in Pentalayer Rhombohedral Graphene Moiré Structures</title>
<link>https://hdl.handle.net/1721.1/157541</link>
<description>Theory of Quantum Anomalous Hall Phases in Pentalayer Rhombohedral Graphene Moiré Structures
Dong, Zhihuan; Patri, Adarsh S.; Senthil, Todadri
Remarkable recent experiments on the moiré structure formed by pentalayer rhombohedral graphene aligned with a hexagonal boron nitride substrate report the discovery of a zero field fractional quantum Hall effect. These “(fractional) quantum anomalous Hall” [(F)QAH] phases occur for one sign of a perpendicular displacement field, and correspond, experimentally, to full or partial filling of a valley polarized Chern-1 band. Such a band is absent in the noninteracting band structure. Here we show that electron-electron interactions play a crucial role, and present microscopic theoretical calculations demonstrating the emergence of a nearly flat, isolated, Chern-1 band and FQAH phases in this system. We also study the four- and six-layer analogs and identify parameters where a nearly flat isolated Chern-1 band emerges which may be suitable to host FQAH physics.
</description>
<pubDate>Tue, 12 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157541</guid>
<dc:date>2024-11-12T00:00:00Z</dc:date>
</item>
<item>
<title>Electrochemical Methods for Water Purification, Ion Separations, and Energy Conversion</title>
<link>https://hdl.handle.net/1721.1/157540</link>
<description>Electrochemical Methods for Water Purification, Ion Separations, and Energy Conversion
Alkhadra, Mohammad A; Su, Xiao; Suss, Matthew E; Tian, Huanhuan; Guyes, Eric N; Shocron, Amit N; Conforti, Kameron M; de Souza, J Pedro; Kim, Nayeong; Tedesco, Michele; Khoiruddin, Khoiruddin; Wenten, I Gede; Santiago, Juan G; Hatton, T Alan; Bazant, Martin Z
Agricultural development, extensive industrialization, and rapid growth of the global population have inadvertently been accompanied by environmental pollution. Water pollution is exacerbated by the decreasing ability of traditional treatment methods to comply with tightening environmental standards. This review provides a comprehensive description of the principles and applications of electrochemical methods for water purification, ion separations, and energy conversion. Electrochemical methods have attractive features such as compact size, chemical selectivity, broad applicability, and reduced generation of secondary waste. Perhaps the greatest advantage of electrochemical methods, however, is that they remove contaminants directly from the water, while other technologies extract the water from the contaminants, which enables efficient removal of trace pollutants. The review begins with an overview of conventional electrochemical methods, which drive chemical or physical transformations via Faradaic reactions at electrodes, and proceeds to a detailed examination of the two primary mechanisms by which contaminants are separated in nondestructive electrochemical processes, namely electrokinetics and electrosorption. In these sections, special attention is given to emerging methods, such as shock electrodialysis and Faradaic electrosorption. Given the importance of generating clean, renewable energy, which may sometimes be combined with water purification, the review also discusses inverse methods of electrochemical energy conversion based on reverse electrosorption, electrowetting, and electrokinetic phenomena. The review concludes with a discussion of technology comparisons, remaining challenges, and potential innovations for the field such as process intensification and technoeconomic optimization.
</description>
<pubDate>Wed, 24 Aug 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157540</guid>
<dc:date>2022-08-24T00:00:00Z</dc:date>
</item>
<item>
<title>Gelation, clustering, and crowding in the electrical double layer of ionic liquids</title>
<link>https://hdl.handle.net/1721.1/157539</link>
<description>Gelation, clustering, and crowding in the electrical double layer of ionic liquids
Goodwin, Zachary AH; McEldrew, Michael; Pedro de Souza, J; Bazant, Martin Z; Kornyshev, Alexei A
Understanding the bulk and interfacial properties of super-concentrated electrolytes, such as ionic liquids (ILs), has attracted significant attention lately for their promising applications in supercapacitors and batteries. Recently, McEldrew et al. [J. Phys. Chem. B 125, 2677 (2021)] developed a theory for reversible ion associations in bulk ILs, which accounted for the formation of all possible (Cayley tree) clusters and a percolating ionic network (gel). Here, we adopt and develop this approach to understand the associations of ILs in the electrical double layer at electrified interfaces. With increasing charge of the electrode, the theory predicts a transition from a regime dominated by a gelled or clustered state to a crowding regime dominated by free ions. This transition from gelation to crowding is conceptually similar to the overscreening to crowding transition.
</description>
<pubDate>Wed, 07 Sep 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157539</guid>
<dc:date>2022-09-07T00:00:00Z</dc:date>
</item>
<item>
<title>A regulatory role for repeated decoy transcription factor binding sites in target gene expression</title>
<link>https://hdl.handle.net/1721.1/157538</link>
<description>A regulatory role for repeated decoy transcription factor binding sites in target gene expression
Lee, Tek‐Hyung; Maheshri, Narendra
Tandem repeats of DNA that contain transcription factor (TF) binding sites could serve as decoys, competitively binding to TFs and affecting target gene expression. Using a synthetic system in budding yeast, we demonstrate that repeated decoy sites inhibit gene expression by sequestering a transcriptional activator and converting the graded dose–response of target promoters to a sharper, sigmoidal‐like response. On the basis of both modeling and chromatin immunoprecipitation measurements, we attribute the altered response to TF binding decoy sites more tightly than promoter binding sites. Tight TF binding to arrays of contiguous repeated decoy sites only occurs when the arrays are mostly unoccupied. Finally, we show that the altered sigmoidal‐like response can convert the graded response of a transcriptional positive‐feedback loop to a bimodal response. Together, these results show how changing numbers of repeated TF binding sites lead to qualitative changes in behavior and raise new questions about the stability of TF/promoter binding.
</description>
<pubDate>Tue, 27 Mar 2012 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157538</guid>
<dc:date>2012-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>The CMS Statistical Analysis and Combination Tool: Combine</title>
<link>https://hdl.handle.net/1721.1/157537</link>
<description>The CMS Statistical Analysis and Combination Tool: Combine
This paper describes the Combine software package used for statistical analyses by the CMS Collaboration. The package, originally designed to perform searches for a Higgs boson and the combined analysis of those searches, has evolved to become the statistical analysis tool presently used in the majority of measurements and searches performed by the CMS Collaboration. It is not specific to the CMS experiment, and this paper is intended to serve as a reference for users outside of the CMS Collaboration, providing an outline of the most salient features and capabilities. Readers are provided with the possibility to run Combine and reproduce examples provided in this paper using a publicly available container image. Since the package is constantly evolving to meet the demands of ever-increasing data sets and analysis sophistication, this paper cannot cover all details of Combine. However, the online documentation referenced within this paper provides an up-to-date and complete user guide.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157537</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>Materials beyond monolayers: The magnetic quasi-1D semiconductor CrSBr</title>
<link>https://hdl.handle.net/1721.1/157536</link>
<description>Materials beyond monolayers: The magnetic quasi-1D semiconductor CrSBr
Klein, Julian; Ross, Frances M.
The all-surface nature of atomically thin van der Waals materials can present challenges for practical applications. Fortunately, new layered materials are on the horizon that preserve their useful properties even when thicker than a monolayer. Here, we summarize our interest in one of these emergent materials, the magnetic semiconductor CrSBr. We describe monolayer properties exhibited by this material in its bulk form, discussing how the quasi-1D electronic structure of CrSBr allows mono- or bilayer physics to be displayed even in thick crystals. Long-range magnetic order offers additional tuning with the coupled lattice, spin, orbit, and charge degrees of freedom enabling magneto-correlated phenomena. We discuss the stability of CrSBr in air and show atomic scale structural manipulation through electron beam-driven transformations. We conclude that the stability and structural amenability of CrSBr provide opportunities for imagining devices that use bulk crystals yet exploit unique magnetic and quantum confinement effects.
</description>
<pubDate>Wed, 06 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157536</guid>
<dc:date>2024-11-06T00:00:00Z</dc:date>
</item>
<item>
<title>Expediting treatments in the 21st century: orphan drugs and accelerated approvals</title>
<link>https://hdl.handle.net/1721.1/157535</link>
<description>Expediting treatments in the 21st century: orphan drugs and accelerated approvals
Domike, Reuben; Raju, G. K.; Sullivan, Jamie; Kennedy, Annie
Background In response to catalyzation by activated patient communities, two significant efforts by the FDA to expedite treatments have now been in place for multiple decades. In 1983, the United States Congress passed the Orphan Drug Act to provide financial incentives for development of drugs for rare diseases. In 1992, partly in response to the HIV epidemic, the FDA implemented Accelerated Approval (AA) to expedite access to promising new therapies to treat serious conditions with unmet medical need based on surrogate marker efficacy while additional clinical data are confirmed. The uses of these regulatory approaches over time are assessed in this study. Methods The following U.S. FDA CDER published lists were used in this analysis: 1. all orphan designations and approvals; 2. all AA and their details updated through December 31, 2022; 3. new molecular entities (NMEs). Results Orphan drug designations and approvals have increased several-fold over the past four decades. The largest increase recently has been in therapies targeting oncological diseases (comprising both oncology and malignant hematology). Although orphan drug approvals based on NMEs are the minority of orphan drug designations, the count of approved orphan drug NMEs has increased in recent years. The characteristics of orphan drug approvals show notable differences by disease area, with rare diseases and medical genetics (49%) having a relatively large fraction of orphan drug approvals with NMEs compared to the oncological diseases (32%). Similar to the use of orphan drug designation, oncological disease therapies have been the largest utilizers of AA. Many therapies targeting these diseases address unmet medical need and can leverage surrogate markers that have previously been used in similar trials.
The timings of conversion of AA (confirmed or withdrawn) were assessed and found to be consistent across decades and to have some dependency upon the broad disease area (when assessed by three large groups: HIV conversions were fastest; followed by oncology; followed by all others). By the end of 2022, 98% of the first 105 AA (approved in 2010 or earlier) had been converted to confirmed or withdrawn. Conclusions Although the typical timings for AA to be confirmed or withdrawn have not changed significantly over the decades, the disease areas utilizing orphan drug designation and AA have changed significantly over time. Both programs have had increases in their use for therapies targeting oncological diseases. The re-use of surrogate markers for oncological diseases has been an advantage in a way that may not be scientifically feasible in many other disease areas that have greater differentiation across disease etiology. For non-oncological diseases, applicability of AA is, in part, dependent upon greater focus on characterization and acceptance of novel surrogate markers.
</description>
<pubDate>Fri, 08 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157535</guid>
<dc:date>2024-11-08T00:00:00Z</dc:date>
</item>
<item>
<title>What the visual system can learn from the non-dominant hand: The effect of graphomotor engagement on visual discrimination</title>
<link>https://hdl.handle.net/1721.1/157534</link>
<description>What the visual system can learn from the non-dominant hand: The effect of graphomotor engagement on visual discrimination
Ben-Ami, Shlomit; Buaron, Batel; Yaron, Ori; Keane, Kyle; Sun, Virginia H.; Phillips, Flip; Friedman, Jason; Sinha, Pawan; Mukamel, Roy
Previous studies have demonstrated that engaging in graphomotor activity for creating graphemes can enhance their subsequent visual discrimination. This suggests a positive influence of the motor system on visual learning. However, existing studies have emphasized the dominant hand, which is more dexterous in fine-motor movements. This near-exclusive focus raises the question of whether the observed perceptual facilitation is a general characteristic of the motor system or specific to pathways controlling the skilled, over-trained dominant hand. Furthermore, the mechanistic underpinnings of visual facilitation from graphomotor training (i.e., the individual contributions of motor activity, the temporal evolution of the visual trace, and the variability of visual output) remain unclear. To address these questions, we assessed visual discrimination capabilities of healthy right-handed participants (N = 60) before and after graphomotor or visual training. Contrary to our initial expectation, graphomotor engagement with the non-dominant hand did not yield additional benefits to visual learning beyond those attainable through visual training alone. Moreover, graphomotor training with the non-dominant hand resulted in visual discrimination improvements comparable to those of dominant-hand training, despite the inherent differences between hands in motor performance and in the amount of improvement in shape tracing throughout training. We conclude that the motor components of graphomotor activity may not be critical for visual learning of shapes through tracing activity. Instead, our results are in agreement with the symbolic theoretical account, suggesting that basic shape features required for discrimination can be acquired through visual inspection alone, providing a perspective on the improvements observed in prior studies.
</description>
<pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157534</guid>
<dc:date>2024-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Selective and Chemical-Free Removal of Toxic Heavy Metal Cations from Water Using Shock Ion Extraction</title>
<link>https://hdl.handle.net/1721.1/157533</link>
<description>Selective and Chemical-Free Removal of Toxic Heavy Metal Cations from Water Using Shock Ion Extraction
Alkhadra, Mohammad A; Jordan, Matthew L; Tian, Huanhuan; Arges, Christopher G; Bazant, Martin Z
Electrochemical methods are known to have attractive features and capabilities when used for ion separations and water purification. In this study, we developed a new process called shock ion extraction (shock IX) for selective and chemical-free removal of toxic heavy metals from water. Shock IX is a hybrid process that combines shock electrodialysis (shock ED) and ion exchange using an ion exchange resin wafer (IERW), and this method can be thought of functionally as an electrochemically assisted variation of traditional ion exchange. In particular, shock IX exhibits greater ion removal and selectivity for longer periods of time, compared to the use of ion exchange alone. The use of an IERW in shock ED also increases multivalent ion selectivity, reduces energy consumption, and improves the hydrodynamics and scalability of the system.
</description>
<pubDate>Tue, 04 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157533</guid>
<dc:date>2022-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>Novel Operating Modes for the Charging of Lithium-ion Batteries</title>
<link>https://hdl.handle.net/1721.1/157532</link>
<description>Novel Operating Modes for the Charging of Lithium-ion Batteries
Berliner, Marc D; Jiang, Benben; Cogswell, Daniel A; Bazant, Martin Z; Braatz, Richard D
Conventional battery simulation tools offer current, voltage, and power operating modes. This article presents General Operating Modes (GOMs), which move beyond these standard modes and allow battery models of any scale to simulate novel operating modes such as constant temperature, constant lithium plating overpotential, and constant concentration. The governing equations of the battery model are solved alongside a single algebraic constraint that determines the current. The operating modes are simulated efficiently and deterministically inside a differential-algebraic equation (DAE) solver, and constraints are satisfied within solver tolerances. We propose a mixed continuous-discrete (i.e., hybrid) solution to the constrained charging problem, using the GOMs to satisfy charging constraints. This approach enables nonlinear model predictive control (NMPC) to be implemented in real time while directly using sophisticated physics-based battery models. The approach is demonstrated for three models of varying complexity: a thin-film nickel hydroxide electrode model, a Single-Particle (SP) model, and a Porous Electrode Theory (PET) model. The hybrid fast charging algorithm is shown to be slightly suboptimal for the thermal SP model in some cases, which is not of practical importance for NMPC.
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157532</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Layered-Oxide Cathode Degradation in Li-ion Batteries by Oxidation-Induced Cation Disorder</title>
<link>https://hdl.handle.net/1721.1/157531</link>
<description>Theory of Layered-Oxide Cathode Degradation in Li-ion Batteries by Oxidation-Induced Cation Disorder
Zhuang, Debbie; Bazant, Martin Z
Disorder-driven degradation phenomena, such as structural phase transformations and surface reconstructions, can significantly reduce the lifetime of Li-ion batteries, especially those with nickel-rich layered-oxide cathodes. We develop a general free energy model for layered-oxide ion-intercalation materials as a function of the degree of disorder, which represents the density of defects in the host crystal. The model accounts for defect core energies, long-range dipolar electrostatic forces, and configurational entropy of the solid solution. In the case of nickel-rich oxides, we hypothesize that nickel with a high concentration of defects is driven into the bulk by electrostatic forces as oxidation reactions at the solid-electrolyte interface reduce nickel and either evolve oxygen or oxidize the organic electrolyte at high potentials (&gt;4.4 V vs Li/Li⁺). The model is used in battery cycling simulations to describe the extent of cathode degradation when using different voltage cutoffs, in agreement with experimental observations that lower-voltage cycling can substantially reduce cathode degradation. The theory provides a framework to guide the development of cathode compositions, coatings and electrolytes to enhance rate capability and extend battery lifetime. The general theory of cation-disorder formation may also find applications in electrochemical water treatment and ion separations, such as lithium extraction from brines, based on competitive ion intercalation in battery materials.
</description>
<pubDate>Sat, 01 Oct 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157531</guid>
<dc:date>2022-10-01T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond Preferences in AI Alignment</title>
<link>https://hdl.handle.net/1721.1/157530</link>
<description>Beyond Preferences in AI Alignment
Zhi-Xuan, Tan; Carroll, Micah; Franklin, Matija; Ashton, Hal
The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, we characterize and challenge the preferentist approach, describing conceptual and technical alternatives that are ripe for further research. We first survey the limits of rational choice theory as a descriptive model, explaining how preferences fail to capture the thick semantic content of human values, and how utility representations neglect the possible incommensurability of those values. We then critique the normativity of expected utility theory (EUT) for humans and AI, drawing upon arguments showing how rational agents need not comply with EUT, while highlighting how EUT is silent on which preferences are normatively acceptable. Finally, we argue that these limitations motivate a reframing of the targets of AI alignment: Instead of alignment with the preferences of a human user, developer, or humanity-writ-large, AI systems should be aligned with normative standards appropriate to their social roles, such as the role of a general-purpose assistant. Furthermore, these standards should be negotiated and agreed upon by all relevant stakeholders. On this alternative conception of alignment, a multiplicity of AI systems will be able to serve diverse ends, aligned with normative standards that promote mutual benefit and limit harm despite our plural and divergent values.
</description>
<pubDate>Sat, 09 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157530</guid>
<dc:date>2024-11-09T00:00:00Z</dc:date>
</item>
<item>
<title>Topological recursion for hyperbolic string field theory</title>
<link>https://hdl.handle.net/1721.1/157529</link>
<description>Topological recursion for hyperbolic string field theory
Fırat, Atakan H.; Valdes-Meller, Nico
We derive an analog of Mirzakhani’s recursion relation for hyperbolic string vertices and investigate its implications for closed string field theory. Central to our construction are systolic volumes: the Weil-Petersson volumes of regions in moduli spaces of Riemann surfaces whose elements have systoles of length at least L ≥ 0. These volumes can be shown to satisfy a recursion relation through a modification of Mirzakhani’s recursion as long as L ≤ 2 sinh⁻¹(1). Applying the pants decomposition of Riemann surfaces to off-shell string amplitudes, we promote this recursion to hyperbolic string field theory and demonstrate that the higher-order vertices are determined iteratively by the cubic vertex for any background. Such structure implies that the solutions of closed string field theory obey a quadratic integral equation. We illustrate the utility of our approach in an example of a stubbed scalar theory.
</description>
<pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157529</guid>
<dc:date>2024-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Measurement of differential ZZ + jets production cross sections in pp collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157528</link>
<description>Measurement of differential ZZ + jets production cross sections in pp collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.
Diboson production in association with jets is studied in the fully leptonic final states, pp → (Z/γ*)(Z/γ*) + jets → 2ℓ2ℓ′ + jets, (ℓ, ℓ′ = e or μ) in proton-proton collisions at a center-of-mass energy of 13 TeV. The data sample corresponds to an integrated luminosity of 138 fb−1 collected with the CMS detector at the LHC. Differential distributions and normalized differential cross sections are measured as a function of jet multiplicity, transverse momentum pT, pseudorapidity η, invariant mass and ∆η of the highest-pT and second-highest-pT jets, and as a function of invariant mass of the four-lepton system for events with various jet multiplicities. These differential cross sections are compared with theoretical predictions that mostly agree with the experimental data. However, in a few regions we observe discrepancies between the predicted and measured values. Further improvement of the predictions is required to describe the ZZ+jets production in the whole phase space.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157528</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Machine learning-guided discovery of gas evolving electrode bubble inactivation</title>
<link>https://hdl.handle.net/1721.1/157527</link>
<description>Machine learning-guided discovery of gas evolving electrode bubble inactivation
Lake, Jack R; Rufer, Simon; James, Jim; Pruyne, Nathan; Scourtas, Aristana; Schwarting, Marcus; Ambadkar, Aadit; Foster, Ian; Blaiszik, Ben; Varanasi, Kripa K
The adverse effects of electrochemical bubbles on the performance of gas-evolving electrodes are well known, but studies of the degree of inactivation caused by adhered bubbles, and of how inactivation changes during bubble evolution, are limited. We study electrode inactivation caused by oxygen evolution while using surface engineering to control bubble formation. We find that assuming inactivation of the entire projected area, as is currently believed, is a poor approximation which leads to non-physical results. Using a machine learning-based bubble detection method to analyze large quantities of experimental image data, we show that bubble impacts are small for surface-engineered electrodes which promote high bubble projected areas while maintaining low direct bubble contact. We thus propose a simple methodology for more accurately estimating the true extent of bubble inactivation, which is closer to the area directly in contact with the bubbles.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157527</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>A “seat-squatting” strategy via lithium substitution to suppress Fe-migration in Na layered oxide cathodes</title>
<link>https://hdl.handle.net/1721.1/157526</link>
<description>A “seat-squatting” strategy via lithium substitution to suppress Fe-migration in Na layered oxide cathodes
Niu, Yaoshen; Hu, Zilin; Mao, Huican; Zhou, Lin; Wang, Liguang; Lou, Xiaobing; Zhang, Bo; Xiao, Dongdong; Yang, Yang; Ding, Feixiang; Rong, Xiaohui; Xu, Juping; Yin, Wen; Zhang, Nian; Li, Zhiwei; Lu, Yaxiang; Hu, Bingwen; Lu, Jun; Li, Ju; Hu, Yong-Sheng
Na-ion batteries (NIBs) are emerging as a promising alternative to Li-ion batteries (LIBs). To align with sustainability principles, the design of electrode materials must incorporate considerations for abundant and environmentally friendly elements, such as redox-active Fe. Despite its appeal, the enduring challenge of Fe migration in layered cathodes remains inadequately addressed over decades. Here, we propose a “seat-squatting” strategy via Li-substitution to fundamentally suppress Fe migration. Li is strategically introduced to migrate first, occupying available migration sites without inducing structural damage and effectively raising the activation energy for Fe migration. Experimental and theoretical validation using O3-Na0.83Li0.17Fe0.33Mn0.5O2 (NaLFM) demonstrates a robust suppression of irreversible Fe migration. As a result, the NaLFM cathode delivers enhanced structural and electrochemical cycling stability. This work illustrates a compelling strategy to curb irreversible Fe migration in NIBs, offering a pathway for the development of stable and cost-effective layered oxides based on Fe redox centers.
</description>
<pubDate>Tue, 15 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157526</guid>
<dc:date>2024-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Phenomenological observations of quinone-mediated zinc oxidation in an alkaline environment</title>
<link>https://hdl.handle.net/1721.1/157525</link>
<description>Phenomenological observations of quinone-mediated zinc oxidation in an alkaline environment
Mallia, Christopher T; Brushett, Fikile R
Redox-mediated electrochemistry is an area of growing interest, particularly in the context of energy storage. The development of such systems requires knowledge of underlying reaction mechanisms, which bear similarities to the processes that underpin corrosion and semiconductor electrochemistry. Herein we discuss an example system, quinone-mediated zinc oxidation in an alkaline environment, using knowledge from the corrosion and semiconductor fields to understand the phenomenological aspects of the reaction.
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157525</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Explaining the spread in measurement of PDMS elastic properties: influence of test method and curing protocol</title>
<link>https://hdl.handle.net/1721.1/157524</link>
<description>Explaining the spread in measurement of PDMS elastic properties: influence of test method and curing protocol
Varner, Hannah; Cohen, Tal
Accuracy in the measurement of mechanical properties is essential for precision engineering and for the interrogation of composition–property relationships. Conventional methods of mechanical testing, such as uniaxial tension, compression, and nanoindentation, provide highly repeatable and reliable results for stiff materials, for which they were originally developed. However, when applied to the characterization of soft and biological materials, the same cannot be said, and the spread of reported properties of similar materials is vast. Polydimethylsiloxane (PDMS), commonly obtained from Dow as SYLGARD 184, is one such ubiquitous material, which has been integral to the rapid development of biocompatible microfluidic devices and flexible electronics in recent decades. However, reported shear moduli of this material range over 2 orders of magnitude for similar chemical compositions. Taking advantage of the increased mechanical scrutiny afforded to SYLGARD 184 in recent years, we combine both published and new experimental data obtained using 9 mechanical test methods. A statistical analysis then elucidates the significant bias induced by the test method itself, and distinguishes this bias from the influence of curing protocols on the mechanical properties. The goal of this work is thus two-fold: (i) it provides a quantitative understanding of the different factors that influence reported properties of this particular material, and (ii) it serves as a cautionary tale. As researchers in the field of mechanics strive to quantify the properties of increasingly complex soft and biological materials, converging on a standardized measurement of PDMS is a necessary first step.
</description>
<pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157524</guid>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Microfluidic Hanging Droplet as a Programmable Platform for Mammalian Egg Vitrification</title>
<link>https://hdl.handle.net/1721.1/157523</link>
<description>Microfluidic Hanging Droplet as a Programmable Platform for Mammalian Egg Vitrification
Feng, Haidong; Katsikis, Georgios; Napier, India; Du, Gong; Lim, Josh; Doyle, Joseph; Manalis, Scott R; Griffith, Linda G
Egg (oocyte) vitrification is the dominant method for preserving fertility for women of reproductive age. However, the method is typically performed by hand, requiring precise (∼0.1 to 10 μL) and time-sensitive (∼1 s) liquid exchange of cryoprotectants (CPA) around eggs as well as fine handling of eggs (∼100 μm) for immersion into liquid nitrogen (LN2). Here, we developed a microfluidic platform for programmable vitrification. Our platform is based on a millimeter-sized hanging droplet inside which a given egg is suspended and subjected to liquid exchanges within seconds. After programmable exposures to CPA, the egg is extracted from the liquid–air interface of the droplet using a motorized fine-tip instrument and immersed into LN2 for vitrification. To benchmark our platform with the manual method, we vitrified over a hundred mouse eggs and found comparable percentages (∼95%) for post-vitrification survivability. In addition, our platform performs real-time microscopy of the egg thereby enabling future studies where its morphology may be linked to functional outcomes. Our study contributes to the ongoing efforts to enhance the automation of embryology techniques towards broader applications in reproductive medicine both for clinical and research purposes.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157523</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Multiscale dynamics of charging and plating in graphite electrodes coupling operando microscopy and phase-field modelling</title>
<link>https://hdl.handle.net/1721.1/157522</link>
<description>Multiscale dynamics of charging and plating in graphite electrodes coupling operando microscopy and phase-field modelling
Lu, Xuekun; Lagnoni, Marco; Bertei, Antonio; Das, Supratim; Owen, Rhodri E; Li, Qi; O’Regan, Kieran; Wade, Aaron; Finegan, Donal P; Kendrick, Emma; Bazant, Martin Z; Brett, Dan JL; Shearing, Paul R
The phase separation dynamics in graphitic anodes significantly affects lithium plating propensity, which is the major degradation mechanism that impairs the safety and fast charge capabilities of automotive lithium-ion batteries. In this study, we present a comprehensive investigation employing operando high-resolution optical microscopy combined with non-equilibrium thermodynamics implemented in a multi-dimensional (1D+1D to 3D) phase-field modeling framework to reveal the rate-dependent spatial dynamics of phase separation and plating in graphite electrodes. Here we visualize and provide mechanistic understanding of the multistage phase separation, plating, inter/intra-particle lithium exchange and plated lithium back-intercalation phenomena. A strong dependence of intra-particle lithiation heterogeneity on the particle size, shape, orientation, surface condition and C-rate at the particle level is observed, which leads to early onset of plating spatially resolved by a 3D image-based phase-field model. Moreover, we highlight the distinct relaxation processes at different state-of-charges (SOCs), wherein thermodynamically unstable graphite particles undergo a drastic intra-particle lithium redistribution and inter-particle lithium exchange at intermediate SOCs, whereas the electrode equilibrates much more slowly at low and high SOCs. These physics-based insights into the distinct SOC-dependent relaxation efficiency provide a new perspective towards developing advanced fast charge protocols to suppress plating and shorten the constant voltage regime.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157522</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Phase-Field Computational Framework for Addressing Challenges in Solid-State Batteries</title>
<link>https://hdl.handle.net/1721.1/157521</link>
<description>Phase-Field Computational Framework for Addressing Challenges in Solid-State Batteries
Schwietert, Tammo K; Ombrini, Pierfrancesco; Ootes, Laura S; Oostrum, Leon; Azizi, Victor; Cogswell, Daniel; Zhu, Juner; Bazant, Martin Z; Wagemaker, Marnix; Vasileiadis, Alexandros
All-solid-state batteries are attracting increasing interest due to their higher promised energy densities without the use of flammable liquid electrolytes. Two main challenges for solid-state batteries are contact loss and interphase formation; these play a central role in the quality of the solid-electrolyte–electrode interfaces. Here, we present a modular phase-field modeling framework that is generally applicable to solid-state batteries with different electrodes and corresponding microstructures. The model is based on multiphase porous electrode theory, where Li-ion diffusion in solid electrolytes and electrode materials is integrated through a regular solution free energy functional. Modules for contact loss and diffusive interlayers, able to capture solid-solid and solid-liquid interfaces such as solid-electrolyte interphase formation and coatings, are also implemented, providing numerous modeling options for a comprehensive understanding of electrochemical systems. A thorough comparison between the solid-state and conventional liquid-electrolyte models for phase-separating electrodes reveals the optimal conditions and bottlenecks of solid-state diffusion and failure mechanisms. The predictions underline contact loss and interphase formation as the crucial mesoscopic morphological characteristics of solid-state systems, setting the basis for in-depth understanding and optimized performance in all-solid-state batteries.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157521</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Theory of Cation Solvation and Ionic Association in Nonaqueous Solvent Mixtures</title>
<link>https://hdl.handle.net/1721.1/157520</link>
<description>Theory of Cation Solvation and Ionic Association in Nonaqueous Solvent Mixtures
Goodwin, Zachary AH; McEldrew, Michael; Kozinsky, Boris; Bazant, Martin Z
Conventional lithium-ion batteries, and many next-generation technologies, rely on organic electrolytes with multiple solvents to achieve the desired physicochemical and interfacial properties. The complex interplay between these properties can often be elucidated via the coordination environment of the cation. We develop a theory for the coordination shell of cations in nonaqueous solvent mixtures that can be applied with high fidelity, up to extremely high salt concentrations. Our theory can naturally explain simulation and experimental values of cation solvation in “classical” nonaqueous electrolytes. Moreover, we utilize our theory to understand general design principles of emerging classes of nonaqueous electrolyte mixtures, such as high entropy electrolytes. It is hoped that this theory provides a systematic framework to understand simulations and experiments that engineer the solvation structure and ionic associations of concentrated nonaqueous electrolytes.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157520</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic interaction networks in a hierarchically organized tissue</title>
<link>https://hdl.handle.net/1721.1/157519</link>
<description>Dynamic interaction networks in a hierarchically organized tissue
Kirouac, Daniel C.; Ito, Caryn; Csaszar, Elizabeth; Roch, Aline; Yu, Mei; Sykes, Edward A.; Bader, Gary D.; Zandstra, Peter W.
Intercellular (between cell) communication networks maintain homeostasis and coordinate regenerative and developmental cues in multicellular organisms. Despite the importance of intercellular networks in stem cell biology, their rules, structure and molecular components are poorly understood. Herein, we describe the structure and dynamics of intercellular and intracellular networks in a stem cell derived, hierarchically organized tissue using experimental and theoretical analyses of cultured human umbilical cord blood progenitors. By integrating high-throughput molecular profiling, database and literature mining, mechanistic modeling, and cell culture experiments, we show that secreted factor-mediated intercellular communication networks regulate blood stem cell fate decisions. In particular, self-renewal is modulated by a coupled positive–negative intercellular feedback circuit composed of megakaryocyte-derived stimulatory growth factors (VEGF, PDGF, EGF, and serotonin) versus monocyte-derived inhibitory factors (CCL3, CCL4, CXCL10, TGFB2, and TNFSF9). We reconstruct a stem cell intracellular network, and identify PI3K, Raf, Akt, and PLC as functionally distinct signal integration nodes linking extracellular and intracellular signaling. This represents the first systematic characterization of how stem cell fate decisions are regulated non-autonomously through lineage-specific interactions with differentiated progeny.
</description>
<pubDate>Tue, 05 Oct 2010 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157519</guid>
<dc:date>2010-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Targeted hematopoietic stem cell depletion through SCF-blockade</title>
<link>https://hdl.handle.net/1721.1/157518</link>
<description>Targeted hematopoietic stem cell depletion through SCF-blockade
Chan, Yan Y.; Ho, Pui Y.; Dib, Carla; Swartzrock, Leah; Rayburn, Maire; Willner, Hana; Ko, Ethan; Ho, Katie; Down, Julian D.; Wilkinson, Adam C.; Nakauchi, Hiro; Denis, Morgane; Cool, Taylor; Czechowicz, Agnieszka
Background Hematopoietic stem cell transplantation (HSCT) is a curative treatment for many diverse blood and immune diseases. However, current HSCT regimens commonly utilize genotoxic chemotherapy and/or total body irradiation (TBI) conditioning, which causes significant morbidity and mortality by inducing broad tissue damage that triggers infections, graft vs. host disease, infertility, and secondary cancers. We previously demonstrated that targeted monoclonal antibody (mAb)-based HSC depletion with anti(α)-CD117 mAbs could be an effective alternative conditioning approach for HSCT without toxicity in severe combined immunodeficiency (SCID) mouse models, which has prompted parallel clinical αCD117 mAbs to be developed and tested as conditioning agents in clinical trials, starting with treatment of patients with SCID. Subsequent efforts have built upon this work to develop various combination approaches, though none are optimal and how any of these mAbs fully function is unknown. Methods To improve the efficacy of mAb-based conditioning as a stand-alone conditioning approach for all HSCT settings, it is critical to understand the mechanistic action of αCD117 mAbs on HSCs. Here, we compare the antagonistic properties of αCD117 mAb clones including ACK2, 2B8, and 3C11 as well as ACK2 fragments in vitro and in vivo in both SCID and wildtype (WT) mouse models. Further, to augment efficacy, combination regimens were also explored. Results We confirm that only ACK2 fully inhibits SCF binding and prevents HSC proliferation in vitro. Further, we verify that this corresponds to HSC depletion in vivo and donor engraftment post HSCT in SCID mice.
We also show that SCF-blocking αCD117 mAb fragment derivatives retain similar HSC depletion capacity with enhanced engraftment post HSCT in SCID settings, but only full αCD117 mAb ACK2 in combination with αCD47 mAb enables enhanced donor HSC engraftment in WT settings, highlighting that the Fc region is not required for single-agent efficacy in SCID settings but is required in immunocompetent settings. This combination was the only non-genotoxic conditioning approach that enabled robust donor engraftment post HSCT in WT mice. Conclusion These findings shed new insights into the mechanism of αCD117 mAb-mediated HSC depletion. Further, they highlight multiple approaches for efficacy in SCID settings and optimal combinations for WT settings. This work is likely to aid in the development of clinical non-genotoxic HSCT conditioning approaches that could benefit millions of people world-wide.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157518</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>On the non-perturbative bulk Hilbert space of JT gravity</title>
<link>https://hdl.handle.net/1721.1/157517</link>
<description>On the non-perturbative bulk Hilbert space of JT gravity
Iliesiu, Luca V.; Levine, Adam; Lin, Henry W.; Maxfield, Henry; Mezei, Márk
What is the bulk Hilbert space of quantum gravity? In this paper, we resolve this problem in 2d JT gravity, both with and without matter, providing an explicit definition of a non-perturbative Hilbert space specified in terms of metric variables. The states are wavefunctions of the length and matter state, but with a non-trivial and highly degenerate inner product. We explicitly identify the null states, and discuss their importance for defining operators non-perturbatively. To highlight the power of the formalism we developed, we study the non-perturbative effects for two bulk linear operators that may serve as proxies for the experience of an observer falling into a two-sided black hole: one captures the length of an Einstein-Rosen bridge and the other captures the center-of-mass collision energy between two particles falling from opposite sides. We track the behavior of these operators up to times of order e^(S_BH), at which point the wavefunction spreads to the complete set of eigenstates of these operators. If these observables are indeed good proxies for the experience of an infalling observer, our results indicate an O(1) probability of detecting a firewall at late times that is self-averaging and universal.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157517</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Health-related quality of life dynamics: modeling insights from immunotherapy</title>
<link>https://hdl.handle.net/1721.1/157516</link>
<description>Health-related quality of life dynamics: modeling insights from immunotherapy
Hasgul, Zeynep; Spanjaart, Anne; Javed, Sumreen; Akhavan, Ali; Kersten, Marie J.; Jalali, Mohammad S.
Understanding how treatments affect patients’ quality of life over time is crucial, but capturing the complex interactions of health factors poses a challenge for clinical and observational research. To overcome this, we have turned to simulation modeling, a method that allows for a more thorough exploration of these dynamics. Our study focuses on cancer immunotherapy, a treatment that, despite its potential to prolong survival, also comes with life-threatening risks. We evaluated the effectiveness of two strategies aimed at improving quality of life: reducing the time to treatment infusion and enhancing social support. These strategies were assessed across three different patient scenarios: those not initially eligible for treatment, patients experiencing a relapse, and patients showing a complete response. By using simulation modeling, we demonstrated how this approach can help explore the dynamics and interactions of various health factors and the impact of specific strategies.
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157516</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Inhibitory Potential of the Truncated Isoforms on Glutamate Transporter Oligomerization Identified by Computational Analysis of Gene-Centric Isoform Maps</title>
<link>https://hdl.handle.net/1721.1/157515</link>
<description>Inhibitory Potential of the Truncated Isoforms on Glutamate Transporter Oligomerization Identified by Computational Analysis of Gene-Centric Isoform Maps
Karagöl, Alper; Karagöl, Taner; Li, Mengke; Zhang, Shuguang
Objective Glutamate transporters play a key role in central nervous system physiology by maintaining excitatory neurotransmitter homeostasis. Biological assemblies of the transporters, consisting of cyclic homotrimers, emerge as a crucial aspect of glutamate transporter modulation. Hence targeting heteromerization promises an effective approach for modulator design. On the other hand, the dynamic nature of transcription allows for the generation of transporter isoforms in structurally distinct manners. Methods The potential isoforms were identified through the analysis of computationally generated gene-centric isoform maps. The conserved features of isoform sequences were revealed by computational chemistry methods and subsequent structural analysis of AlphaFold2 predictions. Truncated isoforms were further subjected to a wide range of docking analyses, 50 ns molecular dynamics simulations, and evolutionary coupling analyses. Results Energetic landscapes of isoform-canonical transporter complexes suggested an inhibitory potential of truncated isoforms on glutamate transporter bio-assembly. Moreover, isoforms that mimic the trimerization domain (in particular, TM2 helices) exhibited stronger interactions with canonical transporters, underscoring the role of transmembrane helices in isoform interactions. Additionally, self-assembly dynamics observed in truncated isoforms mimicking canonical TM5 helices indicate a potential protective role against unwanted interactions with canonical transporters. Conclusion Our computational studies on glutamate transporters offer insights into the roles of alternative splicing in protein interactions and identify potential drug targets for physiological or pathological processes.
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157515</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Revisiting values in evaluation: exploring the role of values in shaping evaluation practices and their influences on decision-making within English higher education providers</title>
<link>https://hdl.handle.net/1721.1/157514</link>
<description>Revisiting values in evaluation: exploring the role of values in shaping evaluation practices and their influences on decision-making within English higher education providers
Kelly, Catherine
Theoretical and empirical contributions to research on evaluation have advanced our understanding of how values influence evaluation practice. Yet rather than ask how values shape evaluation and its use, research on the evaluation of widening participation (WP) programmes delivered by English higher education (HE) providers has focused on methodological deficits. Instead, this study explores the complexity of how national policy, organisational imperatives and the individual values of staff responsible for WP within HE providers influence how evaluation is practised and used to inform decision-making. The results of semi-structured interviews with 17 staff members spanning the organisational hierarchy of three diverse English HE providers highlight conflicts between staff values, job roles and responsibilities and espoused organisational values, and how these conflicts can drive symbolic and legitimising evaluation practices. At the individual level, by contrast, staff values support the process and instrumental use of evaluation to inform programme improvements. The findings identify implications for how HE providers can shape their evaluation systems, and how staff choose to enact evaluation within their programme areas.
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157514</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Ladder Polyether Synthesis via Epoxide-Opening Cascades Using a Disappearing Directing Group</title>
<link>https://hdl.handle.net/1721.1/157513</link>
<description>Ladder Polyether Synthesis via Epoxide-Opening Cascades Using a Disappearing Directing Group
Simpson, Graham L; Heffron, Timothy P; Merino, Estíbaliz; Jamison, Timothy F
The combination of a trimethylsilyl group, a Brønsted base, a fluoride source, and a hydroxylic solvent enables the first construction of the tetrad of tetrahydropyran rings found in the majority of the ladder polyether natural products by way of a cascade of epoxide-opening events that emulates the final step of Nakanishi's proposed biosynthetic pathway. The trimethylsilyl group disappears during the course of the cascade, and thus these are the first epoxide ring-opening cascades that afford ladder polyether subunits containing no directing groups at the end of the cascade.
</description>
<pubDate>Wed, 01 Feb 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157513</guid>
<dc:date>2006-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electrostatic microfiltration (EM) enriches and recovers viable microorganisms at low-abundance in large-volume samples and enhances downstream detection</title>
<link>https://hdl.handle.net/1721.1/157512</link>
<description>Electrostatic microfiltration (EM) enriches and recovers viable microorganisms at low-abundance in large-volume samples and enhances downstream detection
Liu, Yaoping; Raymond, Joshua J; Wu, Xiaolin; Chua, Patrina Wei Lin; Ling, Sharon Yan Han; Chan, Chia Ching; Chan, Cheryl; Loh, Joanne Xin Yi; Song, Melody Xing Yen; Ong, Matilda Yu Yan; Ho, Peiying; Mcbee, Megan E; Springs, Stacy L; Yu, Hanry; Han, Jongyoon
Rapid and sensitive detection of pathogens in various samples is crucial for disease diagnosis, environmental surveillance, as well as food and water safety monitoring. However, the low abundance of pathogens (&lt;10 CFU) in large-volume (1 mL–1 L) samples containing vast backgrounds critically limits the sensitivity of even the most advanced techniques, such as digital PCR. Therefore, there is a critical need for sample preparation that can enrich low-abundance pathogens from complex and large-volume samples. This study develops an efficient electrostatic microfiltration (EM)-based sample preparation technique capable of processing ultra-large-volume (≥500 mL) samples at high throughput (≥10 mL min⁻¹). This approach achieves a significant enrichment (&gt;8000×) of extremely-low-abundance pathogens (down to a level of 0.02 CFU mL⁻¹, i.e., 10 CFU in 500 mL). Furthermore, EM-enabled sample preparation facilitates digital amplification techniques sensitively detecting broad pathogens, including bacteria, fungi, and viruses from various samples, in a rapid (≤3 h) sample-to-result workflow. Notably, the operational ease, portability, and compatibility/integrability with various downstream detection platforms highlight its great potential for widespread applications across diverse settings.
</description>
<pubDate>Tue, 10 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157512</guid>
<dc:date>2024-09-10T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis of α-methylene-δ-valerolactone and its selective polymerization from a product mixture for concurrent separation and polymer production</title>
<link>https://hdl.handle.net/1721.1/157511</link>
<description>Synthesis of α-methylene-δ-valerolactone and its selective polymerization from a product mixture for concurrent separation and polymer production
Khechfe, Alexander A; Eckstrom, Francesca D; Chokkapu, Eswara Rao; Baston, Lucas A; Liu, Bowei; Chen, Eugene Y-X; Román-Leshkov, Yuriy
We report the continuous, gas-phase synthesis of α-methylene-δ-valerolactone (MVL) from δ-valerolactone (DVL) and formaldehyde (FA) over alkaline earth oxide catalysts. MgO, CaO, and BaO supported on silica (∼5 wt%) were active for MVL production (613 K, 0.4 kPa DVL, 1.2 kPa FA, 101 kPa total pressure). CaO and BaO showed 90% and 83% selectivity to MVL at ∼60% DVL conversion, respectively. Decreasing contact times improved MVL selectivity for all three catalysts, achieving near-quantitative selectivity at DVL conversions &lt;40% with CaO. Further studies with CaO indicated that increasing the FA partial pressure for a given DVL partial pressure negligibly changed conversion while maintaining high selectivity; however, increasing the reaction temperature generally resulted in lower MVL selectivity. Deactivation and carbon loss were attributed to non-volatile compound formation from series and parallel reactions that consume MVL and DVL and poison the catalyst surface. These side reactions were more pronounced at higher temperatures and higher contact times. While slow deactivation poses a challenge, the catalyst could be fully regenerated by calcining at 773 K for 4 h under flowing air. As the product mixture of MVL and DVL is difficult to separate, we developed a selective polymerization strategy to convert either one or both monomers into valuable polymeric materials, thereby achieving efficient separation and concurrent polymer production. Using a model mixture of 30 wt% MVL in DVL, vinyl-addition polymerization converted MVL to the corresponding vinyl polymer P(MVL)VAP in 98% yield, while DVL was recovered in 96% yield by distillation. Alternatively, ring-opening polymerization of the same mixture resulted in a DVL/MVL copolyester and a separable vinyl homopolymer P(MVL)VAP.
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157511</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplicative resonant enhancement of chemical detection</title>
<link>https://hdl.handle.net/1721.1/157510</link>
<description>Multiplicative resonant enhancement of chemical detection
Ma, Wenchao; Pestourie, Raphaël; Lin, Zin; Aguirre-Soto, Alan; Sikes, Hadley D.; Johnson, Steven G.
Optical resonances can increase the sensitivity of measurements to material perturbations and also accelerate photochemical reactions. Here, we show that these two effects can be combined multiplicatively, to enhance detection via weak or low-concentration photochemical reactions far beyond what could previously be attained. For an optical resonance with quality factor &#119876;, the sensitivity of our detection scheme is enhanced by ∼&#119876;² (where ∼ denotes approximate proportionality), as demonstrated by both theoretical arguments and numerical simulations of a simple optical-grating resonance coupled with reaction-diffusion equations. Such an approach opens a door to further improvements by careful design of the resonance: even a three-parameter optimization of the grating resonance yields an additional ≈7× improvement.
</description>
<pubDate>Mon, 04 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157510</guid>
<dc:date>2024-11-04T00:00:00Z</dc:date>
</item>
<item>
<title>Shear annealing of a self-interacting sheet</title>
<link>https://hdl.handle.net/1721.1/157509</link>
<description>Shear annealing of a self-interacting sheet
Funkenbusch, William T; Silmore, Kevin S; Doyle, Patrick S
2D materials such as graphene, graphene oxide, transition metal dichalcogenides, and 2D polymers have unique properties which allow them to be used in many applications from electronics to energy to biotechnology. Producing and applying these materials often involves solution processing. Previous computational studies have observed 2D sheets in shear and extensional flows, but have focused on steady flows, even though the dynamics of these materials might exhibit hysteresis. In this work, we study 2D sheets with short-ranged attractive interactions under time-varying shear. We show that, even with relatively simple protocols, the properties of sheet suspensions can be tuned.
</description>
<pubDate>Wed, 11 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157509</guid>
<dc:date>2024-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Controlling the thermally-driven crystallization of DNA-coated nanoparticles with formamide</title>
<link>https://hdl.handle.net/1721.1/157508</link>
<description>Controlling the thermally-driven crystallization of DNA-coated nanoparticles with formamide
Hueckel, Theodore; Woo, Seungyeon; Macfarlane, Robert J
DNA-coated nanoparticles, also known as programmable atom equivalents (PAEs), facilitate the construction of materials with nanoscopic precision. Thermal annealing plays a pivotal role by controlling DNA hybridization kinetics and thermodynamics, which ensures the formation of intended structures. While various design handles such as particle size, DNA design, and salt concentration influence the stability of the DNA duplexes linking PAEs in a lattice, their influence on the system's melting temperature (Tm) often follows complicated trends that make rational tuning of self-assembly challenging. In this work, the denaturant formamide is used to precisely tune the thermal response of PAEs. Our results reveal a clear and predictable trend in the PAEs’ response to formamide, enabling rational control over the Tm of a diverse set of PAE systems. Unlike adjustments made through alterations to PAE design or solution parameters such as ionic strength, formamide achieves its temperature shift without impacting the kinetics of assembly. As a result, PAEs can be rapidly crystallized at ambient temperatures, producing superlattices with similar quality to PAE crystals assembled through standard protocols that use higher temperatures. This study therefore positions formamide as a useful tool for enhancing the synthesis of complex nanostructures under mild conditions.
</description>
<pubDate>Wed, 28 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157508</guid>
<dc:date>2024-08-28T00:00:00Z</dc:date>
</item>
<item>
<title>Pentacyclic fused diborepinium ions with carbene- and carbone-mediated deep-blue to red emission</title>
<link>https://hdl.handle.net/1721.1/157507</link>
<description>Pentacyclic fused diborepinium ions with carbene- and carbone-mediated deep-blue to red emission
Hollister, Kimberly K; Molino, Andrew; Le, VuongVy V; Jones, Nula; Smith, Wyatt J; Müller, Peter; Dickie, Diane A; Wilson, David JD; Gilliard, Robert J
Designing molecules that can undergo late-stage modifications resulting in specific optical properties is useful for developing structure-function trends in materials, which ultimately advance optoelectronic applications. Herein, we report a series of fused diborepinium ions stabilized by carbene and carbone ligands (diamino-N-heterocyclic carbenes, cyclic(alkyl)(amino) carbenes, carbodicarbenes, and carbodiphosphoranes), including a detailed bonding analysis. These are the first structurally confirmed examples of diborepin dications, and we detail how distortions in the core of the pentacyclic fused system impact aromaticity, stability, and light-emitting properties. Using the same fused diborepin scaffold, coordinating ligands were used to dramatically shift the emission profile, which exhibits colors ranging from blue to red (358–643 nm). Notably, these diborepinium ions access expanded regions of the visible spectrum compared to known examples of borepins, with quantum yields up to 60%. Carbones were determined to be superior stabilizing ligands, resulting in improved stability in the solution and solid states. Density functional theory was used to provide insight into the bonding as well as the specific transitions that result in the observed photophysical properties.
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157507</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Beyond MELD Score: Association of Machine Learning-derived CT Body Composition with 90-Day Mortality Post Transjugular Intrahepatic Portosystemic Shunt Placement</title>
<link>https://hdl.handle.net/1721.1/157506</link>
<description>Beyond MELD Score: Association of Machine Learning-derived CT Body Composition with 90-Day Mortality Post Transjugular Intrahepatic Portosystemic Shunt Placement
Elhakim, Tarig; Mansur, Arian; Kondo, Jordan; Omar, Omar M. F.; Ahmed, Khalid; Tabari, Azadeh; Brea, Allison; Ndakwah, Gabriel; Iqbal, Shams; Allegretti, Andrew S.; Fintelmann, Florian J.; Wehrenberg-Klee, Eric; Bridge, Christopher; Daye, Dania
Purpose To determine the association of machine learning-derived CT body composition and 90-day mortality after transjugular intrahepatic portosystemic shunt (TIPS) and to assess its predictive performance as a complement to Model for End-Stage Liver Disease (MELD) score for mortality risk prediction. Materials and Methods This retrospective multi-center cohort study included patients who underwent TIPS from 1995 to 2018 and had a contrast-enhanced CT abdomen within 9 months prior to TIPS and at least 90 days of post-procedural clinical follow-up. A machine learning algorithm extracted CT body composition metrics at L3 vertebral level including skeletal muscle area (SMA), skeletal muscle index (SMI), skeletal muscle density (SMD), subcutaneous fat area (SFA), subcutaneous fat index (SFI), visceral fat area (VFA), visceral fat index (VFI), and visceral-to-subcutaneous fat ratio (VSR). Independent t-tests, logistic regression models, and ROC curve analysis were utilized to assess the association of those metrics in predicting 90-day mortality. Results A total of 122 patients (58 ± 11.8, 68% male) were included. Patients who died within 90 days of TIPS had significantly higher MELD (18.9 vs. 11.9, p &lt; 0.001) and lower SMA (123 vs. 144.5, p = 0.002), SMI (43.7 vs. 50.5, p = 0.03), SFA (122.4 vs. 190.8, p = 0.009), SFI (44.2 vs. 66.7, p = 0.04), VFA (105.5 vs. 171.2, p = 0.003), and VFI (35.7 vs. 57.5, p = 0.02) compared to those who survived past 90 days. There were no significant associations between 90-day mortality and BMI (26 vs. 27.1, p = 0.63), SMD (30.1 vs. 31.7, p = 0.44), or VSR (0.97 vs. 1.03, p = 0.66). Multivariable logistic regression showed that SMA (OR = 0.97, p &lt; 0.01), SMI (OR = 0.94, p = 0.03), SFA (OR = 0.99, p = 0.01), and VFA (OR = 0.99, p = 0.02) remained significant predictors of 90-day mortality when adjusted for MELD score. 
ROC curve analysis demonstrated that including SMA, SFA, and VFA improves the predictive power of MELD score in predicting 90-day mortality after TIPS (AUC, 0.84; 95% CI: 0.77, 0.91; p = 0.02). Conclusion CT body composition is predictive of 90-day mortality after TIPS and improves the predictive performance of MELD score. Level of Evidence: Level 3, Retrospective multi-center cohort study.
</description>
<pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157506</guid>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</item>
<item>
<title>Diauxic lags explain unexpected coexistence in multi‐resource environments</title>
<link>https://hdl.handle.net/1721.1/157505</link>
<description>Diauxic lags explain unexpected coexistence in multi‐resource environments
Bloxham, Blox; Lee, Hyunseok; Gore, Jeff
How the coexistence of species is affected by the presence of multiple resources is a major question in microbial ecology. We experimentally demonstrate that differences in diauxic lags, which occur as species deplete their own environments and adapt their metabolisms, allow slow‐growing microbes to stably coexist with faster‐growing species in multi‐resource environments despite being excluded in single‐resource environments. In our focal example, an Acinetobacter species (Aci2) competitively excludes Pseudomonas aurantiaca (Pa) on alanine and on glutamate. However, they coexist on the combination of both resources. Experiments reveal that Aci2 grows faster but Pa has shorter diauxic lags. We establish a tradeoff between Aci2’s fast growth and Pa’s short lags as their mechanism for coexistence. We model this tradeoff to accurately predict how environmental changes affect community composition. We extend our work by surveying a large set of competitions and observe coexistence nearly four times as frequently when the slow‐grower is the fast‐switcher. Our work illustrates a simple mechanism, based entirely on supplied‐resource growth dynamics, for the emergence of multi‐resource coexistence.
</description>
<pubDate>Wed, 04 May 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157505</guid>
<dc:date>2022-05-04T00:00:00Z</dc:date>
</item>
<item>
<title>A metabolic map of the DNA damage response identifies PRDX1 in the control of nuclear ROS scavenging and aspartate availability</title>
<link>https://hdl.handle.net/1721.1/157504</link>
<description>A metabolic map of the DNA damage response identifies PRDX1 in the control of nuclear ROS scavenging and aspartate availability
Moretton, Amandine; Kourtis, Savvas; Gañez Zapater, Antoni; Calabrò, Chiara; Espinar Calvo, Maria L.; Fontaine, Frédéric; Darai, Evangelia; Abad Cortel, Etna; Block, Samuel; Pascual‐Reguant, Laura; Pardo‐Lorente, Natalia; Ghose, Ritobrata; Vander Heiden, Matthew G.
While cellular metabolism impacts the DNA damage response, a systematic understanding of the metabolic requirements that are crucial for DNA damage repair has yet to be achieved. Here, we investigate the metabolic enzymes and processes that are essential for the resolution of DNA damage. By integrating functional genomics with chromatin proteomics and metabolomics, we provide a detailed description of the interplay between cellular metabolism and the DNA damage response. Further analysis identified that Peroxiredoxin 1 (PRDX1) contributes to DNA damage repair. During the DNA damage response, PRDX1 translocates to the nucleus where it reduces DNA damage-induced nuclear reactive oxygen species. Moreover, PRDX1 loss lowers aspartate availability, which is required for the DNA damage-induced upregulation of de novo nucleotide synthesis. In the absence of PRDX1, cells accumulate replication stress and DNA damage, leading to proliferation defects that are exacerbated in the presence of etoposide, thus revealing a role for PRDX1 as a DNA damage surveillance factor.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157504</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Temporal perturbation of ERK dynamics reveals network architecture of FGF2/MAPK signaling</title>
<link>https://hdl.handle.net/1721.1/157503</link>
<description>Temporal perturbation of ERK dynamics reveals network architecture of FGF2/MAPK signaling
Blum, Yannick; Mikelson, Jan; Dobrzyński, Maciej; Ryu, Hyunryul; Jacques, Marc‐Antoine; Jeon, Noo L.; Khammash, Mustafa; Pertz, Olivier
Stimulation of PC‐12 cells with epidermal (EGF) versus nerve (NGF) growth factors (GFs) biases the distribution between transient and sustained single‐cell ERK activity states, and between proliferation and differentiation fates within a cell population. We report that fibroblast GF (FGF2) evokes a distinct behavior that consists of a gradually changing population distribution of transient/sustained ERK signaling states in response to increasing input doses. Temporally controlled GF perturbations of MAPK signaling dynamics applied using microfluidics reveal that this wider mix of ERK states emerges through the combination of an intracellular feedback and competition of FGF2 binding to FGF receptors (FGFRs) and heparan sulfate proteoglycan (HSPG) co‐receptors. We show that the latter experimental modality is instructive for model selection using Bayesian parameter inference. Our results provide novel insights into how different receptor tyrosine kinase (RTK) systems differentially wire the MAPK network to fine‐tune fate decisions at the cell population level.
</description>
<pubDate>Tue, 19 Nov 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157503</guid>
<dc:date>2019-11-19T00:00:00Z</dc:date>
</item>
<item>
<title>A self‐propagating, barcoded transposon system for the dynamic rewiring of genomic networks</title>
<link>https://hdl.handle.net/1721.1/157502</link>
<description>A self‐propagating, barcoded transposon system for the dynamic rewiring of genomic networks
English, Max A.; Alcantar, Miguel A.; Collins, James J.
In bacteria, natural transposon mobilization can drive adaptive genomic rearrangements. Here, we build on this capability and develop an inducible, self‐propagating transposon platform for continuous genome‐wide mutagenesis and the dynamic rewiring of gene networks in bacteria. We first use the platform to study the impact of transposon functionalization on the evolution of parallel Escherichia coli populations toward diverse carbon source utilization and antibiotic resistance phenotypes. We then develop a modular, combinatorial assembly pipeline for the functionalization of transposons with synthetic or endogenous gene regulatory elements (e.g., inducible promoters) as well as DNA barcodes. We compare parallel evolutions across alternating carbon sources and demonstrate the emergence of inducible, multigenic phenotypes and the ease with which barcoded transposons can be tracked longitudinally to identify the causative rewiring of gene networks. This work establishes a synthetic transposon platform that can be used to optimize strains for industrial and therapeutic applications, for example, by rewiring gene networks to improve growth on diverse feedstocks, as well as help address fundamental questions about the dynamic processes that have sculpted extant gene networks.
</description>
<pubDate>Mon, 27 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157502</guid>
<dc:date>2023-03-27T00:00:00Z</dc:date>
</item>
<item>
<title>Aneuploid senescent cells activate NF-κB to promote their immune clearance by NK cells</title>
<link>https://hdl.handle.net/1721.1/157501</link>
<description>Aneuploid senescent cells activate NF-κB to promote their immune clearance by NK cells
Wang, Ruoxi W.; Viganò, Sonia; Ben-David, Uri; Amon, Angelika; Santaguida, Stefano
The immune system plays a major role in the protection against cancer. Identifying and characterizing the pathways mediating this immune surveillance are thus critical for understanding how cancer cells are recognized and eliminated. Aneuploidy is a hallmark of cancer, and we previously found that untransformed cells that had undergone senescence due to highly abnormal karyotypes are eliminated by natural killer (NK) cells in vitro. However, the mechanisms underlying this process remained elusive. Here, using an in vitro NK cell killing system, we show that non‐cell‐autonomous mechanisms in aneuploid cells predominantly mediate their clearance by NK cells. Our data indicate that in untransformed aneuploid cells, NF‐κB signaling upregulation is central to elicit this immune response. Inactivating NF‐κB abolishes NK cell‐mediated clearance of untransformed aneuploid cells. In cancer cell lines, NF‐κB upregulation also correlates with the degree of aneuploidy. However, such upregulation in cancer cells is not sufficient to trigger NK cell‐mediated clearance, suggesting that additional mechanisms might be at play during cancer evolution to counteract NF‐κB‐mediated immunogenicity.
</description>
<pubDate>Tue, 08 Jun 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157501</guid>
<dc:date>2021-06-08T00:00:00Z</dc:date>
</item>
<item>
<title>Hypoxia and loss of PHD2 inactivate stromal fibroblasts to decrease tumour stiffness and metastasis</title>
<link>https://hdl.handle.net/1721.1/157500</link>
<description>Hypoxia and loss of PHD2 inactivate stromal fibroblasts to decrease tumour stiffness and metastasis
Madsen, Chris D.; Pedersen, Jesper T.; Venning, Freja A.; Singh, Lukram B.; Moeendarbary, Emad; Charras, Guillaume; Cox, Thomas R.; Sahai, Erik; Erler, Janine T.
Cancer‐associated fibroblasts (CAFs) interact with tumour cells and promote growth and metastasis. Here, we show that CAF activation is reversible: chronic hypoxia deactivates CAFs, resulting in the loss of contractile force, reduced remodelling of the surrounding extracellular matrix and, ultimately, impaired CAF‐mediated cancer cell invasion. Hypoxia inhibits prolyl hydroxylase domain protein 2 (PHD2), leading to hypoxia‐inducible factor (HIF)‐1α stabilisation, reduced expression of αSMA and periostin, and reduced myosin II activity. Loss of PHD2 in CAFs phenocopies the effects of hypoxia, which can be prevented by simultaneous depletion of HIF‐1α. Treatment with the PHD inhibitor DMOG in an orthotopic breast cancer model significantly decreases spontaneous metastases to the lungs and liver, associated with decreased tumour stiffness and fibroblast activation. PHD2 depletion in CAFs co‐injected with tumour cells similarly prevents CAF‐induced metastasis to lungs and liver. Our data argue that reversion of CAFs towards a less active state is possible and could have important clinical implications.
</description>
<pubDate>Mon, 31 Aug 2015 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157500</guid>
<dc:date>2015-08-31T00:00:00Z</dc:date>
</item>
<item>
<title>Contact Tracing Technologies: Methods and trade-offs</title>
<link>https://hdl.handle.net/1721.1/157499</link>
<description>Contact Tracing Technologies: Methods and trade-offs
Berke, Alex; Larson, Kent
Many organizations are working on technology for contact tracing, and the landscape is changing rapidly. This is an overview of existing contact tracing technologies, along with different methods and trade-offs to consider when building new ones.
</description>
<pubDate>Thu, 14 May 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157499</guid>
<dc:date>2020-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Urban site characterization using DAS dark fibers on the MIT campus in Cambridge, Massachusetts</title>
<link>https://hdl.handle.net/1721.1/157498</link>
<description>Urban site characterization using DAS dark fibers on the MIT campus in Cambridge, Massachusetts
Chang, Hilary; Nakata, Nori
Telecommunication dark fibers with distributed acoustic sensing (DAS) are a useful survey tool for site characterization in urban environments. In this paper, we introduce our five-day student-led DAS experiment using dark fibers at the Massachusetts Institute of Technology campus in the city of Cambridge. The campus has been identified as an area that is highly susceptible to seismic hazards due to subsurface structure and soil properties. The experiment included survey planning, data acquisition, data analysis, subsurface characterization, and site-response estimations. Rayleigh waves collected by dark fibers in the urban environment are mostly from human activities and contain abundant higher-mode energies. We invert the phase velocity dispersions to resolve the shear-wave velocity (VS) in the top 120 m of the subsurface. The VS profiles show low VS (0.1–0.3 km/s) corresponding to unconsolidated materials such as artificial fills and clays overlying a hard bedrock (1.5–1.8 km/s). The depth to bedrock is 75–95 m on the west campus. The site near the waterfront has a lower VS and deeper bedrock. The 1D site-response modeling for shear waves suggests that the fundamental resonance frequency is at 0.6 and 1 Hz, with a sediment-to-bedrock amplitude ratio of 6–7. This should be considered in building design to mitigate seismic hazards. Our results agree with previous studies and can bridge the gap between measurements at nearby sites.
</description>
<pubDate>Fri, 01 Nov 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157498</guid>
<dc:date>2024-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Biased competition between Lgr5 intestinal stem cells driven by oncogenic mutation induces clonal expansion</title>
<link>https://hdl.handle.net/1721.1/157497</link>
<description>Biased competition between Lgr5 intestinal stem cells driven by oncogenic mutation induces clonal expansion
Snippert, Hugo J.; Schepers, Arnout G.; van Es, Johan H.; Simons, Benjamin D.; Clevers, Hans
The concept of ‘field cancerization’ describes the clonal expansion of genetically altered, but morphologically normal cells that predisposes a tissue to cancer development. Here, we demonstrate that biased stem cell competition in the mouse small intestine can initiate the expansion of such clones. We quantitatively analyze how the activation of oncogenic K‐ras in individual Lgr5+ stem cells accelerates their cell division rate and creates a biased drift towards crypt clonality. K‐ras mutant crypts then clonally expand within the epithelium through enhanced crypt fission, which distributes the existing Paneth cell niche over the two new crypts. Thus, an unequal competition between wild‐type and mutant intestinal stem cells initiates a biased drift that leads to the clonal expansion of crypts carrying oncogenic mutations.
</description>
<pubDate>Mon, 16 Dec 2013 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157497</guid>
<dc:date>2013-12-16T00:00:00Z</dc:date>
</item>
<item>
<title>Evidence for existence of an apoptosis‐inducing BH3‐only protein, sayonara, in Drosophila</title>
<link>https://hdl.handle.net/1721.1/157496</link>
<description>Evidence for existence of an apoptosis‐inducing BH3‐only protein, sayonara, in Drosophila
Ikegawa, Yuko; Combet, Christophe; Groussin, Mathieu; Navratil, Vincent; Safar‐Remali, Sabrina; Shiota, Takuya; Aouacheria, Abdel; Yoo, Sa K.
Cells need to sense stresses to initiate the execution of the dormant cell death program. Since the discovery of the first BH3‐only protein Bad, BH3‐only proteins have been recognized as indispensable stress sensors that induce apoptosis. BH3‐only proteins have so far not been identified in Drosophila despite their importance in other organisms. Here, we identify the first Drosophila BH3‐only protein and name it sayonara. Sayonara induces apoptosis in a BH3 motif‐dependent manner and interacts genetically and biochemically with the BCL‐2 homologous proteins, Buffy and Debcl. There is a positive feedback loop between Sayonara‐mediated caspase activation and autophagy. The BH3 motif of sayonara phylogenetically appeared at the time of the ancestral gene duplication that led to the formation of Buffy and Debcl in the dipteran lineage. To our knowledge, this is the first identification of a bona fide BH3‐only protein in Drosophila, thus providing a unique example of how cell death mechanisms can evolve both through time and across taxa.
</description>
<pubDate>Thu, 02 Feb 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157496</guid>
<dc:date>2023-02-02T00:00:00Z</dc:date>
</item>
<item>
<title>Multiplexed CRISPR/CAS9‐mediated engineering of pre‐clinical mouse models bearing native human B cell receptors</title>
<link>https://hdl.handle.net/1721.1/157495</link>
<description>Multiplexed CRISPR/CAS9‐mediated engineering of pre‐clinical mouse models bearing native human B cell receptors
Wang, Xuesong; Ray, Rashmi; Kratochvil, Sven; Melzi, Eleonora; Lin, Ying‐Cing; Giguere, Sophie; Xu, Liling; Warner, John; Cheon, Diane; Liguori, Alessia; Groschel, Bettina; Phelps, Nicole
B‐cell receptor (BCR) knock‐in (KI) mouse models play an important role in vaccine development and fundamental immunological studies. However, the time required to generate them poses a bottleneck. Here we report a one‐step CRISPR/Cas9 KI methodology to combine the insertion of human germline immunoglobulin heavy and light chains at their endogenous loci in mice. We validate this technology with the rapid generation of three BCR KI lines expressing native human precursors, instead of computationally inferred germline sequences, to HIV broadly neutralizing antibodies. We demonstrate that B cells from these mice are fully functional: upon transfer to congenic, wild type mice at controlled frequencies, such B cells can be primed by eOD‐GT8 60mer, a germline‐targeting immunogen currently in clinical trials, recruited to germinal centers, secrete class‐switched antibodies, undergo somatic hypermutation, and differentiate into memory B cells. KI mice expressing functional human BCRs promise to accelerate the development of vaccines for HIV and other infectious diseases.
</description>
<pubDate>Tue, 01 Dec 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157495</guid>
<dc:date>2020-12-01T00:00:00Z</dc:date>
</item>
<item>
<title>CD1d‐mediated lipid presentation by CD11c+ cells regulates intestinal homeostasis</title>
<link>https://hdl.handle.net/1721.1/157494</link>
<description>CD1d‐mediated lipid presentation by CD11c+ cells regulates intestinal homeostasis
Sáez de Guinoa, Julia; Jimeno, Rebeca; Gaya, Mauro; Kipling, David; Garzón, María J.; Dunn‐Walters, Deborah; Ubeda, Carles; Barral, Patricia
Intestinal homeostasis relies on a continuous dialogue between the commensal bacteria and the immune system. Natural killer T (NKT) cells, which recognize CD1d‐restricted microbial lipids and self‐lipids, contribute to the regulation of mucosal immunity, yet the mechanisms underlying their functions remain poorly understood. Here, we demonstrate that NKT cells respond to intestinal lipids and that CD11c+ cells (including dendritic cells (DCs) and macrophages) are essential to mediate lipid presentation within the gut, ultimately controlling intestinal NKT cell homeostasis and activation. Conversely, CD1d and NKT cells participate in the control of the intestinal bacteria composition and compartmentalization, in the regulation of the IgA repertoire, and in the induction of regulatory T cells within the gut. These changes in intestinal homeostasis require CD1d expression on DC/macrophage populations, as mice with conditional deletion of CD1d on CD11c+ cells exhibit dysbiosis and altered immune homeostasis. These results unveil the importance of CD11c+ cells in controlling lipid‐dependent immunity in the intestinal compartment and reveal an NKT cell–DC crosstalk as a key mechanism for the regulation of gut homeostasis.
</description>
<pubDate>Mon, 29 Jan 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157494</guid>
<dc:date>2018-01-29T00:00:00Z</dc:date>
</item>
<item>
<title>Thermal proteome profiling of breast cancer cells reveals proteasomal activation by CDK4/6 inhibitor palbociclib</title>
<link>https://hdl.handle.net/1721.1/157493</link>
<description>Thermal proteome profiling of breast cancer cells reveals proteasomal activation by CDK4/6 inhibitor palbociclib
Miettinen, Teemu P.; Peltier, Julien; Härtlova, Anetta; Gierliński, Marek; Jansen, Valerie M.; Trost, Matthias; Björklund, Mikael
Palbociclib is a CDK4/6 inhibitor approved for metastatic estrogen receptor‐positive breast cancer. In addition to G1 cell cycle arrest, palbociclib treatment results in cell senescence, a phenotype that is not readily explained by CDK4/6 inhibition. In order to identify a molecular mechanism responsible for palbociclib‐induced senescence, we performed thermal proteome profiling of MCF7 breast cancer cells. In addition to affecting known CDK4/6 targets, palbociclib induces a thermal stabilization of the 20S proteasome, despite not directly binding to it. We further show that palbociclib treatment increases proteasome activity independently of the ubiquitin pathway. This leads to cellular senescence, which can be counteracted by proteasome inhibitors. Palbociclib‐induced proteasome activation and senescence is mediated by reduced proteasomal association of ECM29. Loss of ECM29 activates the proteasome, blocks cell proliferation, and induces a senescence‐like phenotype. Finally, we find that ECM29 mRNA levels are predictive of relapse‐free survival in breast cancer patients treated with endocrine therapy. In conclusion, thermal proteome profiling identifies the proteasome and ECM29 protein as mediators of palbociclib activity in breast cancer cells.
</description>
<pubDate>Wed, 18 Apr 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157493</guid>
<dc:date>2018-04-18T00:00:00Z</dc:date>
</item>
<item>
<title>Sulfide Route to Chromium–Nickel–Molybdenum Ferroalloys for Stainless Steel Production</title>
<link>https://hdl.handle.net/1721.1/157492</link>
<description>Sulfide Route to Chromium–Nickel–Molybdenum Ferroalloys for Stainless Steel Production
Stinn, Caspar; Allanore, Antoine
New methods of materials separation and metal production utilizing sulfide chemistries may support a paradigm shift in sustainable metallurgy. We leverage sulfidation with elemental sulfur, aluminothermic reduction, and slag refining to obtain a chromium–nickel–molybdenum ferroalloy and stainless steel using a sulfide-based route without direct greenhouse gas emissions. The absence of carbothermic reduction from the mineral, concentrate, and matte feedstocks tried herein indicates that argon-oxygen-decarburization may no longer be necessary to refine stainless steel products.
</description>
<pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157492</guid>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</item>
<item>
<title>Extracting structured data from organic synthesis procedures using a fine-tuned large language model</title>
<link>https://hdl.handle.net/1721.1/157469</link>
<description>Extracting structured data from organic synthesis procedures using a fine-tuned large language model
Ai, Qianxiang; Meng, Fanwang; Shi, Jiale; Pelkie, Brenden; Coley, Connor W
The popularity of data-driven approaches and machine learning (ML) techniques in the field of organic chemistry and its various subfields has increased the value of structured reaction data. Most data in chemistry is represented by unstructured text, and despite the vastness of the organic chemistry literature (papers, patents), conversion from unstructured text to structured data remains a largely manual endeavor. Software tools for this task would facilitate downstream applications such as reaction prediction and condition recommendation. In this study, we fine-tune a large language model (LLM) to extract reaction information from organic synthesis procedure text into structured data following the Open Reaction Database (ORD) schema, a comprehensive data structure designed for organic reactions. The fine-tuned model produces syntactically correct ORD records with an average accuracy of 91.25% for ORD “messages” (e.g., full compound, workups, or condition definitions) and 92.25% for individual data fields (e.g., compound identifiers, mass quantities), with the ability to recognize compound-referencing tokens and to infer reaction roles. We investigate its failure modes and evaluate performance on specific subtasks such as reaction role classification.
</description>
<pubDate>Wed, 11 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157469</guid>
<dc:date>2024-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Imidazolium-based ionic liquids support biosimilar flavin electron transfer</title>
<link>https://hdl.handle.net/1721.1/157468</link>
<description>Imidazolium-based ionic liquids support biosimilar flavin electron transfer
Anderson, Grace I; Agee, Alec A; Furst, Ariel L
Understanding electron transport with electroactive microbes is key to engineering effective and scalable bio-electrochemical technologies. Much of this electron transfer occurs through small-molecule flavin mediators that perform one-electron transfers in abiotic systems but concerted two-electron transfer in biological systems, rendering abiotic systems less efficient. To boost efficiency, the principles guiding flavin electron transfer must be elucidated, necessitating a tunable system. Ionic liquids (ILs) offer such a platform due to their chemical diversity. In particular, imidazolium-containing ILs that resemble the amino acid histidine are bio-similar electrolytes that enable the study of flavin electron transfer. Using the model IL 1-ethyl-3-methylimidazolium tetrafluoroborate ([Emim][BF4]), we observe concerted two-electron transfer between flavin mononucleotide and an unmodified glassy carbon electrode surface, while a one-electron transfer occurs in standard inorganic electrolytes. This work demonstrates the power of ILs to enable the mechanistic study of biological electron transfer, providing critical guidelines for improving electrochemical technologies based on these biological properties.
</description>
<pubDate>Tue, 27 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157468</guid>
<dc:date>2024-08-27T00:00:00Z</dc:date>
</item>
<item>
<title>Synthesis and degradation of FtsZ quantitatively predict the first cell division in starved bacteria</title>
<link>https://hdl.handle.net/1721.1/157466</link>
<description>Synthesis and degradation of FtsZ quantitatively predict the first cell division in starved bacteria
Sekar, Karthik; Rusconi, Roberto; Sauls, John T.; Fuhrer, Tobias; Noor, Elad; Nguyen, Jen; Fernandez, Vicente I.; Buffing, Marieke F.; Berney, Michael; Jun, Suckjoon; Stocker, Roman; Sauer, Uwe
In natural environments, microbes are typically non‐dividing and gauge when nutrients permit division. Current models are phenomenological and specific to nutrient‐rich, exponentially growing cells, and thus cannot predict the first division under limiting nutrient availability. To assess this regime, we supplied starving Escherichia coli with glucose pulses at increasing frequencies. Real‐time metabolomics and microfluidic single‐cell microscopy revealed unexpected, rapid protein and nucleic acid synthesis already from minuscule glucose pulses in non‐dividing cells. Additionally, the lag time to first division shortened as pulsing frequency increased. We pinpointed division timing and dependence on nutrient frequency to the changing abundance of the division protein FtsZ. A dynamic, mechanistic model quantitatively relates lag time to FtsZ synthesis from nutrient pulses and FtsZ protease‐dependent degradation. Lag time changed in a model‐congruent manner when we experimentally modulated the synthesis or degradation of FtsZ. Thus, limiting abundance of FtsZ can quantitatively predict timing of the first cell division.
</description>
<pubDate>Mon, 05 Nov 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157466</guid>
<dc:date>2018-11-05T00:00:00Z</dc:date>
</item>
<item>
<title>Coherent spin-control of S = 1 vanadium and molybdenum complexes</title>
<link>https://hdl.handle.net/1721.1/157463</link>
<description>Coherent spin-control of S = 1 vanadium and molybdenum complexes
Laorenza, Daniel W; Mullin, Kathleen R; Weiss, Leah R; Bayliss, Sam L; Deb, Pratiti; Awschalom, David D; Rondinelli, James M; Freedman, Danna E
The burgeoning field of quantum sensing hinges on the creation and control of quantum bits. To date, the most well-studied quantum sensors are optically active, paramagnetic defects residing in crystalline hosts. We previously developed analogous optically addressable molecules featuring a ground-state spin-triplet centered on a Cr4+ ion with an optical-spin interface. In this work, we evaluate isovalent V3+ and Mo4+ congeners, which offer unique advantages, such as an intrinsic nuclear spin for V3+ or larger spin–orbit coupling for Mo4+, as optically addressable spin systems. We assess the ground-state spin structure and dynamics for each complex, illustrating that all of these spin-triplet species can be coherently controlled. However, unlike the Cr4+ derivatives, these pseudo-tetrahedral V3+ and Mo4+ complexes exhibit no measurable emission. Coupling absorption spectroscopy with computational predictions, we investigate why these complexes exhibit no detectable photoluminescence. These cumulative results suggest that the design of future V3+ complexes should target pseudo-tetrahedral symmetries using bidentate or tridentate ligand scaffolds, ideally with deuterated or fluorinated ligand environments. We also suggest that spin-triplet Mo4+, and by extension W4+, complexes may not be suitable candidates for optically addressable qubit systems due to their low-energy spin-singlet states. By understanding the failures and successes of these systems, we outline additional design features for optically addressable V- or Mo-based molecules to expand the library of tailor-made quantum sensors.
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157463</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic transformation of active sites in energy and environmental catalysis</title>
<link>https://hdl.handle.net/1721.1/157462</link>
<description>Dynamic transformation of active sites in energy and environmental catalysis
Zhang, Hao; Chen, Lei; Dong, Feng; Lu, Zhiwen; Lv, Enmin; Dong, Xinglong; Li, Huanxin; Yuan, Zhongyong; Peng, Xinwen; Yang, Shihe; Qiu, Jieshan; Guo, Zhengxiao; Wen, Zhenhai
Active sites play a pivotal role in photo/electrocatalysis, particularly in the transition from fossil fuels to clean, efficient and renewable energy sources. Precise identification of catalyst active sites and understanding of their dynamic transformation are crucial for engineering the activity, selectivity and stability of a catalyst for a specific reaction. Herein, we provide an in-depth and interdisciplinary overview of the recent advancements in dynamic transformation of active sites in photo/electrocatalysis. Firstly, we explore the underlying principles of dynamic reconstruction, focusing on dynamic transformations in surface structure, composition and properties. Subsequently, advanced operando/in situ characterization for dynamic transformation is summarized, to provide mechanistic insights for the identification of such processes. To improve catalytic performance, we comparatively discuss the triggers of the dynamic process and the corresponding reaction mechanisms. Finally, we present an insightful analysis of the challenges and the future prospects for the applications of dynamic transformation of active sites in photo/electrocatalysis.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157462</guid>
</item>
<item>
<title>Structure and dynamics of the proton-selective histidine and the gating tryptophan in an inward rectifying hybrid influenza B and A virus M2 proton channel</title>
<link>https://hdl.handle.net/1721.1/157461</link>
<description>Structure and dynamics of the proton-selective histidine and the gating tryptophan in an inward rectifying hybrid influenza B and A virus M2 proton channel
Pankratova, Yanina; McKay, Matthew J; Ma, Chunlong; Tan, Haozhou; Wang, Jun; Hong, Mei
The M2 proteins of influenza A and B viruses form acid-activated proton channels that are essential for the virus lifecycle. Proton selectivity is achieved by a transmembrane (TM) histidine, whereas gating is achieved by a tryptophan residue. Although this functional apparatus is conserved between AM2 and BM2 channels, AM2 conducts protons exclusively inward whereas BM2 conducts protons in either direction depending on the pH gradient. Previous studies showed that mutations of D44 abolished inward rectification of AM2, suggesting that the tryptophan gate is destabilized. To elucidate how charged residues C-terminal to the tryptophan regulate channel gating, here we investigate the structure and dynamics of H19 and W23 in a BM2 mutant, GDR-BM2, in which three BM2 residues are mutated to the corresponding AM2 residues, S16G, G26D and H27R. Whole-cell electrophysiological data show that GDR-BM2 conducts protons with inward rectification, identical to wild-type (WT) AM2 but different from WT-BM2. Solid-state NMR 15N and 13C spectra of H19 indicate that the mutant BM2 channel contains higher populations of cationic histidine and neutral τ tautomers compared to WT-BM2 at acidic pH. Moreover, 19F NMR spectra of 5-19F-labeled W23 resolve three peaks at acidic pH, suggesting three tryptophan sidechain conformations. Comparison of these spectra with the tryptophan spectra of other M2 peptides suggests that these indole sidechain conformations arise from interactions with the C-terminal charged residues and with the N-terminal cationic histidine. Taken together, these solid-state NMR data show that inward rectification in M2 proton channels is accomplished by tryptophan interactions with charged residues on both its C-terminal and N-terminal sides. Gating of these M2 proton channels is thus accomplished by a multi-residue complex with finely tuned electrostatic and aromatic interactions.
</description>
<pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157461</guid>
<dc:date>2024-07-31T00:00:00Z</dc:date>
</item>
<item>
<title>Embedding human knowledge in material screening pipeline as filters to identify novel synthesizable inorganic materials</title>
<link>https://hdl.handle.net/1721.1/157460</link>
<description>Embedding human knowledge in material screening pipeline as filters to identify novel synthesizable inorganic materials
Das, Basita; Ji, Kangyu; Sheng, Fang; McCall, Kyle M; Buonassisi, Tonio
How might one embed a chemist's knowledge into an automated materials-discovery pipeline? In generative design for inorganic crystalline materials, generating candidate compounds is no longer a bottleneck – there are now synthetic datasets of millions of compounds. However, weeding out unsynthesizable or difficult-to-synthesize compounds remains an outstanding challenge. Post-generation “filters” have been proposed as a means of embedding human domain knowledge, either in the form of scientific laws or rules of thumb. Examples include charge neutrality, electronegativity balance, and energy above hull. Some filters are “hard” and some are “soft” — for example, it is difficult to envision creating a stable compound while violating the rule of charge neutrality; however, several compounds break the Hume-Rothery rules. It is therefore natural to wonder: can one compile a comprehensive list of “filters” that embed domain knowledge, adopt a principled approach to classifying them as either non-conditional or conditional “filters,” and envision a software environment to implement combinations of these in a systematic manner? In this commentary, we explore such questions concerning “filters” for the screening of novel inorganic compounds for synthesizability.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157460</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Skeletal myotubes expressing ALS mutant SOD1 induce pathogenic changes, impair mitochondrial axonal transport, and trigger motoneuron death</title>
<link>https://hdl.handle.net/1721.1/157459</link>
<description>Skeletal myotubes expressing ALS mutant SOD1 induce pathogenic changes, impair mitochondrial axonal transport, and trigger motoneuron death
Martínez, Pablo; Silva, Mónica; Abarzúa, Sebastián; Tevy, María F.; Jaimovich, Enrique; Constantine-Paton, Martha; Bustos, Fernando J.; van Zundert, Brigitte
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease characterized by the loss of motoneurons (MNs), and despite progress, there is no effective treatment. A large body of evidence shows that astrocytes expressing ALS-linked mutant proteins cause non-cell-autonomous toxicity of MNs. Although MNs innervate muscle fibers and ALS is characterized by the early disruption of the neuromuscular junction (NMJ) and axon degeneration, there are controversies about whether muscle contributes to non-cell-autonomous toxicity to MNs. In this study, we generated primary skeletal myotubes from myoblasts derived from ALS mice expressing human mutant SOD1G93A (termed hereafter mutSOD1). Characterization revealed that mutSOD1 skeletal myotubes display intrinsic phenotypic and functional differences compared to control myotubes generated from non-transgenic (NTg) littermates. Next, we analyzed whether ALS myotubes exert non-cell-autonomous toxicity to MNs. We report that conditioned media from mutSOD1 myotubes (mutSOD1-MCM), but not from control myotubes (NTg-MCM), induced robust death of primary MNs in mixed spinal cord cultures and compartmentalized microfluidic chambers. Our study further revealed that applying mutSOD1-MCM to the MN axonal side in microfluidic devices rapidly reduces mitochondrial axonal transport while increasing Ca2+ transients and reactive oxygen species (i.e., H2O2). These results indicate that soluble factor(s) released by mutSOD1 myotubes cause MN axonopathy that leads to lethal pathogenic changes.
</description>
<pubDate>Fri, 25 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157459</guid>
<dc:date>2024-10-25T00:00:00Z</dc:date>
</item>
<item>
<title>Behaviorally informed digital campaigns and their association with social media engagement and COVID-19 vaccine uptake in Belize</title>
<link>https://hdl.handle.net/1721.1/157458</link>
<description>Behaviorally informed digital campaigns and their association with social media engagement and COVID-19 vaccine uptake in Belize
Daga, Giuliana; Kossuth, Lajos; Boruchowicz, Cynthia; Lopez Boo, Florencia; Largaespada Beer, Natalia
Background Increasing vaccination coverage was key to curbing the COVID-19 pandemic globally. However, lack of trust in the vaccine and fear of side effects in regions like the Caribbean resulted in low uptake despite sufficient vaccine supply. Methods We conducted two correlational analyses and one experiment relating five sequential behaviorally informed Facebook campaigns to social media performance outcomes and district-level vaccination data. First, we ran multivariate linear regression models to estimate the mean differences between the campaigns in (i) social media performance (“Clicks” and “Engagement”) and (ii) COVID-19 vaccination uptake at the district level. “Clicks” were measured by the number of people who clicked on the respective Facebook advert and visited the official vaccination site. “Engagements” were the number of people interacting with the advert through likes and emojis. Second, we took advantage of the experimental design during one of the campaigns to analyze the differential effect on social media performance of messages conveying the number of people reporting vaccination side effects using words (“Few”/“Majority”) versus numbers (“3 out of 100”). Results The correlational analysis showed that the number of “Clicks” and “Engagement” was similar among campaigns, except for the campaign focusing on vaccines’ effectiveness, which had 14.65 fewer clicks and 19.52 fewer engagements per advert (including controls and district-fixed effects) compared to the base “It’s safe” campaign. Vaccination rates were highest at times coinciding with campaigns focusing on vaccination safety and effectiveness. Our experimental results showed that informational messages related to side effects that were framed using words (“Majority did not report discomfort”/“Few persons reported discomfort”) were better at generating “Clicks” compared to those using numbers (“3 out of 100 reported discomforts”).
Conclusions Facebook adverts highlighting vaccine safety had a similar level of social media performance as other campaigns, except for adverts focusing on vaccine efficacy, which performed worse. Communicating side-effect information with words instead of numbers can expand social media interest in low-uptake regions like the Caribbean. Our results serve as preliminary evidence for public health officials to encourage vaccine uptake in high-hesitancy contexts.
</description>
<pubDate>Thu, 24 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157458</guid>
<dc:date>2024-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>Avoiding electrochemical indentations: a CNT-cocooned LiCoO&lt;sub&gt;2&lt;/sub&gt; electrode with ultra-stable high-voltage cycling</title>
<link>https://hdl.handle.net/1721.1/157450</link>
<description>Avoiding electrochemical indentations: a CNT-cocooned LiCoO&lt;sub&gt;2&lt;/sub&gt; electrode with ultra-stable high-voltage cycling
Zhu, Zhi; Xu, Shuanglong; Wang, Zhenjie; Yan, Xiaohui; Xu, Guiyin; Huang, Yimeng; Wu, Yuping; Zhang, Yin; Li, Ju
Charging LiCoO2 (LCO) to above 4.5 V induces crystal cracking and seriously deteriorates the battery cycle life. Decreasing the range of the LCO misfit strain during deep de-lithiation is useful for preventing cracks, but this is not always achievable. Here, we demonstrate that the limited electrochemical contact area between electronically conductive carbon and the LCO crystal causes “electrochemical indentations” (ECIs) during charging and discharging. Particularly in fast charging, the high local ΔcLi gradient in LCO causes a local volume of the surface lattice to shrink while the rest of the crystal is still under tension, and hence drives the ECI to cause cracking. Increasing the electrochemical contact area would reduce the ECI and cracking risk. Therefore, we developed a free-standing CNT-LCO electrode in which all of the LCO particles were intimately wrapped with a dense CNT cocoon to establish a larger true electrical contact area. Simulations demonstrated that the radial ΔcLi and ECI decreased significantly in the cocooned LCO particles. The cocooned LCO electrode maintained good morphology and retained 94% of its energy density after 400 cycles when charged to 4.55 V. By removing the need for a current collector and binder, the volumetric energy density of the CNT-LCO cathode reached 3200 Wh L−1 (electrode).
</description>
<pubDate>Tue, 13 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157450</guid>
<dc:date>2024-08-13T00:00:00Z</dc:date>
</item>
<item>
<title>Ductile-to-brittle transition and yielding in soft amorphous materials: perspectives and open questions</title>
<link>https://hdl.handle.net/1721.1/157449</link>
<description>Ductile-to-brittle transition and yielding in soft amorphous materials: perspectives and open questions
Soft amorphous materials are viscoelastic solids ubiquitously found around us, from clays and cementitious pastes to emulsions and physical gels encountered in food or biomedical engineering. Under an external deformation, these materials undergo a noteworthy transition from a solid to a liquid state that reshapes the material microstructure. This yielding transition was the main theme of a workshop held from January 9 to 13, 2023 at the Lorentz Center in Leiden. The manuscript presented here offers a critical perspective on the subject, synthesizing insights from the various brainstorming sessions and informal discussions that unfolded during this week of vibrant exchange of ideas. The result of these exchanges takes the form of a series of open questions that represent outstanding experimental, numerical, and theoretical challenges to be tackled in the near future.
</description>
<pubDate>Wed, 11 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157449</guid>
<dc:date>2024-09-11T00:00:00Z</dc:date>
</item>
<item>
<title>Uniting activity design principles of anode catalysts for direct liquid fuel cells</title>
<link>https://hdl.handle.net/1721.1/157448</link>
<description>Uniting activity design principles of anode catalysts for direct liquid fuel cells
Zheng, Daniel J; Peng, Jiayu; McCormack, Kaylee; Xu, Hongbin; Kang, Jin Soo; Wang, Zhenshu; Ren, Zhichu; Li, Ju; Román-Leshkov, Yuriy; Shao-Horn, Yang
Direct liquid fuel cells have advantages over hydrogen-based fuel cells and lithium-ion batteries for portable and mobile applications due to their high volumetric energy density and the convenient storage or refueling of liquid fuels. Unfortunately, the electrochemical oxidation of liquid fuels (such as methanol, ethanol, and formic acid) currently accounts for ∼50% of the energy losses of these devices at operating conditions. Moreover, state-of-the-art catalysts for such critical reactions are generally composed of precious metals such as Pt and Pd, hindering the cost-effective implementation of these technologies. The development of novel catalyst design principles for electrochemical liquid fuel oxidation has been constrained by its complex, structure-sensitive reaction energetics, which can involve multiple parallel, competitive reaction intermediates and pathways. In this review, we aim to dissect and bridge the understanding of fundamental energetics and the materials engineering of novel catalysts for the electrochemical oxidation of various liquid fuels. By deconvoluting these reactions into the energetics of different critical elementary steps, we define essential descriptors that govern the activity and selectivity of electrochemical liquid fuel oxidation. Several universal and fundamental design principles are proposed to optimize the catalytic performance of state-of-the-art and emerging electrocatalysts by tuning the chemistry and electronic structure of active sites. This review connects the electro-oxidation energetics of different liquid fuels with mechanistic and materials-centric studies, providing a holistic picture that links fundamental surface science with materials engineering for the rational design of electrocatalysts for liquid fuel oxidation.
</description>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157448</guid>
</item>
<item>
<title>Leveraging natural language processing to curate the tmCAT, tmPHOTO, tmBIO, and tmSCO datasets of functional transition metal complexes</title>
<link>https://hdl.handle.net/1721.1/157447</link>
<description>Leveraging natural language processing to curate the tmCAT, tmPHOTO, tmBIO, and tmSCO datasets of functional transition metal complexes
Kevlishvili, Ilia; St. Michel, Roland G; Garrison, Aaron G; Toney, Jacob W; Adamji, Husain; Jia, Haojun; Román-Leshkov, Yuriy; Kulik, Heather J
The breadth of transition metal chemical space covered by databases such as the Cambridge Structural Database and the derived computational database tmQM is not conducive to application-specific modeling and the development of structure–property relationships. Here, we employ both supervised and unsupervised natural language processing (NLP) techniques to link experimentally synthesized compounds in the tmQM database to their respective applications. Leveraging NLP models, we curate four distinct datasets: tmCAT for catalysis, tmPHOTO for photophysical activity, tmBIO for biological relevance, and tmSCO for magnetism. Analyzing the chemical substructures within each dataset reveals common chemical motifs in each of the designated applications. We then use these common chemical structures to augment our initial datasets for each application, yielding a total of 21 631 compounds in tmCAT, 4599 in tmPHOTO, 2782 in tmBIO, and 983 in tmSCO. These datasets are expected to accelerate the more targeted computational screening and development of refined structure–property relationships with machine learning.
</description>
<pubDate>Fri, 20 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157447</guid>
<dc:date>2024-09-20T00:00:00Z</dc:date>
</item>
<item>
<title>Empirical estimation of metal powder bed fusion technological improvement rate</title>
<link>https://hdl.handle.net/1721.1/157446</link>
<description>Empirical estimation of metal powder bed fusion technological improvement rate
Alves de Campos, António; Torres Ferreira, Bruna; Gonçalves, Afonso; Leite, Marco; Ribeiro, Inês; L. Magee, Christopher; Henriques, Elsa
This study empirically estimates the technological improvement rate (TIR) of metal powder bed fusion (PBF) technology, widely used in the aerospace, automotive, and medical industries. PBF's continuous long-term adoption growth is driven by its ability to enhance manufacturing efficiency in terms of time and raw material use, as well as its capability to produce high-quality, high-strength, complex-shaped parts. Measuring the technological development of PBF is crucial as it is enlarging its application domain and is increasingly considered a viable alternative to traditional manufacturing technologies across a broader range of applications. We resorted to the literature to collect information and assess which technical parameters are most relevant to measure the capabilities of PBF. With those, we established an ideal functional performance metric (FPM) capable of comprehensively assessing PBF's technological performance improvement. Considering all available data sources and PBF machines ever made commercially available, a data set of technical parameters was constructed. This was followed by a data curation process focusing on data availability and reliability. The resultant practical FPM was used to estimate the TIR of PBF technology. By employing regression analysis, we estimate a yearly improvement of 26.8%. This empirical rate serves as a more accurate and reliable substitute for the previously indirectly estimated patent-derived rate of 33.3%. Our findings underscore PBF's capability of keeping pace with its growing significance and wider industrial applications. The results of this study provide a key metric for those in industry and research, confirming the rapid performance growth and establishing a standard for future industrial uses.
</description>
<pubDate>Tue, 22 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157446</guid>
<dc:date>2024-10-22T00:00:00Z</dc:date>
</item>
<item>
<title>Search for the Bs0 → μ+μ−γ decay</title>
<link>https://hdl.handle.net/1721.1/157445</link>
<description>Search for the Bs0 → μ+μ−γ decay
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
Abstract A search for the fully reconstructed Bs0 → μ+μ−γ decay is performed at the LHCb experiment using proton-proton collisions at √s = 13 TeV, corresponding to an integrated luminosity of 5.4 fb−1. No significant signal is found and upper limits on the branching fraction in intervals of the dimuon mass are set: B(Bs0 → μ+μ−γ) &lt; 4.2 × 10−8 for mμ+μ− ∈ [2mμ, 1.70] GeV/c2; B(Bs0 → μ+μ−γ) &lt; 7.7 × 10−8 for mμ+μ− ∈ [1.70, 2.88] GeV/c2; and B(Bs0 → μ+μ−γ) &lt; 4.2 × 10−8 for mμ+μ− ∈ [3.92, mBs0] GeV/c2, all at 95% confidence level. Additionally, upper limits are set on the branching fraction in the [2mμ, 1.70] GeV/c2 dimuon mass region excluding the contribution from the intermediate ϕ(1020) meson, and in the region combining all dimuon-mass intervals.
</description>
<pubDate>Thu, 11 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157445</guid>
<dc:date>2024-07-11T00:00:00Z</dc:date>
</item>
<item>
<title>Thermodynamics of multi-sublattice battery active materials: from an extended regular solution theory to a phase-field model of LiMnyFe1-yPO4</title>
<link>https://hdl.handle.net/1721.1/157444</link>
<description>Thermodynamics of multi-sublattice battery active materials: from an extended regular solution theory to a phase-field model of LiMnyFe1-yPO4
Ombrini, Pierfrancesco; Bazant, Martin Z; Wagemaker, Marnix; Vasileiadis, Alexandros
Phase separation during the lithiation of redox-active materials is a critical factor affecting battery performance, including energy density, charging rates, and cycle life. Accurate physical descriptions of these materials are necessary for understanding underlying lithiation mechanisms, performance limitations, and optimizing energy storage devices. This work presents an extended regular solution model that captures mutual interactions between sublattices of multi-sublattice battery materials, typically synthesized by metal substitution. We apply the model to phospho-olivine materials and demonstrate its quantitative accuracy in predicting the composition-dependent redox shift of the plateaus of LiMnyFe1-yPO4 (LFMP), LiCoyFe1-yPO4 (LFCP), LiCoxMnyFe1-x-yPO4 (LFMCP), as well as their phase separation behavior. Furthermore, we develop a phase-field model of LFMP that consistently matches experimental data and identifies LiMn0.4Fe0.6PO4 as a superior composition that favors a solid solution phase transition, making it ideal for high-power applications.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157444</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fluids and Electrolytes under Confinement in Single-Digit Nanopores</title>
<link>https://hdl.handle.net/1721.1/157441</link>
<description>Fluids and Electrolytes under Confinement in Single-Digit Nanopores
Confined fluids and electrolyte solutions in nanopores exhibit rich and surprising physics and chemistry that impact the mass transport and energy efficiency in many important natural systems and industrial applications. Existing theories often fail to predict the exotic effects observed in the narrowest of such pores, called single-digit nanopores (SDNs), which have diameters or conduit widths of less than 10 nm, and have only recently become accessible for experimental measurements. What SDNs reveal has been surprising, including a rapidly increasing number of examples such as extraordinarily fast water transport, distorted fluid-phase boundaries, strong ion-correlation and quantum effects, and dielectric anomalies that are not observed in larger pores. Exploiting these effects presents myriad opportunities in both basic and applied research that stand to impact a host of new technologies at the water-energy nexus, from new membranes for precise separations and water purification to new gas permeable materials for water electrolyzers and energy-storage devices. SDNs also present unique opportunities to achieve ultrasensitive and selective chemical sensing at the single-ion and single-molecule limit. In this review article, we summarize the progress on nanofluidics of SDNs, with a focus on the confinement effects that arise in these extremely narrow nanopores. The recent development of precision model systems, transformative experimental tools, and multiscale theories that have played enabling roles in advancing this frontier are reviewed. We also identify new knowledge gaps in our understanding of nanofluidic transport and provide an outlook for the future challenges and opportunities at this rapidly advancing frontier.
</description>
<pubDate>Wed, 22 Mar 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157441</guid>
<dc:date>2023-03-22T00:00:00Z</dc:date>
</item>
<item>
<title>Application of finite Gaussian process distribution of relaxation times on SOFC electrodes</title>
<link>https://hdl.handle.net/1721.1/157440</link>
<description>Application of finite Gaussian process distribution of relaxation times on SOFC electrodes
Williams, Nicholas J; Osborne, Conor; Seymour, Ieuan D; Bazant, Martin Z; Skinner, Stephen J
Electrochemical impedance spectroscopy (EIS) is a powerful tool for characterising processes in electrochemical systems, allowing us to elucidate the resistance and characteristic frequency of physical properties such as reaction and transport rates. The essence of EIS is the relationship between current and potential at a given frequency. However, it is often the case that we do not understand the electrochemical system well enough to fit a meaningful physical model to EIS data. The distribution of relaxation times (DRT) calculation assumes an infinite series of relaxation processes distributed over a characteristic timescale. The DRT calculation may identify the number of processes occurring, as well as their respective resistivity and characteristic timescale, and may resolve processes which have relatively similar timescales. Using a nonparametric tool known as Gaussian process (GP) regression, we showcase a method of finding a unique solution to the ill-posed DRT problem by optimising kernel hyperparameters as opposed to ad hoc regularisation. In this work, we use finite GP regression under inequality constraints (fGP) to analyse EIS data generated by a (Ni/CGO|CGO|YSZ|Reference Cathode) solid-oxide fuel cell in a gas mixture of 0.5 bar H2/0.5 bar H2O at a temperature of 600 °C. By varying the current density, we can characterise the current-voltage relationship of the electrode and shed light on the reaction mechanism governing charge transfer at the solid-gas interface. Our findings also show that even at relatively high current densities (±600 mA cm−2) the electrode process is limited by charge transfer.
</description>
<pubDate>Sat, 01 Apr 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157440</guid>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</item>
<item>
<title>Multiphase Polarization in Ion‐Intercalation Nanofilms: General Theory Including Various Surface Effects and Memory Applications</title>
<link>https://hdl.handle.net/1721.1/157439</link>
<description>Multiphase Polarization in Ion‐Intercalation Nanofilms: General Theory Including Various Surface Effects and Memory Applications
Tian, Huanhuan; Li, Ju; Bazant, Martin Z
Ion concentration polarization (CP, a current-induced concentration gradient adjacent to a charge-selective interface) has been well studied for single-phase mixed conductors (e.g., liquid electrolytes), but multiphase CP has rarely been addressed in the literature. In our recent publication, we proposed that CP above certain threshold currents can flip the phase distribution in multiphase ion-intercalation nanofilms sandwiched between ion-blocking electrodes. This phenomenon is known as multiphase polarization (MP). We further proposed that MP can lead to nonvolatile interfacial resistive switching (RS) for asymmetric electrodes with ion-modulated electron transfer, a theory that reproduces the experimental results of LTO memristors. In this study, a comprehensive 2D phase-field model is derived for coupled ion-electron transport in ion-intercalation materials, with surface effects including electron transfer kinetics, non-neutral wetting, energy relaxation, and surface charge. The model is then used to study MP. The time evolution of phase boundaries is presented, and the switching time, current, energy, and cyclic voltammetry are analyzed for various boundary conditions. It is found that the switching performance can be improved significantly by manipulating surface conditions and mean concentration. Finally, the prospects of MP-based memories and possible extensions of the current model are discussed.
</description>
<pubDate>Thu, 01 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157439</guid>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</item>
<item>
<title>Proton-coupled electron transfer at SOFC electrodes</title>
<link>https://hdl.handle.net/1721.1/157438</link>
<description>Proton-coupled electron transfer at SOFC electrodes
Williams, Nicholas J; Warburton, Robert E; Seymour, Ieuan D; Cohen, Alexander E; Bazant, Martin Z; Skinner, Stephen J
Understanding the charge transfer processes at solid oxide fuel cell (SOFC) electrodes is critical to designing more efficient and robust materials. Activation losses at SOFC electrodes have been widely attributed to the ambipolar migration of charges at the mixed ionic–electronic conductor–gas interface. Empirical Butler–Volmer kinetics based on the transition state theory is often used to model the current–voltage relationship, where charged particles transfer classically over an energy barrier. However, the hydrogen oxidation/water electrolysis reaction H2(g) + O2− ⇌ H2O(g) + 2e− must be modeled through concerted electron and proton tunneling events, where we unify the theory of the electrostatic surface potential with proton-coupled electron transfer kinetics. We derive a framework for the reaction rate that depends on the electrostatic surface potential, adsorbate dipole moment, the electronic structure of the electron donor/acceptor, and vibronic states of the hydrogen species. This theory was used to study the current–voltage characteristics of the Ni/gadolinium-doped ceria electrode in H2/H2O(g), where we find excellent validation of this novel model. These results yield the first reported quantification of the solvent reorganization energy for an SOFC material and suggest that the three-phase boundary mechanism is the dominant pathway for charge transfer at cermet electrodes.
</description>
<pubDate>Wed, 28 Jun 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157438</guid>
<dc:date>2023-06-28T00:00:00Z</dc:date>
</item>
<item>
<title>Antibody Fab‐Fc properties outperform titer in predictive models of SIV vaccine‐induced protection</title>
<link>https://hdl.handle.net/1721.1/157437</link>
<description>Antibody Fab‐Fc properties outperform titer in predictive models of SIV vaccine‐induced protection
Pittala, Srivamshi; Bagley, Kenneth; Schwartz, Jennifer A.; Brown, Eric P.; Weiner, Joshua A.; Prado, Ilia J.; Zhang, Wenlei; Xu, Rong; Ota‐Setlik, Ayuko; Pal, Ranajit; Shen, Xiaoying; Beck, Charles; Ferrari, Guido
Characterizing the antigen‐binding and innate immune‐recruiting properties of the humoral response offers the chance to obtain deeper insights into mechanisms of protection than revealed by measuring only overall antibody titer. Here, a high‐throughput, multiplexed Fab‐Fc Array was employed to profile rhesus macaques vaccinated with a gp120‐CD4 fusion protein in combination with different genetically encoded adjuvants, and subsequently subjected to multiple heterologous simian immunodeficiency virus (SIV) challenges. Systems analyses modeling protection and adjuvant differences using Fab‐Fc Array measurements revealed a set of correlates yielding strong and robust predictive performance, while models based on measurements of response magnitude alone exhibited significantly inferior performance. At the same time, rendering Fab‐Fc measurements mathematically independent of titer had relatively little impact on predictive performance. Similar analyses for a distinct SIV vaccine study also showed that Fab‐Fc measurements performed significantly better than titer. These results suggest that predictive modeling with measurements of antibody properties can provide detailed correlates with robust predictive power, suggest directions for vaccine improvement, and potentially enable discovery of mechanistic associations.
</description>
<pubDate>Thu, 02 May 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157437</guid>
<dc:date>2019-05-02T00:00:00Z</dc:date>
</item>
<item>
<title>North Atlantic Heat Transport Convergence Derived from a Regional Energy Budget Using Different Ocean Heat Content Estimates</title>
<link>https://hdl.handle.net/1721.1/157436</link>
<description>North Atlantic Heat Transport Convergence Derived from a Regional Energy Budget Using Different Ocean Heat Content Estimates
Meyssignac, B.; Fourest, S.; Mayer, Michael; Johnson, G. C.; Calafat, F. M.; Ablain, M.; Boyer, T.; Cheng, L.; Desbruyères, D.; Forget, G.
This study uses an oceanic energy budget to estimate the ocean heat transport convergence in the North Atlantic during 2005–2018. The horizontal convergence of the ocean heat transport is estimated using the ocean heat content tendency primarily derived from satellite altimetry combined with space gravimetry. The net surface energy fluxes are inferred from the mass-corrected divergence of atmospheric energy transport and tendency of the ECMWF ERA5 reanalysis combined with top-of-the-atmosphere radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) project. The indirectly estimated horizontal convergence of the ocean heat transport is integrated between the RAPID Climate Change-Meridional Overturning Circulation and Heatflux Array (RAPID) section at 26.5°N (operating since 2004) and the Overturning in the Subpolar North Atlantic Program (OSNAP) section, situated at 53°–60°N (operating since 2014). This is to validate the ocean heat transport convergence estimate against an independent estimate derived from RAPID and OSNAP in-situ measurements. The mean ocean energy budget of the North Atlantic is closed to within ±0.25 PW between the RAPID and OSNAP sections. The mean oceanic heat transport convergence between these sections is 0.58 ± 0.25 PW, which agrees well with observed section transports. Interannual variability of the inferred oceanic heat transport convergence is also in reasonable agreement with the interannual variability observed at RAPID and OSNAP, with a correlation of 0.54 between annual time series. The correlation increases to 0.67 for biannual time series. Other estimates of the ocean energy budget based on ocean heat content tendency derived from various methods give similar results. Despite a large spread, the correlation is always significant, meaning the results are robust against the method used to estimate the ocean heat content tendency.
</description>
<pubDate>Thu, 24 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157436</guid>
<dc:date>2024-10-24T00:00:00Z</dc:date>
</item>
<item>
<title>Enhancing the Antioxidant Activity of Tea (Camellia sinensis) Through Common Herbal Infusions</title>
<link>https://hdl.handle.net/1721.1/157435</link>
<description>Enhancing the Antioxidant Activity of Tea (Camellia sinensis) Through Common Herbal Infusions
Ortiz-Islas, Sofia; Espinosa-Leal, Claudia A.; González-Rodríguez, Tzitziki; García-Lara, Silverio
Tea is the second most widely consumed beverage globally, after water, and is known for its substantial antioxidant properties, primarily due to its phenolic content. This study quantifies phenolic compounds and assesses antioxidant activity in ten types of tea and selected herbal infusions, individually and in combination. Our findings reveal that free phenolic compounds and their antioxidant activity were twelve and eight times greater, respectively, than those of bound phenolic compounds. Among individual infusions, white tea exhibited the highest antioxidant activity and phenolic content, with 172.51 µmol TE/1000 g and 7.83 mg GAE/1000 g, respectively. In combination, white/linden flower tea showed the highest antioxidant activity (374.44 µmol TE/1000 g), and white/orange tea contained the highest phenolic content (9.24 mg GAE/1000 g). This study identified primarily two phenolic compounds, epigallocatechin gallate and epicatechin gallate, and one alkaloid, caffeine, in tea and herbal combinations. Compared to other combinations, we observed significant variations in catechins and caffeine between white and dark teas. Integrating specific herbal infusions with tea can enhance antioxidant activity up to three-fold compared to tea alone. This research offers valuable insights into optimizing herbal infusions to maximize antioxidant benefits, creating new opportunities to enhance the health benefits of tea-based products.
</description>
<pubDate>Wed, 16 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157435</guid>
<dc:date>2024-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Photoplethysmography Features Correlated with Blood Pressure Changes</title>
<link>https://hdl.handle.net/1721.1/157434</link>
<description>Photoplethysmography Features Correlated with Blood Pressure Changes
Elgendi, Mohamed; Jost, Elisabeth; Alian, Aymen; Fletcher, Richard Ribon; Bomberg, Hagen; Eichenberger, Urs; Menon, Carlo
Blood pressure measurement is a key indicator of vascular health and a routine part of medical examinations. Given the ability of photoplethysmography (PPG) signals to provide insights into the microvascular bed and their compatibility with wearable devices, significant research has focused on using PPG signals for blood pressure estimation. This study aimed to identify specific clinical PPG features that vary with different blood pressure levels. Through a literature review of 297 publications, we selected 16 relevant studies and identified key time-dependent PPG features associated with blood pressure prediction. Our analysis highlighted the second derivative of PPG signals, particularly the b/a and d/a ratios, as the most frequently reported and significant predictors of systolic blood pressure. Additionally, features from the velocity and acceleration photoplethysmograms were also notable. In total, 29 features were analyzed, revealing novel temporal domain features that show promise for further research and application in blood pressure estimation.
</description>
<pubDate>Thu, 17 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157434</guid>
<dc:date>2024-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling the Performance of a Multi-Hop LoRaWAN Linear Sensor Network for Energy-Efficient Pipeline Monitoring Systems</title>
<link>https://hdl.handle.net/1721.1/157433</link>
<description>Modeling the Performance of a Multi-Hop LoRaWAN Linear Sensor Network for Energy-Efficient Pipeline Monitoring Systems
Alhomyani, Haneen; Fadel, Mai; Dimitriou, Nikos; Bakhsh, Helen; Aldabbagh, Ghadah
In recent years, LoRa technology has emerged as a solution for wide-area coverage IoT applications. Deploying a single-hop LoRa network may be challenging for deployments that require linear sensor network topologies covering very large distances over unpopulated areas with limited access to cellular networks and energy grids. In such cases, multi-hop communication may provide a better alternative. This research aims to study the deployment of multi-hop linear sensor networks that are energy efficient. The focus will be on assessing the coverage, throughput, and energy consumption benefits that can be achieved and the related tradeoffs that have to be considered when using multi-hop solutions. Since monitoring systems in long-distance infrastructures may benefit from solutions based on multi-hop communication, we consider oil pipeline infrastructures in the Saudi Arabian desert as a case study. An analytical model is considered for estimating the above-stated parameters and evaluating the performance of the multi-hop LoRa WSN (MHWSN) against the single-hop LoRa WSN (SHWSN). In addition, the model is used to study the tradeoffs between throughput and energy consumption in different settings of MHWSNs. Scenarios of oil pipeline monitoring systems in Saudi Arabia are specified for studying the proposed multi-hop system&amp;rsquo;s performance. The obtained results show that when we have a large-scale network, such as an oil pipeline with medium traffic load requirements, multi-hop topologies may be an efficient deployment solution.
</description>
<pubDate>Tue, 15 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157433</guid>
<dc:date>2024-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>A Branched Convolutional Neural Network for Forecasting the Occurrence of Hazes in Paris Using Meteorological Maps with Different Characteristic Spatial Scales</title>
<link>https://hdl.handle.net/1721.1/157432</link>
<description>A Branched Convolutional Neural Network for Forecasting the Occurrence of Hazes in Paris Using Meteorological Maps with Different Characteristic Spatial Scales
Wang, Chien
A convolutional neural network (CNN) has been developed to forecast the occurrence of low-visibility events or hazes in the Paris area. It has been trained and validated using multi-decadal daily regional maps of many meteorological and hydrological variables alongside surface visibility observations. The strategy is to make the machine learn from available historical data to recognize various regional weather and hydrological regimes associated with low-visibility events. To better preserve the characteristic spatial information of input features in training, two branched architectures have recently been developed. These architectures first process input features through several branched CNNs with different kernel sizes to capture patterns with certain characteristic spatial scales. The outputs from the first part of the network are then processed by the second part, a deep non-branched CNN, to further deliver predictions. The CNNs with new architectures have been trained using data from 1975 to 2019 in a two-class (haze versus non-haze) classification mode as well as a regression mode that directly predicts the value of surface visibility. The predictions of regression have also been used to perform the two-class classification forecast using the same definition in the classification mode. This latter procedure is found to deliver a much better performance in making class-based forecasts than the direct classification machine does, primarily by reducing false alarm predictions. The branched architectures have improved the performance of the networks in the validation and also in an evaluation using the data from 2021 to 2023 that have not been used in the training and validation. Specifically, in the latter evaluation, branched machines captured 70% of the observed low-visibility events during the three-year period at Charles de Gaulle Airport. Among those predicted low-visibility events by the machines, 74% of them are true cases based on observation.
</description>
<pubDate>Thu, 17 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157432</guid>
<dc:date>2024-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Machine Learning Approaches for the Prediction of Postoperative Major Complications in Patients Undergoing Surgery for Bowel Obstruction</title>
<link>https://hdl.handle.net/1721.1/157431</link>
<description>Machine Learning Approaches for the Prediction of Postoperative Major Complications in Patients Undergoing Surgery for Bowel Obstruction
Mazzotta, Alessandro D.; Burti, Elisa; Causio, Francesco Andrea; Orlandi, Alex; Martinelli, Silvia; Longaroni, Mattia; Pinciroli, Tiziana; Debs, Tarek; Costa, Gianluca; Miccini, Michelangelo; Aurello, Paolo; Petrucciani, Niccolò
J. Pers. Med. 2024, 14(10), 1043; https://doi.org/10.3390/jpm14101043
Submission received: 27 July 2024 / Revised: 13 September 2024 / Accepted: 25 September 2024 / Published: 8 October 2024
Background: Performing emergency surgery for bowel obstruction continues to place a significant strain on the healthcare system. Conventional assessment methods for outcomes in bowel obstruction cases often concentrate on isolated factors, and the evaluation of results for individuals with bowel obstruction remains poorly studied. This study aimed to examine the risk factors associated with major postoperative complications. Methods: We retrospectively analyzed 99 patients undergoing surgery from 2015 to 2022. We divided the patients into two groups: (1) benign-related obstruction (n = 68) and (2) cancer-related obstruction (n = 31). We used logistic regression, KNN, and XGBoost. We calculated the receiver operating characteristic curve and accuracy of the model. Results: Colon obstructions were more frequent in the cancer group (p = 0.005). Operative time, intestinal resection, and stoma were significantly more frequent in the cancer group. Major complications occurred in 41% of the cancer group vs. 20% of the benign group (p = 0.03). Uni- and multivariate analysis showed that the significant risk factors for major complications were cancer-related obstruction and CRP. The best model was KNN, with an accuracy of 0.82. Conclusions: Colonic obstruction is associated with tumor-related blockage. Malignancy and an increase in C-reactive protein (CRP) are significant risk factors for major complications in patients undergoing emergency surgery for bowel obstruction. KNN could improve the process of counseling and the perioperative management of patients with intestinal obstruction in emergency settings.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157431</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Information FOMO: The Unhealthy Fear of Missing Out on Information&amp;mdash;A Method for Removing Misleading Data for Healthier Models</title>
<link>https://hdl.handle.net/1721.1/157430</link>
<description>Information FOMO: The Unhealthy Fear of Missing Out on Information&amp;mdash;A Method for Removing Misleading Data for Healthier Models
Pickering, Ethan; Sapsis, Themistoklis P.
Misleading or unnecessary data can have out-sized impacts on the health or accuracy of Machine Learning (ML) models. We present a Bayesian sequential selection method, akin to Bayesian experimental design, that identifies critically important information within a dataset while ignoring data that are either misleading or bring unnecessary complexity to the surrogate model of choice. Our method improves sample-wise error convergence and eliminates instances where more data lead to worse performance and instabilities of the surrogate model, often termed sample-wise &amp;ldquo;double descent&amp;rdquo;. We find these instabilities are a result of the complexity of the underlying map and are linked to extreme events and heavy tails. Our approach has two key features. First, the selection algorithm dynamically couples the chosen model and data. Data are chosen based on their merits towards improving the selected model, rather than being compared strictly against other data. Second, a natural convergence of the method removes the need for dividing the data into training, testing, and validation sets. Instead, the selection metric inherently assesses testing and validation error through global statistics of the model. This ensures that key information is never wasted in testing or validation. The method is applied using both Gaussian process regression and deep neural network surrogate models.
</description>
<pubDate>Mon, 30 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157430</guid>
<dc:date>2024-09-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Highly Linear Ultra-Low-Area-and-Power CMOS Voltage-Controlled Oscillator for Autonomous Microsystems</title>
<link>https://hdl.handle.net/1721.1/157429</link>
<description>A Highly Linear Ultra-Low-Area-and-Power CMOS Voltage-Controlled Oscillator for Autonomous Microsystems
Pacheco, Javier de Mena; Palacios, Tomas; Hempel, Marek; Vallejo, Marisa Lopez
Voltage-controlled oscillators (VCOs) can be an excellent means of converting a magnitude into a readable value. However, their design becomes a real challenge for power-and-area-constrained applications, especially when a linear response is required. This paper presents a VCO for smart dust systems fabricated in 65 nm technology. It is designed to minimize leakage, limit high peak currents and provide an output whose frequency variation is linear with the input voltage, while allowing rail-to-rail input range swing. The oscillator occupies 592 μm², operates in a frequency range from 43 to 53 Hz and consumes a maximum average power of 210 pW at a supply voltage of 1 V and 4 pW at 0.3 V. In addition, the proposed VCO exhibits a quasi-linear response of frequency vs. supply voltage and temperature, allowing easy temperature compensation with complementary to absolute temperature (CTAT) voltage.
</description>
<pubDate>Thu, 26 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157429</guid>
<dc:date>2024-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Hybrid-MPET: An Open-Source Simulation Software for Hybrid Electrode Batteries</title>
<link>https://hdl.handle.net/1721.1/157427</link>
<description>Hybrid-MPET: An Open-Source Simulation Software for Hybrid Electrode Batteries
Liang, Qiaohao; Bazant, Martin Z
As the design of single-component battery electrodes has matured, the battery industry has turned to hybrid electrodes with blends of two or more active materials to enhance battery performance. Leveraging the best properties of each material while mitigating their drawbacks, multi-component hybrid electrodes open a vast new design space that could be most efficiently explored through simulations. In this article, we introduce a mathematical modeling framework and open-source battery simulation software package for Hybrid Multiphase Porous Electrode Theory (Hybrid-MPET), capable of accounting for the parallel reactions, phase transformations and multiscale heterogeneities in hybrid porous electrodes. Hybrid-MPET models can simulate both solid solution and multiphase active materials in hybrid electrodes at intra-particle and inter-particle scales. Its modular design also allows the combination of different active materials at any capacity fraction. To illustrate the novel features of Hybrid-MPET, we present experimentally validated models of silicon-graphite (Si-Gr) anodes used in electric vehicle batteries and carbon monofluoride (CFx) - silver vanadium oxide (SVO) cathodes used in implantable medical device batteries. The results demonstrate the potential of Hybrid-MPET models to accelerate the development of hybrid electrode batteries by providing fast predictions of their performance over a wide range of design parameters and operating protocols.
</description>
<pubDate>Fri, 01 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157427</guid>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</item>
<item>
<title>Learning heterogeneous reaction kinetics from X-ray videos pixel by pixel</title>
<link>https://hdl.handle.net/1721.1/157426</link>
<description>Learning heterogeneous reaction kinetics from X-ray videos pixel by pixel
Zhao, Hongbo; Deng, Haitao Dean; Cohen, Alexander E; Lim, Jongwoo; Li, Yiyang; Fraggedakis, Dimitrios; Jiang, Benben; Storey, Brian D; Chueh, William C; Braatz, Richard D; Bazant, Martin Z
Reaction rates at spatially heterogeneous, unstable interfaces are notoriously difficult to quantify, yet are essential in engineering many chemical systems, such as batteries1 and electrocatalysts2. Experimental characterizations of such materials by operando microscopy produce rich image datasets3,4,5,6, but data-driven methods to learn physics from these images are still lacking because of the complex coupling of reaction kinetics, surface chemistry and phase separation7. Here we show that heterogeneous reaction kinetics can be learned from in situ scanning transmission X-ray microscopy (STXM) images of carbon-coated lithium iron phosphate (LFP) nanoparticles. Combining a large dataset of STXM images with a thermodynamically consistent electrochemical phase-field model, partial differential equation (PDE)-constrained optimization and uncertainty quantification, we extract the free-energy landscape and reaction kinetics and verify their consistency with theoretical models. We also simultaneously learn the spatial heterogeneity of the reaction rate, which closely matches the carbon-coating thickness profiles obtained through Auger electron microscopy (AEM). Across 180,000 image pixels, the mean discrepancy with the learned model is remarkably small (&lt;7%) and comparable with experimental noise. Our results open the possibility of learning nonequilibrium material properties beyond the reach of traditional experimental methods and offer a new non-destructive technique for characterizing and optimizing heterogeneous reactive surfaces.
</description>
<pubDate>Thu, 14 Sep 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157426</guid>
<dc:date>2023-09-14T00:00:00Z</dc:date>
</item>
<item>
<title>A Universal Approximation for Conductance Blockade in Thin Nanopore Membranes</title>
<link>https://hdl.handle.net/1721.1/157425</link>
<description>A Universal Approximation for Conductance Blockade in Thin Nanopore Membranes
Shah, Arjav; Pathak, Shakul; Li, Kun; Garaj, Slaven; Bazant, Martin Z; Gupta, Ankur; Doyle, Patrick S
Nanopore-based sensing platforms have transformed single-molecule detection and analysis. The foundation of nanopore translocation experiments lies in conductance measurements, yet existing models, which are largely phenomenological, are inaccurate in critical experimental conditions such as thin and tightly fitting pores. Of the two components of the conductance blockade, channel and access resistance, the access resistance is poorly modeled. We present a comprehensive investigation of the access resistance and associated conductance blockade in thin nanopore membranes. By combining a first-principles approach, multiscale modeling, and experimental validation, we propose a unified theoretical modeling framework. The analytical model derived as a result surpasses current approaches across a broad parameter range. Beyond advancing our theoretical understanding, our framework's versatility enables analyte size inference and predictive insights into conductance blockade behavior. Our results will facilitate the design and optimization of nanopore devices for diverse applications, including nanopore base calling and data storage.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157425</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Electric field induced associations in the double layer of salt-in-ionic-liquid electrolytes</title>
<link>https://hdl.handle.net/1721.1/157424</link>
<description>Electric field induced associations in the double layer of salt-in-ionic-liquid electrolytes
Markiewitz, Daniel M; Goodwin, Zachary AH; McEldrew, Michael; Pedro de Souza, J; Zhang, Xuhui; Espinosa-Marzal, Rosa M; Bazant, Martin Z
Ionic liquids (ILs) are an extremely exciting class of electrolytes for energy storage applications. Upon dissolving alkali metal salts, such as Li or Na based salts, with the same anion as the IL, an intrinsically asymmetric electrolyte can be created for use in batteries, known as a salt-in-ionic liquid (SiIL). These SiILs have been well studied in the bulk, where negative transference numbers of the alkali metal cation have been observed from the formation of small, negatively charged clusters. The properties of these SiILs at electrified interfaces, however, have received little to no attention. Here, we develop a theory for the electrical double layer (EDL) of SiILs where we consistently account for the thermoreversible association of ions into Cayley tree aggregates. The theory predicts that the IL cations first populate the EDL at negative voltages, as they are not strongly bound to the anions. However, at large negative voltages, which are strong enough to break the alkali metal cation–anion associations, these IL cations are exchanged for the alkali metal cation because of their higher charge density. At positive voltages, we find that the SiIL actually becomes more aggregated while screening the electrode charge from the formation of large, negatively charged aggregates. Therefore, in contrast to conventional intuition of associations in the EDL, SiILs appear to become more associated in certain electric fields. We present these theoretical predictions to be verified by molecular dynamics simulations and experimental measurements.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157424</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Modeling Lithium Plating Onset on Porous Graphite Electrodes Under Fast Charging with Hierarchical Multiphase Porous Electrode Theory</title>
<link>https://hdl.handle.net/1721.1/157423</link>
<description>Modeling Lithium Plating Onset on Porous Graphite Electrodes Under Fast Charging with Hierarchical Multiphase Porous Electrode Theory
Lian, Huada; Bazant, Martin Z
Lithium plating during fast charging of porous graphite electrodes in lithium-ion batteries accelerates degradation and raises safety concerns. Predicting lithium plating is challenging due to the close redox potentials of lithium reduction and intercalation, obscured by the nonlinear dynamics of electrochemically driven phase separation in hierarchical pore structures. To resolve dynamical resistance of realistic porous graphite electrodes, we introduce a model of porous secondary graphite particles to the multiphase porous electrode theory (MPET), based on electrochemical nonequilibrium thermodynamics and volume averaging. The resulting computational framework of “hierarchical MPET” is validated and tested against experimental data over a wide range of fast charging conditions and capacities. With all parameters estimated from independent sources, the model is able to quantitatively predict the measured cell voltages, and, more importantly, the experimentally determined capacity for lithium plating onset at fast 2C to 6C rates. Spatial and temporal heterogeneities in the lithiation of porous graphite electrodes are revealed and explained theoretically, including key features, such as idle graphite particles and non-uniform plating, which have been observed experimentally.
</description>
<pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157423</guid>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Unraveling the Dynamics of Mental and Visuospatial Workload in Virtual Reality Environments</title>
<link>https://hdl.handle.net/1721.1/157422</link>
<description>Unraveling the Dynamics of Mental and Visuospatial Workload in Virtual Reality Environments
Bernal, Guillermo; Jung, Hahrin; Yassı, İsmail Emir; Hidalgo, Nelson; Alemu, Yodahe; Barnes-Diana, Tyler; Maes, Pattie
Mental workload, visuospatial processes and autonomic nervous system (ANS) activity are highly intertwined phenomena crucial for achieving optimal performance and improved mental health. Virtual reality (VR) serves as an effective tool for creating a variety of controlled environments to better probe these features. This study investigates the relationship between mental and visuospatial workload, physiological arousal, and performance during a high-demand task in a VR environment. We utilized a modified version of the popular computer game TETRIS as the task, involving 25 participants, and employed a physiological computing VR headset that simultaneously records multimodal physiological data. Our findings indicate a broadband increase in EEG power just prior to a helper event, followed by a spike of visuospatial engagement (parietal alpha and beta 0-1-3 s) occurring concurrently with a decrease in mental workload (frontal theta 2&amp;ndash;4 s), and subsequent decreases in visuospatial engagement (parietal theta at 14 s) and physiological arousal (HRV at 20 s). Regression analysis indicated that the subjective relief and helpfulness of the helper intervention was primarily driven by a decrease in physiological arousal and an increase in visuospatial engagement. These findings highlight the importance of multimodal physiological recording in rich environments, such as real-world scenarios and VR, to understand the interplay between the various physiological responses involved in mental and visuospatial workload.
</description>
<pubDate>Thu, 26 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157422</guid>
<dc:date>2024-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Sororin actively maintains sister chromatid cohesion</title>
<link>https://hdl.handle.net/1721.1/157421</link>
<description>Sororin actively maintains sister chromatid cohesion
Ladurner, Rene; Kreidl, Emanuel; Ivanov, Miroslav P.; Ekker, Heinz; Idarraga‐Amado, Maria H.; Busslinger, Georg A.; Wutz, Gordana; Cisneros, David A.; Peters, Jan‐Michael
Cohesion between sister chromatids is established during DNA replication but needs to be maintained to enable proper chromosome–spindle attachments in mitosis or meiosis. Cohesion is mediated by cohesin, but also depends on cohesin acetylation and sororin. Sororin contributes to cohesion by stabilizing cohesin on DNA. Sororin achieves this by inhibiting WAPL, which otherwise releases cohesin from DNA and destroys cohesion. Here we describe mouse models which enable the controlled depletion of sororin by gene deletion or auxin‐induced degradation. We show that sororin is essential for embryonic development, cohesion maintenance, and proper chromosome segregation. We further show that the acetyltransferases ESCO1 and ESCO2 are essential for stabilizing cohesin on chromatin, that their only function in this process is to acetylate cohesin's SMC3 subunit, and that DNA replication is also required for stable cohesin–chromatin interactions. Unexpectedly, we find that sororin interacts dynamically with the cohesin complexes it stabilizes. This implies that sororin recruitment to cohesin does not depend on the DNA replication machinery or process itself, but on a property that cohesin acquires during cohesion establishment.
</description>
<pubDate>Mon, 22 Feb 2016 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157421</guid>
<dc:date>2016-02-22T00:00:00Z</dc:date>
</item>
<item>
<title>Integrating Multi-Cancer Early Detection (MCED) Tests with Standard Cancer Screening: System Dynamics Model Development and Feasibility Testing</title>
<link>https://hdl.handle.net/1721.1/157420</link>
<description>Integrating Multi-Cancer Early Detection (MCED) Tests with Standard Cancer Screening: System Dynamics Model Development and Feasibility Testing
Fagery, Mussab; Khorshidi, Hadi A.; Wong, Stephen Q.; Karanfil, Özge; Emery, Jon; IJzerman, Maarten J.
Background Cancer screening plays a critical role in early disease detection and improving outcomes. In Australia, established screening protocols for colorectal, breast and cervical cancer have significantly contributed to timely cancer detection. However, the recent introduction of multi-cancer early detection (MCED) tests arguably can disrupt current screening, yet the extent to which these tests provide additional benefits remains uncertain. We present the development and initial validation of a system dynamics (SD) model that estimates the additional cancer detections and costs associated with MCED tests. Aim This article describes the development of a simulation model built to evaluate the additional patient diagnoses and the economic impact of incorporating MCED testing alongside Australia’s well-established standard of care (SOC) screening programs for colorectal, breast, cervical and lung cancers. The model was designed to estimate the additional number of patients diagnosed at each cancer stage (stage I, II, III, IV, or unknown) and the associated costs. This simulation model allows for the analysis of multiple scenarios under a plausible set of assumptions regarding population-level participation rates. Methods An SD model was developed to represent the existing SOC national cancer screening pathways and to integrate potential clinical pathways that could be introduced by MCED tests. The SD model was built to investigate three scenarios for the use of MCED testing: firstly, to explore the viability of MCED testing as a substitute among individuals who are not opting for SOC screening for any reason; secondly, to implement MCED testing exclusively for individuals ineligible for SOC screening, yet have high-risk characteristics; and thirdly, to employ MCED testing after SOC screening to serve as a triaging/confirmatory tool for individuals receiving inconclusive test results. 
The three primary scenarios were constructed by varying diagnostic accuracy and uptake rates of MCED tests. Discussion The clinical utility and outcomes of MCED testing for screening and early detection still lack comprehensive evidence. Nonetheless, this simulation model facilitates a thorough analysis of MCED tests within the Australian healthcare context, providing insights into potential additional detections and costs to the healthcare system, which may help prioritise future evidence development. The adaptable yet novel SD model presented herein is anticipated to be of considerable interest to industry, policymakers, consumers and clinicians involved in informing clinical and economic decisions regarding integrating MCED tests as cancer screening and early detection tools. The expected results of applying this SD model will determine whether using MCED testing in conjunction with SOC screening offers any potential benefits, possibly guiding policy decisions and clinical practices towards the adoption of MCED tests.
</description>
<pubDate>Fri, 18 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157420</guid>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>UROPOT: study protocol for a randomized, double-blind phase I/II trial for metabolism-based potentiation of antimicrobial prophylaxis in the urological tract</title>
<link>https://hdl.handle.net/1721.1/157419</link>
<description>UROPOT: study protocol for a randomized, double-blind phase I/II trial for metabolism-based potentiation of antimicrobial prophylaxis in the urological tract
Stritt, Kevin; Roth, Beat; Masnada, Audrey; Hammann, Felix; Jacot, Damien; Domingos-Pereira, Sonia; Crettenand, François; Bohner, Perrine; Sommer, Isabelle; Bréat, Emilien; Sauser, Julien; Derré, Laurent; Haschke, Manuel; Collins, James J.; McKinney, John
Background Urinary tract catheters, including Double-J or ureteral stents, are prone to bacterial colonization forming biofilms and leading to asymptomatic bacteriuria. In the context of asymptomatic bacteriuria, endourological procedures that induce mucosal lesions can lead to severe infections. Antibiotic prophylaxis is warranted, yet its efficacy is limited by biofilm formation on stents. Biofilms promote antibiotic tolerance, the capacity of genetically susceptible bacteria to survive a normally lethal dose of antimicrobial therapy. The UROPOT study evaluates the effectiveness of a first-in-type metabolism-based aminoglycoside potentiation for (i) preventing infectious complications of asymptomatic bacteriuria during mucosa lesion-inducing endourological procedures and (ii) assessing its anti-tolerance efficacy. Methods The UROPOT trial is a phase I/II single-center (Lausanne University Hospital (CHUV), Switzerland) randomized double-blinded trial. Over 2 years, patients with asymptomatic Escherichia coli and/or Klebsiella pneumoniae bacteriuria, undergoing endourological procedures, will be randomly allocated to one of three treatment arms (1:1:1 randomization ratio, 30 patients per group) to evaluate the efficacy of mannitol-potentiated low-dose amikacin compared to established standard treatments (ceftriaxone or amikacin standard dose). Patients will be recruited at the CHUV Urology Outpatient Clinic. The primary outcome is the comparative incidence of postoperative urinary tract infections (assessed at 48 h) between the investigational amikacin/mannitol therapy and standard (ceftriaxone or amikacin) antibiotic prophylaxis, defined by specific systemic symptoms and/or positive blood and/or urine culture. Secondary outcomes include assessing microbiological eradication through anti-biofilm activity, sustained microbiological eradication, and mannitol and antibiotic pharmacokinetics in blood and urine. 
Safety outcomes will evaluate the incidence of adverse events following amikacin/mannitol therapy and postoperative surgical complications at postoperative day 14. Discussion UROPOT tests a novel antimicrobial strategy based on “metabolic potentiation” for prophylaxis, enabling aminoglycoside dose reduction and targeting biofilm activity. The anti-biofilm effect may prove particularly beneficial in pre-stented patients who have a permanent stent in situ and require recurrent endourological manipulations, both in preventing infections and in achieving sustained microbiological eradication. Trial registration The protocol is approved by the local ethics committee (CER-VD, 2023–01369, protocole 2.0) and the Swiss Agency for Therapeutic Products (Swissmedic, 701,676) and is registered on the NIH’s ClinicalTrials.gov (trial registration number: NCT05761405). Registered on March 07, 2023.
</description>
<pubDate>Tue, 15 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157419</guid>
<dc:date>2024-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of muonic Dalitz decays of χb mesons and precise spectroscopy of hidden-beauty states</title>
<link>https://hdl.handle.net/1721.1/157418</link>
<description>Observation of muonic Dalitz decays of χb mesons and precise spectroscopy of hidden-beauty states
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Alessio, F.
The decays of the χb1(1P), χb2(1P), χb1(2P) and χb2(2P) mesons into the Υ(1S)μ+μ− final state are observed with a high significance using proton-proton collision data collected with the LHCb detector and corresponding to an integrated luminosity of 9 fb−1. The newly observed decays together with the Υ(2S) → Υ(1S)π+π− and Υ(3S) → Υ(2S)π+π− decay modes are used for precision measurements of the mass and mass splittings for the hidden-beauty states.
</description>
<pubDate>Wed, 16 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157418</guid>
<dc:date>2024-10-16T00:00:00Z</dc:date>
</item>
<item>
<title>Optimizing Transformation-Induced Plasticity to Resist Microvoid Softening</title>
<link>https://hdl.handle.net/1721.1/157417</link>
<description>Optimizing Transformation-Induced Plasticity to Resist Microvoid Softening
Snow, Brandon D.; Olson, G. B.; Parks, D. M.
Many high-performance steels that are critical for energy-efficient, lightweight designs rely on transformation-induced plasticity (TRIP) to achieve superior combinations of strength and ductility/toughness. Further development of these alloys will require greater optimization of the metastable (retained) austenite phase responsible for TRIP. Considering the complex nature of TRIP and its effects on ductile fracture, an integrated computational materials engineering (ICME) approach to materials optimization is desired. In this work, we report the results of a large series of micromechanical finite element calculations that probe the interaction of TRIP and void-mediated ductile fracture mechanisms. The simulations identify the optimal austenite stability for maximizing the benefit of TRIP across a wide range of stress states. The applied stress triaxiality significantly influences the microvoid growth rate and the computationally determined optimal stability. The simulation results are compared with existing experimental data, demonstrating good agreement.
</description>
<pubDate>Fri, 18 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157417</guid>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Formal contracts mitigate social dilemmas in multi-agent reinforcement learning</title>
<link>https://hdl.handle.net/1721.1/157416</link>
<description>Formal contracts mitigate social dilemmas in multi-agent reinforcement learning
Haupt, Andreas; Christoffersen, Phillip; Damani, Mehul; Hadfield-Menell, Dylan
Multi-agent Reinforcement Learning (MARL) is a powerful tool for training autonomous agents acting independently in a common environment. However, it can lead to sub-optimal behavior when individual incentives and group incentives diverge. Humans are remarkably capable of solving these social dilemmas. It is an open problem in MARL to replicate such cooperative behaviors in selfish agents. In this work, we draw upon the idea of formal contracting from economics to overcome diverging incentives between agents in MARL. We propose an augmentation to a Markov game where agents voluntarily agree to binding transfers of reward, under pre-specified conditions. Our contributions are theoretical and empirical. First, we show that this augmentation makes all subgame-perfect equilibria of all Fully Observable Markov Games exhibit socially optimal behavior, given a sufficiently rich space of contracts. Next, we show that for general contract spaces, and even under partial observability, richer contract spaces lead to higher welfare. Hence, contract space design solves an exploration-exploitation tradeoff, sidestepping incentive issues. We complement our theoretical analysis with experiments. Issues of exploration in the contracting augmentation are mitigated using a training methodology inspired by multi-objective reinforcement learning: Multi-Objective Contract Augmentation Learning. We test our methodology in static, single-move games, as well as dynamic domains that simulate traffic, pollution management, and common pool resource management.
</description>
<pubDate>Fri, 18 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157416</guid>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>New Generalized Derivatives for Solving Variational Inequalities Using the Nonsmooth Newton Methods</title>
<link>https://hdl.handle.net/1721.1/157415</link>
<description>New Generalized Derivatives for Solving Variational Inequalities Using the Nonsmooth Newton Methods
Song, Yingkai; Barton, Paul I.
Variational inequality (VI) generalizes many mathematical programming problems and has a wide variety of applications. One class of VI solution methods is to reformulate a VI into a normal map nonsmooth equation system, which is then solved using nonsmooth equation-solving techniques. In this article, we propose a first practical approach for furnishing B-subdifferential elements of the normal map, which in turn enables solving the normal map equation system using variants of the B-subdifferential-based nonsmooth Newton method. It is shown that our new method requires less stringent conditions to achieve local convergence than some other established methods, and thus guarantees convergence in certain cases where other methods may fail. We compute a B-subdifferential element using the LD-derivative, which is a recently established generalized derivative concept. In our new approach, an LD-derivative is computed by solving a sequence of strictly convex quadratic programs, which can be terminated early under certain conditions. Numerical examples are provided to illustrate the convergence properties of our new method, based on a proof-of-concept implementation in Python.
</description>
<pubDate>Sat, 19 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157415</guid>
<dc:date>2024-10-19T00:00:00Z</dc:date>
</item>
<item>
<title>Possession and syntactic categories: An argument from Äiwoo</title>
<link>https://hdl.handle.net/1721.1/157414</link>
<description>Possession and syntactic categories: An argument from Äiwoo
Roversi, Giovanni
This paper argues that possession is syntactically category-flexible. While it is clear that in many languages possession is mostly grounded in and operates in the nominal extended projection (Szabolcsi 1983; Kayne 1993), I show that this cannot be universal. The empirical part of this article is a case study of Äiwoo, which I argue has an inherently verbal counterpart of English ’s, an abstract transitive verb I label poss. This verb can be used by itself to form clausal possession: ‘I poss this boat’ ≈ ‘this boat is mine.’ Possessed DPs also contain the verb poss: the object of this verb is extracted, forming a relative clause. Informally, ‘my boat’ really is ‘the boatᵢ’ ≈ ‘the boat that is mine.’ Given this, Äiwoo simply lacks true nominal possessives. The theoretical consequence is that possession can be mapped onto different syntactic categories in different languages. This is a welcome result, as it makes the syntax-semantics mapping as flexible as it needs to be: if possession is just a tool to assert that a certain relation holds between two entities, nothing in our theory of grammar predicts that such a notion should only be limited to a specific syntactic category.
</description>
<pubDate>Fri, 18 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157414</guid>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</item>
<item>
<title>Estimating transit’s land-use multiplier: direct and indirect effects on vehicle miles traveled</title>
<link>https://hdl.handle.net/1721.1/157413</link>
<description>Estimating transit’s land-use multiplier: direct and indirect effects on vehicle miles traveled
Sabouri, Sadegh; Ewing, Reid; Kalantari, Hannaneh A.
The significance of public transit in curbing greenhouse gas (GHG) emissions and reducing vehicle miles traveled (VMT) goes beyond its users. Investments in transit infrastructure, coupled with service enhancements and their consequential impacts on urban development (termed indirect effects), have the potential to foster location efficiency. This concept encompasses the advantageous proximity of vital destinations such as workplaces and retail establishments to the residences that necessitate access. In this context, investments made in public transit systems exhibit a multiplier effect, commonly quantified as the reduction in VMT per passenger mile of transit usage. While this topic has gained attention over the past few decades, an agreement regarding the size of the multiplier effect has yet to be reached among researchers. This study employs a multilevel structural equation model and leverages a comprehensive database of household travel survey data from 31 diverse regions. By utilizing trip-level data, this study provides results that possess external validity and generalizability, overcoming limitations identified in earlier research. Additionally, this study aims to present a simplified formula that enables transit agencies nationwide to compute their unique multipliers. The findings suggest that regions with extensive transit systems exhibit higher transit multipliers compared to regions with limited transit access. Furthermore, the impact of transit within a community extends well beyond merely the reduction in private vehicle usage by transit passengers. Rather, the alterations in the built environment in transit-served communities lead to substantial VMT savings, surpassing the effects solely attributed to transit passenger usage.
</description>
<pubDate>Thu, 17 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157413</guid>
<dc:date>2024-10-17T00:00:00Z</dc:date>
</item>
<item>
<title>Observation of the Λb0 → J/ψΞ−K+ decay</title>
<link>https://hdl.handle.net/1721.1/157412</link>
<description>Observation of the Λb0 → J/ψΞ−K+ decay
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Valle, A. E. D.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; Templ, S.; Waltenberger, W.
Using proton–proton collision data corresponding to an integrated luminosity of 140 fb−1 collected by the CMS experiment at √s = 13 TeV, the Λb0 → J/ψΞ−K+ decay is observed for the first time, with a statistical significance exceeding 5 standard deviations. The relative branching fraction, with respect to the Λb0 → ψ(2S)Λ decay, is measured to be B(Λb0 → J/ψΞ−K+)/B(Λb0 → ψ(2S)Λ) = [3.38 ± 1.02 ± 0.61 ± 0.03]%, where the first uncertainty is statistical, the second is systematic, and the third is related to the uncertainties in B(ψ(2S) → J/ψπ+π−) and B(Ξ− → Λπ−).
</description>
<pubDate>Tue, 15 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157412</guid>
<dc:date>2024-10-15T00:00:00Z</dc:date>
</item>
<item>
<title>Physics-Informed Design of Hybrid Pulse Power Characterization Tests for Rechargeable Batteries</title>
<link>https://hdl.handle.net/1721.1/157411</link>
<description>Physics-Informed Design of Hybrid Pulse Power Characterization Tests for Rechargeable Batteries
Zhuang, Debbie; Li, Michael L; Lam, Vivek N; Braatz, Richard D; Chueh, William C; Bazant, Martin Z
Industry-standard diagnostic methods for rechargeable batteries, such as hybrid pulse power characterization (HPPC) tests for hybrid electric vehicles, provide some indications of state of health (SoH), but lack a physical basis to guide protocol design and identify degradation mechanisms. We develop a physics-based theoretical framework for HPPC tests, which are able to accurately determine specific mechanisms for battery degradation in porous electrode simulations. We show that voltage pulses are generally preferable to current pulses, since voltage-resolved linearization more rapidly quantifies degradation without sacrificing accuracy or allowing significant state changes during the measurement. In addition, asymmetric amounts of information gain between charge/discharge pulses are found from differences in electrode kinetic scales. We demonstrate our approach of physics-informed HPPC on simulated Li-ion batteries with nickel-rich cathodes and graphite anodes. Multivariable optimization by physics-informed HPPC rapidly determines kinetic parameters that correlate with degradation phenomena at the anode, such as solid-electrolyte interphase (SEI) growth and lithium plating, as well as at the cathode, such as oxidation-induced cation disorder. If validated experimentally, standardized voltage protocols for HPPC tests could play a pivotal role in expediting battery SoH assessment and accelerating materials design by providing new electrochemical features for interpretable machine learning of battery degradation.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157411</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Rapid Simulation of Electro-Chemo-Mechanical Deformation of Li-ion Batteries Based On Porous Electrode Theory</title>
<link>https://hdl.handle.net/1721.1/157410</link>
<description>Rapid Simulation of Electro-Chemo-Mechanical Deformation of Li-ion Batteries Based On Porous Electrode Theory
Ipers, Gerrit; Jiao, Junning; Pathak, Shakul; Fang, Ruqing; Berliner, Marc D; Li, Wei; Li, Weihan; Braatz, Richard D; Bazant, Martin Z; Zhu, Juner
Lithium-ion batteries change their geometric dimensions during cycling as a macroscopic result of a series of microscale mechanisms, including but not limited to diffusion-induced expansion/shrinkage, gas evolution, growth of solid-electrolyte interphase, and particle cracking. Predicting the nonlinear dimensional changes with mathematical models is critical to the lifetime prediction, health management, and non-destructive assessment of batteries. In this study, we present an approach to implement an elastoplasticity model for powder materials into the porous electrode theory (PET). By decomposing the overall deformation into elastic, plastic, and diffusion-induced portions and using the powder plasticity model to describe the plastic portion, the model can capture the reversible thickness change caused by Li-ion (de-)intercalation as well as the irreversible thickness change due to the rearrangement and consolidation of particles. For real-world applications of the model to predict battery health and safety, the key lies in solving the mathematical equations rapidly. Here, we implemented the coupled model into the open-source software PETLION for millisecond-scale simulation. The computational model is parameterized using values gathered from literature, tested under varying conditions, briefly compared to real-world observations, and qualitatively analyzed to find parameter-output relations.
</description>
<pubDate>Wed, 01 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157410</guid>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</item>
<item>
<title>Optimum Model-Based Design of Diagnostics Experiments (DOE) with Hybrid Pulse Power Characterization (HPPC) for Lithium-Ion Batteries</title>
<link>https://hdl.handle.net/1721.1/157409</link>
<description>Optimum Model-Based Design of Diagnostics Experiments (DOE) with Hybrid Pulse Power Characterization (HPPC) for Lithium-Ion Batteries
Rhyu, Jinwook; Zhuang, Debbie; Bazant, Martin Z; Braatz, Richard D
Diagnostics of lithium-ion batteries are frequently performed in battery management systems for optimized operation of lithium-ion batteries or for second-life usage. However, attempting to extract dominant degradation information requires long rest times between diagnostic pulses, which compete with the need for efficient diagnostics. Here, we design a set of efficient optimal hybrid pulse power characterization (HPPC) diagnostics using model-based design of experiment (DOE) methods, applying knowledge of degradation effects on pulse kinetics and cell properties. We validate that these protocols are effective, through minimization of uncertainty, and robust, through Markov Chain Monte Carlo (MCMC) simulations. Contrary to traditional HPPC diagnostics, which use fixed pulse magnitudes at uniformly distributed states of charge (SOC), we find that well-designed HPPC protocols using our framework outperform traditional protocols in terms of minimizing both parametric uncertainties and diagnostic time. Trade-offs between minimizing parametric uncertainty and total diagnostic time can be made based on different diagnostics needs.
</description>
<pubDate>Mon, 01 Jul 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157409</guid>
<dc:date>2024-07-01T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding electrostatic interaction on strong cation-exchanger via co-ion valency effects</title>
<link>https://hdl.handle.net/1721.1/157408</link>
<description>Understanding electrostatic interaction on strong cation-exchanger via co-ion valency effects
Essert, GM; de Souza, JP; Schwaminger, SP; Bazant, MZ; Berensmeier, S
Empirical adsorption models have been extensively used to design and optimize ion exchange chromatography (IEC) processes for proteins. The equations date back 40 years to qualitative findings about the electrical double layer (EDL) in ion exchangers and form the basis of the stoichiometric displacement (SD) model widely used in preparative chromatography. While the SD model reduces the experimental effort to find salt-eluting conditions for the separation, knowledge transfer is restricted from one system to another. However, this limitation can be overcome by understanding the physicochemical interaction mechanism between the solid adsorbent and the electrolyte. Via a theoretical and experimental approach, we investigated the physicochemical adsorption mechanism in IEC and developed a methodology to determine it quantitatively by measuring the effective EDL thickness. We performed negative adsorption experiments in high-performance liquid chromatography to measure the excluded volume of co-ions, citrate, or oxalate on strong cation exchange resin. Together with the physical specifications of the column and the deployment of a modified nonlinear Poisson-Boltzmann equation, we identified the effects of the electrolyte composition on the size of the EDL. While it depends on the concentration, valency, and size of the counterion, we derived that the expansion of the EDL is indicated by different valencies of the carboxylate co-ions in trace amounts. Our findings provide a self-consistent theory of the transport phenomena in a solid/fluid system with all parameters specified with the physical properties of the chromatographic process. Further, optimizing the resin design or improving the adsorption and desorption conditions for biomolecules may be facilitated. Altogether, our work may improve material design and process development and, thereby, help to overcome the current technological and economic bottlenecks of the well-deployed purification step of IEC.
</description>
<pubDate>Thu, 01 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157408</guid>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</item>
<item>
<title>Fast Charging of Lithium-Ion Batteries While Accounting for Degradation and Cell-to-Cell Variability</title>
<link>https://hdl.handle.net/1721.1/157407</link>
<description>Fast Charging of Lithium-Ion Batteries While Accounting for Degradation and Cell-to-Cell Variability
Kim, Minsu; Schaeffer, Joachim; Berliner, Marc D; Pedret Sagnier, Berta; Bazant, Martin Z; Findeisen, Rolf; Braatz, Richard D
Safety and maintaining high performance are key considerations during the operation of lithium-ion batteries. Battery degradation, in particular lithium plating and loss of active material, is often accelerated by fast charging. This study explores a strategy for the design of fast charging protocols that takes into account the influence of the variability between battery cells on factors that can impact degradation. We employ a non-intrusive polynomial chaos expansion to identify the key parameters for each degradation condition. We explore the reduction of battery degradation by adjusting constraints such as the maximum C-rate and voltage. Tight control of the key adjustable parameters contributes significantly to reducing the confidence interval of the degradation factors, allowing reduced charging time with minimal degradation. The application of our approach to two state-dependent fast charging protocols for a LiC6/LiCoO2 battery indicates the value in explicitly accounting for uncertainties when designing charging protocols that minimize degradation.
</description>
<pubDate>Mon, 02 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157407</guid>
<dc:date>2024-09-02T00:00:00Z</dc:date>
</item>
<item>
<title>Demonstration of neutron identification in neutrino interactions in the MicroBooNE liquid argon time projection chamber</title>
<link>https://hdl.handle.net/1721.1/157406</link>
<description>Demonstration of neutron identification in neutrino interactions in the MicroBooNE liquid argon time projection chamber
Abratenko, P.; Alterkait, O.; Aldana, D. A.; Arellano, L.; Asaadi, J.; Ashkenazi, A.; Balasubramanian, S.; Baller, B.; Barnard, A.; Barr, G.; Barrow, D.; Barrow, J.; Basque, V.; Bateman, J.; Rodrigues, O. B.; Berkman, S.; Bhanderi, A.; Bhat, A.; Bhattacharya, M.; Bishai, M.
A significant challenge in measurements of neutrino oscillations is reconstructing the incoming neutrino energies. While modern fully-active tracking calorimeters such as liquid argon time projection chambers in principle allow the measurement of all final state particles above some detection threshold, undetected neutrons remain a considerable source of missing energy with little to no data constraining their production rates and kinematics. We present the first demonstration of tagging neutrino-induced neutrons in liquid argon time projection chambers using secondary protons emitted from neutron-argon interactions in the MicroBooNE detector. We describe the method developed to identify neutrino-induced neutrons and demonstrate its performance using neutrons produced in muon-neutrino charged current interactions. The method is validated using a small subset of MicroBooNE’s total dataset. The selection yields a sample with 60% of selected tracks corresponding to neutron-induced secondary protons. At this purity, the integrated efficiency is 8.4% for neutrons that produce a detectable proton.
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157406</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>Sums of GUE matrices and concentration of hives from correlation decay of eigengaps</title>
<link>https://hdl.handle.net/1721.1/157405</link>
<description>Sums of GUE matrices and concentration of hives from correlation decay of eigengaps
Narayanan, Hariharan; Sheffield, Scott; Tao, Terence
Associated to two given sequences of eigenvalues λ₁ ≥ ⋯ ≥ λₙ and μ₁ ≥ ⋯ ≥ μₙ is a natural polytope, the polytope of augmented hives with the specified boundary data, which is associated to sums of random Hermitian matrices with these eigenvalues. As a first step towards the asymptotic analysis of random hives, we show that if the eigenvalues are drawn from the GUE ensemble, then the associated augmented hives exhibit concentration as n → ∞. Our main ingredients include a representation due to Speyer of augmented hives involving a supremum of linear functions applied to a product of Gelfand–Tsetlin polytopes; known results by Klartag on the KLS conjecture in order to handle the aforementioned supremum; covariance bounds of Cipolloni–Erdős–Schröder of eigenvalue gaps of GUE; and the use of the theory of determinantal processes to analyze the GUE minor process.
</description>
<pubDate>Thu, 28 Dec 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157405</guid>
<dc:date>2023-12-28T00:00:00Z</dc:date>
</item>
<item>
<title>Fabrication Strategies for 2D Halide Perovskite Towards Next-Generation Optoelectronic Applications</title>
<link>https://hdl.handle.net/1721.1/157404</link>
<description>Fabrication Strategies for 2D Halide Perovskite Towards Next-Generation Optoelectronic Applications
Cho, Seong H.; Jung, Yonghoon; Jang, Yeoun-Woo; Kim, Hyemin; Kim, Jaehyeon; Lim, Changhyun; Park, Ki-Tae; Kim, Seongheon; Chu, Young H.; Kim, Taehoon; Lee, Jieun; Lee, Changhee; Park, Junhyoung; Yoon, Kyung T.; Eom, Dongguen
Halide perovskites have emerged as promising materials in high-performance optoelectronics due to their exceptional optoelectrical properties, such as long carrier lifetime and tunable bandgap. Despite the promising capabilities of three-dimensional (3D) halide perovskites in applications like solar cells and light-emitting diodes, their operational stability remains a critical challenge. This review focuses on quasi-two-dimensional (2D) halide perovskites, which offer enhanced stability through their reduced dimensionality. We discuss the unique properties of these materials, including the ability to modify optical and electronic characteristics by altering the organic cations and the layer number in the perovskite structure. Additionally, we review various fabrication techniques, highlighting the shift from traditional low-temperature solution processes to more advanced solid, liquid, and vapor-phase methods, which address the limitations of conventional fabrication and enhance material quality. This comprehensive review aims to provide insights into the development of stable and efficient 2D halide perovskite-based optoelectronic devices, paving the way for their integration into next-generation optoelectronic applications.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157404</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Understanding and training for the impact of large language models and artificial intelligence in healthcare practice: a narrative review</title>
<link>https://hdl.handle.net/1721.1/157403</link>
<description>Understanding and training for the impact of large language models and artificial intelligence in healthcare practice: a narrative review
McCoy, Liam G.; Ci Ng, Faye Y.; Sauer, Christopher M.; Yap Legaspi, Katelyn E.; Jain, Bhav; Gallifant, Jack; McClurkin, Michael; Hammond, Alessandro; Goode, Deirdre; Gichoya, Judy; Celi, Leo A.
Reports of Large Language Models (LLMs) passing board examinations have spurred medical enthusiasm for their clinical integration. Through a narrative review, we reflect upon the skill shifts necessary for clinicians to succeed in an LLM-enabled world, achieving benefits while minimizing risks. We suggest how medical education must evolve to prepare clinicians capable of navigating human-AI systems.
</description>
<pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157403</guid>
<dc:date>2024-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>On the spectrum and support theory of a finite tensor category</title>
<link>https://hdl.handle.net/1721.1/157402</link>
<description>On the spectrum and support theory of a finite tensor category
Nakano, Daniel K.; Vashaw, Kent B.; Yakimov, Milen T.
Finite tensor categories (FTCs) are important generalizations of the categories of finite dimensional modules of finite dimensional Hopf algebras, which play a key role in many areas of mathematics and mathematical physics. There are two fundamentally different support theories for them: a cohomological one and a universal one based on the noncommutative Balmer spectra of their stable (triangulated) categories. In this paper we introduce the key notion of the categorical center of the cohomology ring of an FTC. This enables us to put forward a complete and detailed program to investigate the relationship between the two support theories, based on this categorical center. Our main result is the construction of a continuous map from the noncommutative Balmer spectrum of an arbitrary FTC to the spectrum of the categorical center, and a theorem that this map is surjective under a weaker finite generation assumption than the one conjectured by Etingof–Ostrik. We conjecture that, for all FTCs, (i) the map is a homeomorphism and (ii) the two-sided thick ideals of the stable category are classified by the specialization closed subsets of this spectrum. We verify parts of the conjecture under stronger assumptions on the category. Many examples are presented that demonstrate how in important cases the categorical center arises as a fixed point subring of the cohomology ring and how the two-sided thick ideals are determined in a uniform fashion (while previous methods dealt on a case-by-case basis with case-specific methods). The majority of our results are proved in the greater generality of monoidal triangulated categories, and versions of them for Tate cohomology are also presented.
</description>
<pubDate>Fri, 24 Nov 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157402</guid>
<dc:date>2023-11-24T00:00:00Z</dc:date>
</item>
<item>
<title>Search for Higgs boson pair production with one associated vector boson in proton-proton collisions at √s = 13 TeV</title>
<link>https://hdl.handle.net/1721.1/157401</link>
<description>Search for Higgs boson pair production with one associated vector boson in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
A search for Higgs boson pair (HH) production in association with a vector boson V (W or Z boson) is presented. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV, collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 138 fb−1. Both hadronic and leptonic decays of the V bosons are used. The leptons considered are electrons, muons, and neutrinos. The HH production is searched for in the bb̄bb̄ decay channel. An observed (expected) upper limit at 95% confidence level on the VHH production cross section is set at 294 (124) times the standard model prediction. Constraints are also set on the modifier of the Higgs boson trilinear self-coupling, kλ, assuming k2V = 1, and vice versa on the modifier of the coupling of two Higgs bosons with two vector bosons, k2V. The observed (expected) 95% confidence intervals of these coupling modifiers are −37.7 &lt; kλ &lt; 37.2 (−30.1 &lt; kλ &lt; 28.9) and −12.2 &lt; k2V &lt; 13.5 (−7.2 &lt; k2V &lt; 8.9), respectively.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157401</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Black hole singularity from OPE</title>
<link>https://hdl.handle.net/1721.1/157400</link>
<description>Black hole singularity from OPE
Čeplak, Nejc; Liu, Hong; Parnachev, Andrei; Valach, Samuel
Eternal asymptotically AdS black holes are dual to thermofield double states in the boundary CFT. It has long been known that black hole singularities have certain signatures in boundary thermal two-point functions related to null geodesics bouncing off the singularities (bouncing geodesics). In this paper we shed light on the manifestations of black hole singularities in the dual CFT. We decompose the boundary CFT correlator of scalar operators using the Operator Product Expansion (OPE) and focus on the contributions from the identity, the stress tensor, and its products. We show that this part of the correlator develops singularities precisely at the points that are connected by bulk bouncing geodesics. Black hole singularities are thus encoded in the analytic behavior of the boundary correlators determined by multiple stress tensor exchanges. Furthermore, we show that in the limit where the conformal dimension of the operators is large, the sum of multi-stress-tensor contributions develops a branch point singularity as predicted by the geodesic analysis. We also argue that the appearance of complexified geodesics, which play an important role in computing the full correlator, is related to the contributions of the double-trace operators in the boundary CFT.
</description>
<pubDate>Fri, 11 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157400</guid>
<dc:date>2024-10-11T00:00:00Z</dc:date>
</item>
<item>
<title>Another Myth of Persistence?</title>
<link>https://hdl.handle.net/1721.1/157399</link>
<description>Another Myth of Persistence?
Byrne, Alex
Gender dysphoria is “the aversion to some or all of those physical characteristics or social roles that connote one’s own biological sex” (Schneider et al., 2009, p. 28). The onset of gender dysphoria may be in early childhood or “around puberty or even much later in life” (American Psychiatric Association, 2022, p. 517). This Letter concerns childhood-onset gender dysphoria, not gender dysphoria that first manifests in adolescence or adulthood (Zucker et al., 2016). The reported new presentation of “rapid-onset gender dysphoria” (Diaz &amp; Bailey, 2023; Littman, 2018), mostly affecting adolescent natal females, is also not relevant.
</description>
<pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157399</guid>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</item>
<item>
<title>Dynamic event-triggered integrated task and motion planning for process-aware source seeking</title>
<link>https://hdl.handle.net/1721.1/157398</link>
<description>Dynamic event-triggered integrated task and motion planning for process-aware source seeking
Li, Yingke; Hou, Mengxue; Zhou, Enlu; Zhang, Fumin
The process-aware source seeking (PASS) problem in flow fields aims to find an informative trajectory to reach an unknown source location while taking energy consumption in the flow field into consideration. Taking advantage of the dynamic flow-field partition technique, this paper formulates the problem as a task and motion planning (TAMP) problem and proposes a bi-level hierarchical planning framework that decouples the planning of inter-region transitions and inner-region trajectories by introducing inter-region junctions. An integrated strategy is developed to enable efficient upper-level planning by investigating the optimal solution of the lower-level planner. To balance information acquisition against computational burden, a dynamic event-triggered mechanism is introduced to enable asynchronous estimation, region partitioning, and re-planning. The proposed algorithm provides guaranteed convergence of the trajectory and achieves automatic trade-offs between exploration and exploitation as well as between accuracy and efficiency. Simulations in a highly complicated and realistic ocean surface flow field validate the merits of the proposed algorithm, demonstrating a significant reduction in computational burden without compromising planning optimality.
</description>
<pubDate>Sat, 12 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157398</guid>
<dc:date>2024-10-12T00:00:00Z</dc:date>
</item>
<item>
<title>Urban street clusters: unraveling the associations of street characteristics on urban vibrancy dynamics in age, time, and day</title>
<link>https://hdl.handle.net/1721.1/157397</link>
<description>Urban street clusters: unraveling the associations of street characteristics on urban vibrancy dynamics in age, time, and day
Jang, Kee M.; Suh, Hanew; Haddad, Fadi G.; Sun, Maoran; Duarte, Fábio; Kim, Youngchul
Understanding urban vibrancy has been considered crucial to promoting human activities and interactions in public open spaces. Recent advancements in urban big data have facilitated the potential to understand and measure vibrancy patterns throughout cities. While streets are considered the center stage of human activity, previous studies have often overlooked their multifaceted nature and their association with urban vibrancy. In this study, we incorporate multi-source big data and combine a set of features that comprehensively describe the scale, function, and topology of street segments in two Seoul districts: Jung-gu and Gangnam-gu. Using these features, we employ a machine learning clustering technique to classify the segments into five distinct typologies. Then, with street-level aggregated mobile phone tracking data, we investigate whether street typology characteristics are associated with urban vibrancy with respect to age groups, time of day, and day types (weekends/weekdays). The results show varying relationships between street characteristics and age-, time-, and day-vibrancy measures across the identified street typologies. Further, we contrast the results of the two districts to evaluate urban vibrancy differences in organic and planned urban layouts. This study enables a more nuanced understanding of urban streets to better comprehend their impact on people’s use of street space. The derived novel insights could assist planners and designers in pinpointing street management solutions for different age- and time-dependent needs based on the complexities of urban vibrancy dynamics.
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157397</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Engineering synthetic and recombinant human lysosomal β-glucocerebrosidase for enzyme replacement therapy for Gaucher disease</title>
<link>https://hdl.handle.net/1721.1/157396</link>
<description>Engineering synthetic and recombinant human lysosomal β-glucocerebrosidase for enzyme replacement therapy for Gaucher disease
Figueiredo, Lílian L. S.; Junior, Wilson L.; da Silva Goncalves, Victor W.; Ramos, Ester S.; D’Almeida, Vania; de Souza, Lucas E. B.; Orellana, Maristela D.; Abraham, Kuruvilla J.; Lichtenstein, Flávio; Bleicher, Lucas
Gaucher Disease (GD) is an autosomal recessive, lysosomal storage disease caused by pathogenic variants in the glucocerebrosidase gene, leading to the loss of β-glucocerebrosidase (GCase) enzymatic activity. Enzyme replacement therapy (ERT) with recombinant GCase is the standard of care in GD patients. Our study investigates the combined use of in silico molecular evolution, synthetic biology and gene therapy approaches to develop a new synthetic recombinant enzyme. We engineered four GCases containing missense mutations in the signal peptide (SP) from four selected mammalian species, and compared them with human GCase without missense mutations in the SP. We investigated transcriptional regulation with CMV and hEF1a promoters alongside a GFP control construct in 293-FT human cells. One hEF1a-driven mutant GCase shows a 5.2-fold higher level of transcription than control GCase. In addition, this mutant exhibits up to a sixfold higher activity compared with the mock-control, and the predicted tertiary structure of this mutant GCase aligns with human GCase. We also evaluated conserved and coevolved residues mapped to functionally important positions. Further studies are needed to assess its functionality in a GD animal model. Altogether, our findings provide in vitro evidence of the potential of this engineered enzyme for improved therapeutic effects for GD.
</description>
<pubDate>Fri, 04 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157396</guid>
<dc:date>2024-10-04T00:00:00Z</dc:date>
</item>
<item>
<title>High-speed grinding: from mechanism to machine tool</title>
<link>https://hdl.handle.net/1721.1/157395</link>
<description>High-speed grinding: from mechanism to machine tool
Wang, Yu-Long; Zhang, Yan-Bin; Cui, Xin; Liang, Xiao-Liang; Li, Run-Ze; Wang, Ruo-Xin; Sharma, Shubham; Liu, Ming-Zheng; Gao, Teng; Zhou, Zong-Ming; Wang, Xiao-Ming; Dambatta, Yusuf S.; Li, Chang-He
High-speed grinding (HSG) is an advanced technology for precision machining of difficult-to-cut materials in aerospace and other fields; by increasing the linear speed of the grinding wheel, it can mitigate surface burns and defects and improve surface integrity. The advantages of HSG have been preliminarily confirmed, and equipment built for experimental research can achieve grinding speeds of more than 300 m/s. However, HSG is not yet widely used in manufacturing due to insufficient understanding of the material removal mechanism and the characteristics of HSG machine tools. To fill this gap, this paper provides a comprehensive overview of HSG technologies, and a new direction for adding auxiliary processes in HSG is proposed. Firstly, the combined influence of strain hardening, strain-rate intensification, and thermal softening effects on the material removal mechanism is revealed, and models of material removal strain rate, grinding force, and grinding temperature are summarized. Secondly, constitutive models under high-strain-rate boundaries are summarized by considering various material properties and grinding parameters. Thirdly, the change in the material removal mechanism of HSG under varying thermodynamic boundary conditions is revealed by introducing lubrication conditions such as minimum quantity lubrication (MQL), nano-lubricant minimum quantity lubrication (NMQL), and cryogenic air (CA). Finally, the mechanical and dynamic characteristics of the key components of the HSG machine tool are summarized, including the main body, grinding wheel, spindle, and dynamic balance system. Based on this review, prospects for HSG are put forward. This study establishes a solid foundation for future developments in the field and points to promising directions for further exploration.
</description>
<pubDate>Sat, 05 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157395</guid>
<dc:date>2024-10-05T00:00:00Z</dc:date>
</item>
<item>
<title>Global optimization: a machine learning approach</title>
<link>https://hdl.handle.net/1721.1/157394</link>
<description>Global optimization: a machine learning approach
Bertsimas, Dimitris; Margaritis, Georgios
Many approaches for addressing global optimization problems typically rely on relaxations of nonlinear constraints over specific mathematical primitives. This is restrictive in applications with constraints that are implicit or consist of more general primitives. To address such limitations, Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving very general global optimization problems by approximating the nonlinear constraints using hyperplane-based decision trees and then using those trees to construct a unified MIO approximation of the original problem. We provide extensions to this approach by (i) approximating the original problem using other MIO-representable ML models besides decision trees, such as gradient boosted trees, multi-layer perceptrons, and support vector machines, (ii) proposing adaptive sampling procedures for more accurate ML-based constraint approximations, (iii) utilizing robust optimization to account for the uncertainty of the sample-dependent training of the ML models, and (iv) leveraging a family of relaxations to address the infeasibilities of the final MIO approximation. We then test the enhanced framework on 81 global optimization instances, showing improvements in solution feasibility and optimality in the majority of instances. We also compare against BARON, showing improved optimality gaps and solution times in more than 9 instances.
</description>
<pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157394</guid>
<dc:date>2024-10-07T00:00:00Z</dc:date>
</item>
<item>
<title>The Mapping Imaging Spectrometer for Europa (MISE)</title>
<link>https://hdl.handle.net/1721.1/157393</link>
<description>The Mapping Imaging Spectrometer for Europa (MISE)
Blaney, Diana L.; Hibbitts, Karl; Diniega, Serina; Davies, Ashley G.; Clark, Roger N.; Green, Robert O.; Hedman, Matthew; Langevin, Yves; Lunine, Jonathan; McCord, Thomas B.; Murchie, Scott; Paranicas, Chris; Seelos, Frank; Soderblom, Jason M.; Cable, Morgan L.
The Mapping Imaging Spectrometer for Europa (MISE) is an infrared compositional instrument that will fly on NASA’s Europa Clipper mission to the Jupiter system. MISE is designed to meet the Level-1 science requirements related to the mission’s composition science objective to “understand the habitability of Europa’s ocean through composition and chemistry” and to contribute to the geology science and ice shell and ocean objectives, thereby helping Europa Clipper achieve its mission goal to “explore Europa to investigate its habitability.” MISE has a mass of 65 kg and uses an energy per flyby of 75.2 W-h. MISE will detect illumination from 0.8 to 5 μm with 10 nm spectral resolution, a spatial sampling of 25 m per pixel at 100 km altitude, and 300 cross-track pixels, enabling discrimination between the two principal states of water ice on Europa, identification of the main non-ice components of interest (salts, acids, and organics), and detection of trace materials as well as some thermal signatures. Furthermore, the spatial resolution and global coverage that MISE will achieve will be complemented by the higher spectral resolution of some Earth-based assets. MISE, combined with observations collected by the rest of the Europa Clipper payload, will enable significant advances in our understanding of how the large-scale structure of Europa’s surface is shaped by geological processes and inform our understanding of the surface at the microscale. This paper describes the planned MISE science investigations, instrument design, concept of operations, and data products.
</description>
<pubDate>Wed, 09 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157393</guid>
<dc:date>2024-10-09T00:00:00Z</dc:date>
</item>
<item>
<title>Real-time chiral dynamics at finite temperature from quantum simulation</title>
<link>https://hdl.handle.net/1721.1/157392</link>
<description>Real-time chiral dynamics at finite temperature from quantum simulation
Ikeda, Kazuki; Kang, Zhong-Bo; Kharzeev, Dmitri E.; Qian, Wenyang; Zhao, Fanyi
In this study, we explore the real-time dynamics of the chiral magnetic effect (CME) at a finite temperature in the (1+1)-dimensional QED, the massive Schwinger model. By introducing a chiral chemical potential μ5 through a quench process, we drive the system out of equilibrium and analyze the induced vector currents and their evolution over time. The Hamiltonian is modified to include the time-dependent chiral chemical potential, thus allowing the investigation of the CME within a quantum computing framework. We employ the quantum imaginary time evolution (QITE) algorithm to study the thermal states, and utilize the Suzuki-Trotter decomposition for the real-time evolution. This study provides insights into the quantum simulation capabilities for modeling the CME and offers a pathway for studying chiral dynamics in low-dimensional quantum field theories.
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157392</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>Search for a resonance decaying to a W boson and a photon in proton-proton collisions at  √s = 13 TeV using leptonic W boson decays</title>
<link>https://hdl.handle.net/1721.1/157391</link>
<description>Search for a resonance decaying to a W boson and a photon in proton-proton collisions at  √s = 13 TeV using leptonic W boson decays
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
A search for a new charged particle X with mass between 0.3 and 2.0 TeV decaying to a W boson and a photon is presented, using proton-proton collision data at a center-of-mass energy of 13 TeV, collected by the CMS experiment and corresponding to an integrated luminosity of 138 fb−1. Particle X has electric charge ±1 and is assumed to have spin 0. The search is performed using the electron and muon decays of the W boson. No significant excess above the predicted background is observed. The upper limit at 95% confidence level on the product of the production cross section of the X and its branching fraction to a W boson and a photon is found to be 94 (137) fb for a 0.3 TeV resonance and 0.75 (0.81) fb for a 2.0 TeV resonance, for an X width-to-mass ratio of 0.01% (5%). This search presents the most stringent constraints to date on the existence of such resonances across the probed mass range. A statistical combination with an earlier study based on the hadronic decay mode of the W boson is also performed, and the upper limit at 95% confidence level for a 2.0 TeV resonance is reduced to 0.50 (0.63) fb for an X width-to-mass ratio of 0.01% (5%).
</description>
<pubDate>Wed, 25 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157391</guid>
<dc:date>2024-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>The topology of shapes made with points</title>
<link>https://hdl.handle.net/1721.1/157390</link>
<description>The topology of shapes made with points
Haridis, Alexandros
In architecture, city planning, visual arts, and other design areas, shapes are often made with points, or with structural representations based on point-sets. Shapes made with points can be understood more generally as finite arrangements formed with elements (i.e. points) of the algebra of shapes Ui, for i = 0. This paper examines the kind of topology that is applicable to such shapes. From a mathematical standpoint, any “shape made with points” is equivalent to a finite space, so that topology on a shape made with points is no different than topology on a finite space: the study of topological structure naturally coincides with the study of preorder relations on the points of the shape. After establishing this fact, some connections between the topology of shapes made with points and the topology of “point-free” pictorial shapes (when i &gt; 0) are defined and the main differences between the two are summarized.
</description>
<pubDate>Mon, 11 Feb 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157390</guid>
<dc:date>2019-02-11T00:00:00Z</dc:date>
</item>
<item>
<title>SHREC’21: Quantifying shape complexity</title>
<link>https://hdl.handle.net/1721.1/157389</link>
<description>SHREC’21: Quantifying shape complexity
Arslan, Mazlum Ferhat; Haridis, Alexandros; Rosin, Paul L.; Tari, Sibel; Brassey, Charlotte; Gardiner, James D.; Genctav, Asli; Genctav, Murat
This paper presents the results of SHREC’21 track: Quantifying Shape Complexity. Our goal is to investigate how good the submitted shape complexity measures are (i.e. with respect to ground truth) and investigate the relationships between these complexity measures (i.e. with respect to correlations). The dataset consists of three collections: 1800 perturbed cube and sphere models classified into 4 categories, 50 shapes inspired from the fields of architecture and design classified into 2 categories, and the data from the Princeton Segmentation Benchmark, which consists of 19 natural object categories. We evaluate the performances of the methods by computing Kendall rank correlation coefficients both between the orders produced by each complexity measure and the ground truth and between the pair of orders produced by each pair of complexity measures. Our work, being a quantitative and reproducible analysis with justified ground truths, presents an improved means and methodology for the evaluation of shape complexity.
</description>
<pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157389</guid>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</item>
<item>
<title>Defining Native American</title>
<link>https://hdl.handle.net/1721.1/157388</link>
<description>Defining Native American
Schiappa, Edward
This paper explores the question of who is defined as a Native American within the jurisdictions of the United States. Determining individual status can be seen as a two-step process: Is a given individual recognized by a specific tribe as a member? Then, is that specific tribe acknowledged by a relevant governmental unit? Though both seem simple questions, this paper illustrates that the question “Is Person X a Native American?” sometimes can be quite fraught, and manifests what I have described previously as definitional gaps and definitional ruptures. Ultimately, as is typical of regulatory definitions, the choice of definitional criteria to apply is a question of values, interests, and politics. I begin with a description of the varied definitional frameworks at work in determinations of whether a given group of people constitute a recognized tribe, then note how tribes themselves are institutions empowered to define who does or does not count as members through practices of enrollment and disenrollment. I then describe three case studies of definitional phenomena—one as a case of a definitional gap (college professors described as “Pretendians”), the second as a case of definitional rupture (determining Native American eligibility for free tuition within the University of California system), and a third as an illustration of regulatory versus self-definition (U.S. Census practices).
</description>
<pubDate>Thu, 03 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157388</guid>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</item>
<item>
<title>New horizon symmetries, hydrodynamics, and quantum chaos</title>
<link>https://hdl.handle.net/1721.1/157387</link>
<description>New horizon symmetries, hydrodynamics, and quantum chaos
Knysh, Maria; Liu, Hong; Pinzani-Fokeeva, Natalia
We generalize the formulation of horizon symmetries presented in previous literature to include diffeomorphisms that can shift the location of the horizon. In the context of the AdS/CFT duality, we show that horizon symmetries can be interpreted on the boundary as emergent low-energy gauge symmetries. In particular, we identify a new class of horizon symmetries that extend the so-called shift symmetry, which was previously postulated for effective field theories of maximally chaotic systems. Additionally, we comment on the connections of horizon symmetries with bulk calculations of out-of-time-ordered correlation functions and the phenomenon of pole-skipping.
</description>
<pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157387</guid>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</item>
<item>
<title>The transverse energy-energy correlator at next-to-next-to-next-to-leading logarithm</title>
<link>https://hdl.handle.net/1721.1/157386</link>
<description>The transverse energy-energy correlator at next-to-next-to-next-to-leading logarithm
Gao, Anjie; Li, Hai T.; Moult, Ian; Zhu, Hua X.
We present an operator-based factorization formula for the transverse energy-energy correlator in the back-to-back (dijet) region, and uncover its remarkable perturbative simplicity and relation to transverse momentum dynamics. This simplicity enables us to achieve next-to-next-to-next-to-leading logarithmic (N3LL) accuracy for a hadron collider dijet event shape for the first time. Our factorization formula applies to W/Z/γ + jet and dijet production, providing a natural generalization of transverse momentum observables to one- and two-jet final states. This provides a laboratory for precision studies of QCD and transverse momentum dynamics at hadron colliders, as well as an opportunity for understanding factorization and its violation in a perturbatively well-controlled setting.
</description>
<pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157386</guid>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</item>
<item>
<title>Transverse polarization measurement of Λ hyperons in pNe collisions at  √sNN = 68.4 GeV with the LHCb detector</title>
<link>https://hdl.handle.net/1721.1/157385</link>
<description>Transverse polarization measurement of Λ hyperons in pNe collisions at  √sNN = 68.4 GeV with the LHCb detector
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Alessio, F.; Alexander, M.
A measurement of the transverse polarization of the Λ and Λ̄ hyperons in pNe fixed-target collisions at √sNN = 68.4 GeV is presented, using data collected by the LHCb detector. The polarization is studied using the decay Λ → pπ− together with its charge-conjugated process; the integrated values measured are PΛ = 0.029 ± 0.019 (stat) ± 0.012 (syst) and PΛ̄ = 0.003 ± 0.023 (stat) ± 0.014 (syst). Furthermore, the results are shown as a function of the Feynman-x variable, transverse momentum, pseudorapidity, and rapidity of the hyperons, and are compared with previous measurements.
</description>
<pubDate>Tue, 17 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157385</guid>
<dc:date>2024-09-17T00:00:00Z</dc:date>
</item>
<item>
<title>Data activism against feminicide Co-designing digital tools to monitor gender-related violence across the Americas</title>
<link>https://hdl.handle.net/1721.1/157384</link>
<description>Data activism against feminicide Co-designing digital tools to monitor gender-related violence across the Americas
Cruxên, Isadora; Jungs de Almeida, Alessandra; D’Ignazio, Catherine
Gender-related violence happens inside the home and outside in the public spaces of cities, towns, villages and rural areas around the world. It is part of a pattern of violence against women driven by misogynistic, racist and transphobic attitudes in society, discrimination and unequal power. As such, some people are disproportionately targeted with violence and neglected in efforts to address it, including women from racialised, trans, indigenous and low-income groups. While the crimes are committed by individuals, the problem is structural, enabled by state inaction and media coverage that downplays the issue or blames victims. By addressing these structural causes, we can help stop the violence and create a safer world for all women and girls.
</description>
<pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157384</guid>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>Mesoscale Brain Mapping: Bridging Scales and Modalities in Neuroimaging – A Symposium Review</title>
<link>https://hdl.handle.net/1721.1/157383</link>
<description>Mesoscale Brain Mapping: Bridging Scales and Modalities in Neuroimaging – A Symposium Review
Marchant, Joshua K.; Ferris, Natalie G.; Grass, Diana; Allen, Magdelena S.; Gopalakrishnan, Vivek; Olchanyi, Mark; Sehgal, Devang; Sheft, Maxina; Strom, Amelia; Bilgic, Berkin; Edlow, Brian; Hillman, Elizabeth M. C.; Juttukonda, Meher R.; Lewis, Laura
Advances in the spatiotemporal resolution and field-of-view of neuroimaging tools are driving mesoscale studies for translational neuroscience. On October 10, 2023, the Center for Mesoscale Mapping (CMM) at the Massachusetts General Hospital (MGH) Athinoula A. Martinos Center for Biomedical Imaging and the Massachusetts Institute of Technology (MIT) Health Sciences Technology based Neuroimaging Training Program (NTP) hosted a symposium exploring the state-of-the-art in this rapidly growing area of research. “Mesoscale Brain Mapping: Bridging Scales and Modalities in Neuroimaging” brought together researchers who use a broad range of imaging techniques to study brain structure and function at the convergence of the microscopic and macroscopic scales. The day-long event centered on areas in which the CMM has established expertise, including the development of emerging technologies and their application to clinical translational needs and basic neuroscience questions. The in-person symposium welcomed more than 150 attendees, including 57 faculty members, 61 postdoctoral fellows, 35 students, and four industry professionals, who represented institutions at the local, regional, and international levels. The symposium also served the training goals of both the CMM and the NTP. The event content, organization, and format were planned collaboratively by the faculty and trainees. Many CMM faculty presented or participated in a panel discussion, thus contributing to the dissemination of both the technologies they have developed under the auspices of the CMM and the findings they have obtained using those technologies. NTP trainees who benefited from the symposium included those who helped to organize the symposium and/or presented posters and gave “flash” oral presentations. 
In addition to gaining experience from presenting their work, they had opportunities throughout the day to engage in one-on-one discussions with visiting scientists and other faculty, potentially opening the door to future collaborations. The symposium presentations provided a deep exploration of the many technological advances enabling progress in structural and functional mesoscale brain imaging. Finally, students worked closely with the presenting faculty to develop this report summarizing the content of the symposium and putting it in the broader context of the current state of the field to share with the scientific community. We note that the references cited here include conference abstracts corresponding to the symposium poster presentations.
</description>
<pubDate>Mon, 23 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157383</guid>
<dc:date>2024-09-23T00:00:00Z</dc:date>
</item>
<item>
<title>A Mixed-Methods Research Design to Advance Inclusive and Equitable Teaching</title>
<link>https://hdl.handle.net/1721.1/157382</link>
<description>A Mixed-Methods Research Design to Advance Inclusive and Equitable Teaching
Soicher, Raechel N.; Baker, Amanda R.; Thomas, Ruthann C.
We designed this project to advance inclusive and equitable teaching by leveraging data to motivate, inform, and tailor teaching development initiatives to the varied needs and resources of academic departments. We developed an innovative framework and mixed methods research design to systematically assess inclusive and equitable teaching at the student, course, department, and institution levels. In the context of a decentralized institution, we partnered with academic departments to collect data about their current practices, existing resources, and needs for advancing inclusive and equitable teaching through a student survey, analysis of course syllabi, and interviews with instructors. We shared and discussed results with partners in academic departments to support and inform departmental change initiatives. We highlight how synthesizing findings across multiple levels of analysis using a mixed methods design provides a new perspective on the perennial issue of the uptake of inclusive and equitable teaching practices in higher education. We discuss lessons learned and future directions with the hope that the framework and/or the research methodology can be a template for other researchers or educational developers to support implementation and sustainability of inclusive and equitable teaching practices.
</description>
<pubDate>Thu, 26 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157382</guid>
<dc:date>2024-09-26T00:00:00Z</dc:date>
</item>
<item>
<title>Missing Data</title>
<link>https://hdl.handle.net/1721.1/157381</link>
<description>Missing Data
Jungs de Almeida, Alessandra; Klein, Lauren; D’Ignazio, Catherine
There are several different definitions of missing data. While some might refer to data that is literally absent, as in statistical approaches to missing data that attempt to interpolate what might fill in the gaps [1], others, such as the artist and educator Mimi Ọnụọha, take “missing data” to mean something more political: “something [that] does not exist, but it should” [2]. In the same line as Ọnụọha, our definition of missing data refers to information that goes uncounted (or otherwise unrecorded), despite social and political demands that such data should be collected and made available. Our concept of missing data may include entirely absent data, as well as data that is sparse, neglected, poorly collected and maintained, purposely removed, difficult to access, infrequently updated, contested, and/or underreported.
Missing data, in the expanded definition we propose in this essay, is a political concept. On one hand, missing data can function as a challenge from civil society to formal institutions, including governments, religious institutions, and corporations. In these cases, it represents a demand from specific communities about public issues that concern society writ large. On the other hand, missing data may be actively desired and produced by marginalized groups seeking to protect information about their community and culture from the eyes of institutions. In these cases, the data is “missing” for institutions, which make a demand for information that is actively protected by and kept within a community. In this sense, missing data is also a relational concept because it implies a directionality: an informatic demand from one group or institution to another group or institution. Missing data is not always a bad thing, nor always a good thing. Instead of thinking of it normatively, the locus of analysis should be on the social context, who is making the demand to whom, and the political context in which specific information is deemed to be missing. Our definition differs from other more technical notions of missing data that may not consider or highlight the unbalanced power relationships between different social actors, such as marginalized communities and the state. In this sense, the definition of missing data proposed here explicitly includes a political demand, because the group making the demand for information is trying to charge another group or institution with the responsibility for the absence of this data. When this relates to marginalized groups making demands on the state, groups are also trying to assert the institutional neglect of the group or issue represented by the data. Given the focus on the datafied state, this article will focus particularly on missing data related to governments, where civil society groups demand that the government collect specific data or where the government demands data that communities seek to protect.
</description>
<pubDate>Wed, 24 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157381</guid>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</item>
<item>
<title>Design and Performance Comparison of Multi-Frequency Inductors for Megahertz Wireless Power Transfer</title>
<link>https://hdl.handle.net/1721.1/157380</link>
<description>Design and Performance Comparison of Multi-Frequency Inductors for Megahertz Wireless Power Transfer
Yang, Rachel S.; Nikiforidis, Ioannis; Pucci, Nunzio; Joisher, Mansi V.; Wagle, Prateek; Mitcheson, Paul D.; Perreault, David J.
In MHz inductive power transfer, inductors are crucial parts of the primary coil driver circuitry but, given the highly efficient Gallium Nitride (GaN) devices, dominate the losses. These inductors must often conduct current at multiple frequencies, making it challenging to design them with low loss. In this paper, we provide a streamlined process for designing low-loss inductors for multi-frequency MHz applications that leverages a previously-developed modified pot (MP) core structure. Using this design process, we design an MP inductor and integrate it into the ϕ-branch of a 13.56 MHz, 70 W Class EF 2 inverter, which conducts significant current at the fundamental, second and third harmonics. We quantitatively evaluate its performance by conducting a comprehensive comparison with a variety of air-core and ferrite-core inductors. The power losses in the MP inductor were less than half that of the matched comparison air-core and cored inductors, which resulted in a 0.4% increase in system efficiency over the best comparison inductor, corresponding to a 6% reduction in total system loss.
IEEE Applied Power Electronics Conference and Exposition (APEC), Long Beach, CA, USA, 25-29 February 2024
</description>
<pubDate>Sun, 25 Feb 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157380</guid>
<dc:date>2024-02-25T00:00:00Z</dc:date>
</item>
<item>
<title>Design Flexibility of a Modular Low-Loss High-Frequency Inductor Structure</title>
<link>https://hdl.handle.net/1721.1/157379</link>
<description>Design Flexibility of a Modular Low-Loss High-Frequency Inductor Structure
Yang, Rachel S.; Hanson, Alex J.; Sullivan, Charles R.; Perreault, David J.
Miniaturization and efficiency of power electronics are limited by magnetic components, which are difficult to scale to small size and high frequency (HF). Inductor structures using field shaping, quasi-distributed gaps, and modular construction can achieve low loss at HF (3-30 MHz) without litz wire. For widespread adoption though, these structures must be shown to remain effective across a wide design range and be economical to manufacture. This article investigates the design flexibility of one such previously proposed inductor structure with a modified pot core and demonstrates that this structure can provide excellent performance for a wide range of inductance and power handling requirements using only a few sets of manufactured core pieces. The core pieces used in the modified pot core structure can be scaled by 4× in volume, compared to roughly 2× for conventional core families, and still achieve high performance over a wide design space. Moreover, this approach can achieve about half the loss of conventional designs at HF and, unlike conventional core sets, can provide a range of low-loss form factors with a single family of components. The proposed inductor structure and design approaches, thus, offer new opportunities in the practical production of low-loss HF inductors.
</description>
<pubDate>Fri, 30 Apr 2021 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157379</guid>
<dc:date>2021-04-30T00:00:00Z</dc:date>
</item>
<item>
<title>Permanent Magnet Hybrid Core Inductors for High Saturation Capability</title>
<link>https://hdl.handle.net/1721.1/157378</link>
<description>Permanent Magnet Hybrid Core Inductors for High Saturation Capability
Yang, Rachel S.; Nadler, Andrew B.; Sullivan, Charles R.; Perreault, David J.
Inductor designs with large dc current relative to ac ripple are often saturation-limited, but these designs also often greatly underutilize the core material's flux carrying capabilities. Instead of using the full flux swing range from reverse saturation to forward saturation, these designs typically only use half the range. To use the full range, we propose a permanent magnet (PM) hybrid core in which a PM provides a dc flux offset in the core while being placed outside of the main winding flux path. In this work, we derive first-order theory for analyzing and designing the PM hybrid core. We then demonstrate a working proof-of-concept prototype using off-the-shelf parts. This prototype outperforms a comparable ferrite inductor design by achieving the same energy storage at half the dc resistance, thus demonstrating the potential benefits of the PM hybrid core.
IEEE 23rd Workshop on Control and Modeling for Power Electronics (COMPEL), Tel Aviv, Israel, 20-23 June 2022
</description>
<pubDate>Mon, 20 Jun 2022 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157378</guid>
<dc:date>2022-06-20T00:00:00Z</dc:date>
</item>
<item>
<title>Unexpected anthropogenic emission decreases explain recent atmospheric mercury concentration declines</title>
<link>https://hdl.handle.net/1721.1/157377</link>
<description>Unexpected anthropogenic emission decreases explain recent atmospheric mercury concentration declines
Feinberg, Aryeh; Selin, Noelle E.; Braban, Christine F.; Chang, Kai-Lan; Custódio, Danilo; Jaffe, Daniel A.; Kyllönen, Katriina; Landis, Matthew S.; Leeson, Sarah R.; Luke, Winston; Molepo, Koketso M.; Murovec, Marijana; Nerentorp Mastromonaco, Michelle G.; Aspmo Pfaffhuber, Katrine; Rüdiger, Julian; Sheu, Guey-Rong; St. Louis, Vincent L.
Anthropogenic activities emit ~2,000 Mg y−1 of the toxic pollutant mercury (Hg) into the atmosphere, leading to long-range transport and deposition to remote ecosystems. Global anthropogenic emission inventories report increases in Northern Hemispheric (NH) Hg emissions during the last three decades, in contradiction with the observed decline in atmospheric Hg concentrations at NH measurement stations. Many factors can obscure the link between anthropogenic emissions and atmospheric Hg concentrations, including trends in the reemissions of previously released anthropogenic (“legacy”) Hg, atmospheric sink variability, and spatial heterogeneity of monitoring data. Here, we assess the observed trends in gaseous elemental mercury (Hg0) in the NH and apply biogeochemical box modeling and chemical transport modeling to understand the trend drivers. Using linear mixed effects modeling of observational data from 51 stations, we find negative Hg0 trends in most NH regions, with an overall trend for 2005 to 2020 of −0.011 ± 0.006 ng m−3 y−1 (±2 SD). In contrast to existing emission inventories, our modeling analysis suggests that annual NH anthropogenic emissions must have declined by at least 140 Mg between the years 2005 and 2020 to be consistent with observed trends. Faster declines in 95th percentile Hg0 values than median values in Europe, North America, and East Asian measurement stations corroborate that the likely cause is a decline in nearby anthropogenic emissions rather than background legacy reemissions. Our results are relevant for evaluating the effectiveness of the Minamata Convention on Mercury, demonstrating that existing emission inventories are incompatible with the observed Hg0 declines.
</description>
<pubDate>Tue, 08 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157377</guid>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</item>
<item>
<title>Molecular Dynamic Simulations Reveal that Water-Soluble QTY-Variants of Glutamate Transporters EAA1, EAA2 and EAA3 Retain the Conformational Characteristics of Native Transporters</title>
<link>https://hdl.handle.net/1721.1/157376</link>
<description>Molecular Dynamic Simulations Reveal that Water-Soluble QTY-Variants of Glutamate Transporters EAA1, EAA2 and EAA3 Retain the Conformational Characteristics of Native Transporters
Karagöl, Alper; Karagöl, Taner; Zhang, Shuguang
Objective: Glutamate transporters play a crucial role in neurotransmitter homeostasis, but studying their structure and function is challenging due to their membrane-bound nature. This study aims to investigate whether water-soluble QTY-variants of glutamate transporters EAA1, EAA2 and EAA3 retain the conformational characteristics and dynamics of native membrane-bound transporters.
Methods: Molecular dynamics simulations and comparative genomics were used to analyze the structural dynamics of both native transporters and their QTY-variants. Native transporters were simulated in lipid bilayers, while QTY-variants were simulated in aqueous solution. Lipid distortions, relative solvent accessibilities, and conformational changes were examined. Evolutionary conservation profiles were correlated with structural dynamics. Statistical analyses included multivariate analysis to account for confounding variables.
Results: QTY-variants exhibited similar residue-wise conformational dynamics to their native counterparts, with correlation coefficients of 0.73 and 0.56 for EAA1 and EAA3, respectively (p &lt; 0.001). Hydrophobic interactions of native helices correlated with water interactions of QTY-helices (rs = 0.4753, p &lt; 0.001 for EAA1). QTY-variants underwent conformational changes resembling the outward-to-inward transition of native transporters.
Conclusions: Water-soluble QTY-variants retain key structural properties of native glutamate transporters and mimic aspects of native lipid interactions, including conformational flexibility. This research provides valuable insights into the conformational changes and molecular mechanisms of glutamate transport, potentially offering a new approach for studying membrane protein dynamics and drug interactions.
</description>
<pubDate>Wed, 25 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157376</guid>
<dc:date>2024-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>Figuring Figures: An assessment of large language models on different modalities of math word problems</title>
<link>https://hdl.handle.net/1721.1/157375</link>
<description>Figuring Figures: An assessment of large language models on different modalities of math word problems
Wang, Yan; Lynch, Jayson; Krueger, Elizabeth
This paper presents a new dataset of geometry word problems in three forms: with figures, with code that produces these figures, and purely textual. Having versions of the same question which use different modalities allows for a more direct comparison of the performance of machine learning models on mathematical question answering across different modalities of input. We evaluate several multi-modal large language models and find they consistently perform best on the plain text descriptions and worst on the version with images.
ICMLT 2024, May 24–26, 2024, Oslo, Norway
</description>
<pubDate>Fri, 24 May 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157375</guid>
<dc:date>2024-05-24T00:00:00Z</dc:date>
</item>
<item>
<title>Fracturing Processes in Specimens with Internal vs. Throughgoing Flaws: An Experimental Study Using 3D Printed Materials</title>
<link>https://hdl.handle.net/1721.1/157374</link>
<description>Fracturing Processes in Specimens with Internal vs. Throughgoing Flaws: An Experimental Study Using 3D Printed Materials
Almubarak, Majed; Germaine, John T.; Einstein, Herbert H.
The fracturing behavior and associated mechanical characterization of rocks are important for many applications in the fields of civil, mining, geothermal, and petroleum engineering. Laboratory testing of rocks plays a major role in understanding the underlying processes that occur on the larger scale and for predicting rock behavior. Fracturing research requires well-defined and consistent boundary conditions. Consequently, the testing design and setup can greatly influence the results. In this study, a comprehensive experimental program using an artificial material was carried out to systematically evaluate the effects of different parameters in rock testing under uniaxial compression. The parameters include compression platen type, specimen centering, loading control method, boundary constraints, and flaw parameters. The results show that these testing conditions have a significant effect on the mechanical behavior of rocks. Using a fixed compression platen helped reduce bulging of the material. Centering of the specimen played a critical role to avoid buckling and unequal distribution of stress. Slower displacement rates can control the energy being released once failure occurs to prevent the specimen from exploding. Also, the frictional end effects were investigated by comparing friction-reduced and non-friction-reduced end conditions. Very importantly, the study also identified variations in crack initiation and propagation between specimens with internal flaws and specimens with throughgoing flaws. This investigation showed that wing cracks appeared in specimens with throughgoing flaws, while wing cracks with petal cracks were associated with the internal flaws. It also showed that the mechanical properties are influenced by the inclination of the flaws and established that specimens with internal flaws generally exhibit higher strength compared to specimens with throughgoing flaws. 
The systematic analysis presented in this work sheds light on important considerations that need to be taken into account when conducting fracture research and adds knowledge to the fundamental understanding of how fractures occur in nature.
</description>
<pubDate>Wed, 25 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157374</guid>
<dc:date>2024-09-25T00:00:00Z</dc:date>
</item>
<item>
<title>TimelyTale: A Multimodal Dataset Approach to Assessing Passengers' Explanation Demands in Highly Automated Vehicles</title>
<link>https://hdl.handle.net/1721.1/157371</link>
<description>TimelyTale: A Multimodal Dataset Approach to Assessing Passengers' Explanation Demands in Highly Automated Vehicles
Kim, Gwangbin; Hwang, Seokhyun; Seong, Minwoo; Yeo, Dohyeon; Rus, Daniela; Kim, SeungJun
Explanations in automated vehicles enhance passengers' understanding of vehicle decision-making, mitigating negative experiences by increasing their sense of control. These explanations help maintain situation awareness, even when passengers are not actively driving, and calibrate trust to match vehicle capabilities, enabling safe engagement in non-driving related tasks. While design studies emphasize timing as a crucial factor affecting trust, machine learning practices for explanation generation primarily focus on content rather than delivery timing. This discrepancy could lead to mistimed explanations, causing misunderstandings or unnecessary interruptions. This gap is partly due to a lack of datasets capturing passengers' real-world demands and experiences with in-vehicle explanations. We introduce TimelyTale, an approach that records passengers' demands for explanations in automated vehicles. The dataset includes environmental, driving-related, and passenger-specific sensor data for context-aware explanations. Our machine learning analysis identifies proprioceptive and physiological data as key features for predicting passengers' explanation demands, suggesting their potential for generating timely, context-aware explanations. The TimelyTale dataset is available at https://doi.org/10.7910/DVN/CQ8UB0.
</description>
<pubDate>Mon, 09 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157371</guid>
<dc:date>2024-09-09T00:00:00Z</dc:date>
</item>
<item>
<title>The Distance Between Us: Exploring the COVID-19 Intensive Care Unit Through the Soundscape of Biometric Monitoring</title>
<link>https://hdl.handle.net/1721.1/157370</link>
<description>The Distance Between Us: Exploring the COVID-19 Intensive Care Unit Through the Soundscape of Biometric Monitoring
Lecamwasam, Kimaya
Audio technology rested at the heart of the COVID-19 pandemic. Though heart rate monitors, ventilators, and pulse oximeters are not traditionally thought of as tools for musical expression, their sounds hold a resonance that has not been fully explored, given the spaces these biometric monitoring devices held in the zeitgeist. In my piece “The Distance Between Us”, I investigate the role of the sonic output of biometric measurement as a source of input data for artistic expression on its own. I present a score grounded in the auditory environment of a COVID-19 intensive care unit (ICU), with graphic notation inspired by the visual output of heart rate monitors and pulse oximeters, that directs the dynamics of the performers as they improvise using steady-state and alert tones of monitoring devices to explore the soundscape of the ICU. “The Distance Between Us” becomes a representation of the profound connection between soundscape, score, and the emotional and physical toll of the pandemic. I include analyses of performer and audience member reflections that suggest the importance of considering the contextual relationships that we build with auditory stimuli when composing and performing pieces grounded in emotionally-charged subject matter.
AM '24: Proceedings of the 19th International Audio Mostly Conference: Explorations in Sonic Cultures, September 18–20 2024, Milan, Italy
</description>
<pubDate>Wed, 18 Sep 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157370</guid>
<dc:date>2024-09-18T00:00:00Z</dc:date>
</item>
<item>
<title>Effect of Spacecraft Charging on Performance of Ion Electrospray Propulsion Systems</title>
<link>https://hdl.handle.net/1721.1/157325</link>
<description>Effect of Spacecraft Charging on Performance of Ion Electrospray Propulsion Systems
Shaik, Saba Z.; Corrado, Matthew N.; Lozano, Paulo C.
Ion electrospray propulsion systems are known to induce moderate levels of spacecraft charging when operated in a passive dual-polarity neutralization scheme. Here, the relationship between this charging and the performance of the electrospray thrusters is experimentally assessed. We characterize a passively-fed ion electrospray thruster in a simulated spacecraft charging environment with the ionic liquid propellant EMI-BF4. Performance metrics, including thrust, specific impulse, and component efficiencies, are estimated with the thruster operated at emission currents of ±150 µA for prescribed spacecraft biases between 0 and ±800 V. When the spacecraft and plume are the same polarity, thrusters exhibit a narrower plume and produce more thrust with increasing spacecraft bias. Conversely, when the spacecraft and plume are opposite polarities, thrusters show increasingly divergent plumes that are attracted back to the spacecraft, resulting in less thrust being produced at higher spacecraft biases. The combined thrust output for a dual-polarity pair of thrusters was estimated to decrease by about 36% at a spacecraft bias of 800 V and 25% at a spacecraft bias of −800 V. These results show that spacecraft charging is a critical consideration for determining the true in-space performance of ion electrospray propulsion systems.
75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024
</description>
<pubDate>Mon, 14 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157325</guid>
<dc:date>2024-10-14T00:00:00Z</dc:date>
</item>
<item>
<title>An Energy-Efficient 3D Point Neural Network Accelerator with Fine-grained LiDAR-SoC Pipeline Structure</title>
<link>https://hdl.handle.net/1721.1/157324</link>
<description>An Energy-Efficient 3D Point Neural Network Accelerator with Fine-grained LiDAR-SoC Pipeline Structure
Seo, Bokyoung; Jung, Jueun; Han, Donghyeon; Lee, Kyuho
3D point neural network (PNN) segmentation using LiDAR data has emerged as a fundamental stage of high-level intelligence algorithms for autonomous applications such as SLAM, path planning, object detection, etc. However, previous processors were not feasible for real-time and low-power 3D PNN systems since they wasted ~100 ms of LiDAR's sensing time and required 107.3 mW of external memory access before PNN processing. Furthermore, their compute-intensive bin partitioning and point sampling methods were not suitable for large-scale outdoor data, consuming significant computing power. Therefore, the entire system, from sensing to processing, must be taken into account for 3D PNN processor implementation. This paper proposes L-PNPU, an energy-efficient 3D PNN segmentation processor optimized with the unique mechanical characteristics of LiDAR. It is designed with three key features: 1) Azimuthal bin partitioning to reduce power and latency, 2) Modified PNN algorithm co-optimized with heterogeneous architecture to remove redundant operation and reduce energy, and 3) Fine-grained LiDAR-System-on-Chip (SoC) pipeline structure to enhance the system energy and throughput. At 250 MHz and 1.0V, L-PNPU achieves 1.27M points/s of throughput and 0.51 μJ/point of energy efficiency.
ISLPED '24, August 5–7, 2024, Newport Beach, CA, USA
</description>
<pubDate>Mon, 05 Aug 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157324</guid>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</item>
<item>
<title>Learn to Code Sustainably: An Empirical Study on Green Code Generation</title>
<link>https://hdl.handle.net/1721.1/157323</link>
<description>Learn to Code Sustainably: An Empirical Study on Green Code Generation
Vartziotis, Tina; Dellatolas, Ippolyti; Dasoulas, George; Schmidt, Maximilian; Schneider, Florian; Hoffmann, Tim; Kotsopoulos, Sotirios; Keckeisen, Michael
The increasing use of information technology has led to a significant share of energy consumption and carbon emissions from data centers. These contributions are expected to rise with the growing demand for big data analytics, increasing digitization, and the development of large artificial intelligence (AI) models. The need to address the environmental impact of software development has led to increased interest in green (sustainable) coding and claims that the use of AI models can lead to energy efficiency gains. Here, we provide an empirical study on green code and an overview of green coding practices, as well as metrics used to quantify the sustainability awareness of AI models. In this framework, we evaluate the sustainability of auto-generated code. The auto-generated code considered in this study is produced by generative commercial AI language models, GitHub Copilot, OpenAI ChatGPT-3, and Amazon CodeWhisperer. Within our methodology, in order to quantify the sustainability awareness of these AI models, we propose a definition of the code's "green capacity", based on certain sustainability metrics. We compare the performance and green capacity of human-generated code and code generated by the three AI language models in response to easy-to-hard problem statements. Our findings shed light on the current capacity of AI models to contribute to sustainable software development.
LLM4Code ’24, April 20, 2024, Lisbon, Portugal
</description>
<pubDate>Sat, 20 Apr 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/157323</guid>
<dc:date>2024-04-20T00:00:00Z</dc:date>
</item>
<item>
<title>The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces</title>
<link>https://hdl.handle.net/1721.1/157322</link>
<description>The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces
Lo, Kyle; Chang, Joseph; Head, Andrew; Bragg, Jonathan; Zhang, Amy; Trier, Cassidy; Anastasiades, Chloe; August, Tal; Authur, Russell; Bragg, Danielle; Bransom, Erin; Cachola, Isabel; Candra, Stefan; Chandrasekhar, Yoganand; Chen, Yen-Sung; Cheng, Evie; Chou, Yvonne; Downey, Doug; Evans, Rob; Fok, Raymond
Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, the greater the need for new technology to support scholars. In contrast to the process of finding papers, which has been transformed by Internet technology, the experience of reading research papers has changed little in decades. For instance, the PDF format for sharing papers remains widely used due to its portability but has significant downsides, inter alia, static content and poor accessibility for low-vision readers. This paper explores the question "Can recent advances in AI and HCI power intelligent, interactive, and accessible reading interfaces, even for legacy PDFs?" We describe the Semantic Reader Project, a collaborative effort across multiple institutions to explore automatic creation of dynamic reading interfaces for research papers. Through this project, we've developed a collection of novel reading interfaces and evaluated them with study participants and real-world users to show improved reading experiences for scholars. We've also released a productio