An important message regarding the project descriptions






Students: the Summer Workshop is an educational program, which means that if you are selected to participate you will be taught what you need to know to succeed on your project. We do not expect you to understand everything in these project descriptions, which are provided mainly to give you an idea of what the mentor has in mind. If an area interests you but you do not understand everything in its description, that is okay; please do not let that deter you from applying. Each project will stay within the area described below, but will be adjusted as the participating students discuss it with their mentors.






Project Title
(Short Name)
Tentative Project Descriptions






Stopping Power in Warm Dense Matter
(Stopping Power)
Stopping power is a measure of how much energy a particle loses per unit distance as it travels through a material. In inertial confinement fusion experiments, a state called warm dense matter is reached, in which the material is a dense plasma with quantum mechanical, partially degenerate electrons and moderate to strong ion coupling. This state is difficult to model, and yet a good understanding of its properties (including stopping power) is necessary to accurately simulate and interpret the experiments. In those experiments, fusion reactions and other processes produce fast particles that deposit their energy in the surrounding plasma as they slow down. This energy exchange, which heats the plasma and determines the mean free path of the particle, is governed by the stopping power. In this project, students will write a code to calculate the stopping power of charged ions in warm dense plasmas and investigate its dependence on various physical effects and approximations.






Lengthscale Equations for Turbulence Modeling
(Lengthscale Equations)
Turbulence models typically require a scale equation, often for the dissipation, but sometimes for other quantities such as the turbulent lengthscale. These equations are usually ad hoc, and the quantities they model are often not rigorously defined; for example, one can define multiple turbulent time- and lengthscales. In this project we will look at some better-defined alternatives for the lengthscale equation. Depending on ability and interest, students may work on deriving model equations, implementing them in OpenFOAM, and comparing different sets of model equations against experimental data.






Predictive Models for Brittle Damage Failure of Materials
(Brittle Damage)
Developing predictive models for brittle damage evolution is a challenging problem. To be applicable at larger length scales (hundreds of microns up to meters, depending on the application), continuum-scale material models must somehow scale up the important mechanisms, interactions, and other phenomena seen at the micro- and meso-scales. While capturing this physics appropriately is difficult, damage evolution also poses computational difficulties, such as mesh dependence during softening, that must be overcome to develop robust models. In this project, students will have the opportunity to study brittle damage and failure in materials (geological and metallic) using a hydrocode. Students may work on: implementation and testing of an effective-moduli constitutive model for predicting material strength; comparison of a statistical material representation with a phenomenological model for accurately capturing damage evolution; and/or investigating the robustness and mesh dependence of existing damage models or extending them to accommodate mesh remapping.
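To fix ideas, here is a hypothetical one-dimensional scalar damage law in C (names, thresholds, and the linear form are illustrative, not taken from any hydrocode). The softening branch, where stress falls with increasing strain, is precisely where the mesh dependence mentioned above arises, because the softening localizes into whatever width the mesh provides.

```c
/* Hypothetical scalar continuum damage sketch: the damage variable D
 * degrades an effective modulus.  D = 0 (intact) below strain e0,
 * D = 1 (fully failed) at strain ef, linear in between.  Real models
 * must regularize this softening branch to avoid mesh dependence. */
double damage(double strain, double e0, double ef)
{
    if (strain <= e0) return 0.0;
    if (strain >= ef) return 1.0;
    return (strain - e0) / (ef - e0);
}

/* Effective (damaged) stress for undamaged Young's modulus E [Pa]. */
double eff_stress(double strain, double E, double e0, double ef)
{
    return (1.0 - damage(strain, e0, ef)) * E * strain;
}
```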






Monte Carlo Transport of Cosmic Rays and the Search for Dark Matter Annihilation
(Cosmic Ray Propagation )
The Earth is bathed in a flux of high-energy particles with energies ranging from 1e9 eV to 1e21 eV. Scientists believe that these particles are produced in the most energetic astrophysical explosions, such as supernovae and their remnants, gamma-ray bursts, and active galactic nuclei. By studying cosmic rays, we probe some of the biggest uncertainties in our understanding of these outbursts. TeV gamma-rays can also be produced through dark matter annihilation, and the positron excess observed by the PAMELA collaboration suggested that this annihilation had been observed. However, it has also been suggested that the PAMELA anomaly instead just reflects oversimplified cosmic ray transport. The state of the art in transport is a simple diffusion code that cannot model the full physics of cosmic ray transport, especially when global magnetic fields thread the galaxy. In this project students will expand LANL's Monte Carlo cosmic ray transport code (Harding et al. 2016), working both to enhance the physics in the code and to improve and accelerate the transport method for next-generation cosmic ray propagation studies. With these improved methods, we will be able to better distinguish the effects of dark matter annihilation from standard cosmic ray production sites.






Computational Fluid Dynamics at Scale
(CFD at Scale)
FleCSALE is a C++ library for studying multi-phase continuum dynamics problems under different runtime environments. It is specifically developed for existing and emerging large distributed-memory system architectures. FleCSALE uses the Flexible Computational Science Infrastructure (FleCSI) project for mesh and data structure support, and it supports multi-phase fluids and tabular equations of state (EOS). Students will have the opportunity to choose a project exploring ways to better exploit the enormous compute capacity of these emerging machines.






Richtmyer-Meshkov Instability for High Impedance Mismatch Imploding Shells
(Imploding Shells)
One of the challenges for modern technology is to achieve positive-gain controlled nuclear fusion. This has so far proven an elusive goal: no attempt has achieved the compression necessary for positive yield (more energy out than in). A principal reason for this failure is that surface instability growth at the fuel interface leads to inefficient compression. One of the leading candidates for controlled nuclear fusion is Inertial Confinement Fusion (ICF), in which a small spherical capsule is imploded by high-energy lasers. An ICF capsule typically consists of a high-density outer shell surrounding a gas mixture of deuterium-tritium (DT) nuclear fuel. Positive-yield nuclear burn requires that the DT mixture be compressed by as much as a factor of thirty, which has so far not been achieved in any experimental facility. Numerical simulations of capsule implosion have been a critical tool in the design of ICF capsules. However, the degree to which such simulations are actually predictive of the implosion process remains unclear. The goal of this project is to conduct a computational study of the effect of the numerical simulation method on the predictions for interface implosion. We will focus on simulations of high impedance mismatch imploding shells that are rough approximations of the hydrodynamic aspects of ICF implosions. The geometry consists of a high-pressure spherical shell with a low-pressure shell nested inside. Perturbations are initialized on the inner shell's inner surface. The high-pressure shell generates an imploding shock wave that accelerates the inner surface, leading to growth of the interface perturbations due to the Richtmyer-Meshkov (1, 2) instability. Our computational study will use at least two codes: the Los Alamos Eulerian hydrodynamics code xRage (3, 4) and the University of Chicago compressible hydrodynamics code FLASH (5).
We will seek to diagnose the behavior of the unstable perturbations as they implode, up to the point where these modes collide at the center of the implosion. The xRage code provides three options for material interface treatment; we will test all three options and quantify their effect. The FLASH code provides a high-order Piecewise Parabolic Method (6) solver with molecularly mixed zones, whose results will be contrasted with the xRage results. Students participating in this project will become familiar with basic hydrodynamics and unstable interfaces, numerical simulations and their setup, and methods to visualize and analyze complex flows in multiple space dimensions.

References
1. Richtmyer, R.D., Taylor Instability in Shock Acceleration of Compressible Fluids. Comm. Pure Appl. Math., 1960. 13: p. 297-319.
2. Meshkov, E.E., Instability of a shock wave accelerated interface between two gases. NASA Tech. Trans., 1970. F-13: p. 074.
3. Gittings, M., et al., The RAGE radiation-hydrodynamic code. Computational Science and Discovery, 2008: p. 015005 (63 pp.).
4. Crestone code team, xRage Users Manual. 2008, Los Alamos National Laboratory: Los Alamos, NM.
5. Flash Center for Computational Science, 2016. Available from: http://flash.uchicago.edu/site/.
6. Woodward, P. and P. Colella, The numerical simulation of two-dimensional fluid flow with strong shocks. Journal of Computational Physics, 1984. 54(1): p. 115-73.
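For orientation, the early linear stage of Richtmyer-Meshkov growth is often summarized by Richtmyer's impulsive model. The sketch below encodes it in C with illustrative parameter names; real implosions leave this linear regime quickly, which is part of what the simulations are meant to diagnose.

```c
/* Richtmyer's impulsive model for the early-time growth of a shocked
 * interface perturbation:
 *     da/dt = k * A * dV * a0
 * with wavenumber k [1/m], post-shock Atwood number A, interface
 * velocity jump dV [m/s], and post-shock amplitude a0 [m]. */
double rmi_growth_rate(double k, double atwood, double dv, double a0)
{
    return k * atwood * dv * a0;
}

/* Linear-in-time amplitude estimate; valid only while k*a stays small. */
double rmi_amplitude(double t, double k, double atwood, double dv, double a0)
{
    return a0 + rmi_growth_rate(k, atwood, dv, a0) * t;
}
```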






Bridging the Performance-Productivity Gap of Vectorization
(Vectorize!)
A significant fraction of the available performance in modern CPUs and accelerators comes from SIMD hardware. While compilers can automatically generate vector instructions, programmer-guided vectorization can yield higher performance. We will explore various abstractions available for vectorization (1) and apply them to computational physics algorithms from multiple fields, such as fluid dynamics and atomic physics. Specifically, we will learn how to write vectorized code using intrinsics (2), compiler directives such as OpenMP's (3), vectorized libraries such as libsimdpp (4), and language extensions such as the Intel SPMD Program Compiler, ispc (5). We will evaluate these techniques not only in terms of performance, but also portability, ease of use, and the readability, extensibility, and maintainability of the resulting code. Our goal is not to maximize performance at any cost but to maximize performance * productivity.

(1) http://www.walkingrandomly.com/?p=3378
(2) https://software.intel.com/en-us/articles/introduction-to-intel-advanced-vector-extensions/
(3) http://www.hpctoday.com/hpc-labs/explicit-vector-programming-with-openmp-4-0-simd-extensions
(4) https://github.com/p12tic/libsimdpp
(5) https://ispc.github.io
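As a taste of the compiler-directive approach, here is a minimal OpenMP SIMD sketch in C (the loop and names are illustrative). The `omp simd` pragma asks the compiler to map the loop onto SIMD lanes, and `restrict` promises no aliasing, which is often what blocks auto-vectorization in the first place; compiled without OpenMP support, the pragma is simply ignored and the result is unchanged.

```c
/* Programmer-guided vectorization: a saxpy-style loop annotated with
 * an OpenMP simd directive.  Each iteration is independent, so the
 * compiler may execute several of them per vector instruction. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma omp simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

This is the productivity end of the trade-off the project studies: one portable line of annotation, versus intrinsics that pin the code to a specific instruction set.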






Using Monte Carlo to Determine Multiple Neutron Eigenvalues
(Eigenvalues)
My project involves implementing Arnoldi's method for calculating multiple k-eigenvalues in a Monte Carlo neutron transport code (e.g., MCATK). While this has been done, there remain some considerations that I'd like to explore:
- What has to be done to implement continuous-energy cross sections into Arnoldi's method?
- Can we get around spatially discretizing the geometry? (Possibly using kernel density estimators.)
- How will performing implicit restarts affect the performance and precision of the calculation?
- What will I do with multiple eigenvalues?






Impact Modeling of Seismic Waves to Facilitate Prospecting on Other Planets
(Seismic Waves)
As its Discovery program gears up for new challenges, NASA is looking for technologies to unravel the deepest parts of celestial bodies. Impact seismology, meaning seismology based on waves generated by bolide impacts, is a unique tool for subsurface and crustal characterization that will be key to refining models of our Solar System and its history and, ultimately, may aid our quest to find future habitable planets. Impact seismology has been overlooked in past NASA missions due to the uncertainty in the amplitude and shape of the seismic waves resulting from bolide impacts. The objective of this project is to leverage our DOE-complex modeling capabilities in high-explosive experiments to develop a new state of the art in bolide impact modeling that will reduce impact seismic detectability uncertainties by an order of magnitude from their current level. Our approach involves two steps: verification and validation using NASA laboratory experiments for which both kinematic and model parameters are controlled, and then first-principles impact calculations of two Moon events for which the kinematic parameters are known. The scientific challenge of this project is to extend our proven modeling capabilities to material models for unconsolidated soils and to high-velocity impacts (involving higher strain rates than we currently handle). One objective of this project is to conduct preliminary verification and validation efforts on a newly developed bolide impact model implemented via LANL's Hybrid Optimization Software Suite (HOSS). HOSS is based on the combined finite-discrete element method (FDEM) and provides the ability to predict deformation and failure of materials in a variety of situations.
A brief description of HOSS can be found at http://www.lanl.gov/discover/news-stories-archive/2016/November/software-to-simulate-material-deformation.php. Participants will have an excellent opportunity to perform research on hypervelocity impacts under the supervision of LANL scientists, and will have the chance to broaden their knowledge across a wide range of fields, from material modeling and numerical methods to computational mechanics and high-performance computing.






Cosmological Origins of Water in the Universe
(Water Origins)
The evolution of the large-scale structure of the universe is dominated by dark matter, dark energy, and primordial elements such as hydrogen and helium produced in the Big Bang. These processes, as well as how hydrogen and helium get converted into heavier elements like carbon, nitrogen, and oxygen in stars, have been well studied. However, the cosmological origin of water and other complex molecules essential to the rise of life in the cosmos is not well understood. In this project, students will learn to run large-scale simulations of the universe that track star formation and the supernovae that produce heavy elements in galaxies, in order to quantify when, where, and how water and other complex molecules form in galaxies. Students will acquire a wide range of skills, from running massively parallel codes on large computing platforms to post-processing, visualizing, and analyzing large datasets. The project will culminate with students writing up their findings for a peer-reviewed publication.






Modeling Thermal Feedback in Nuclear Reactors
(Thermal Feedback)
In a nuclear reactor, the neutron distribution determines the production of energy, which in turn feeds into the temperature distribution of the reactor. The temperature, however, feeds back into the neutron distribution by changing the interaction probabilities of neutrons with the reactor materials. These feedback effects are crucial for controlling the nuclear chain reaction within a reactor, as temperature feedback serves to limit power levels during reactor transients as well as to set the steady-state operating conditions of a given reactor. Therefore, accurate modeling of feedback effects is necessary to predict reactor behavior in both transient and steady-state scenarios. The students will couple the C++ Monte Carlo neutron transport code MCATK to a thermomechanical code (e.g., MOOSE, Abaqus, or OpenFOAM) and use this multiphysics application to model the temperature feedback of nuclear reactors. The multiphysics calculations will be compared with a variety of well-characterized experiments. Steady-state verification and validation calculations will be performed first, followed by transient calculations if time allows. Possible transient experiments include the Godiva critical assembly, the Transient Reactor Test Facility (TREAT), and the Kilopower Reactor Using Stirling TechnologY (KRUSTY), a space power reactor. The students may also attempt to characterize how Monte Carlo noise in the neutronics calculation propagates through to the other physics, and whether this has a serious effect on the convergence of the coupled calculation.
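The coupling pattern can be sketched as a Picard (fixed-point) iteration between two toy solvers in C. The linear "physics" and the coefficient names below are hypothetical, purely to illustrate the loop structure a coupled MCATK-plus-thermomechanics calculation would repeat with real solves in place of the one-line updates.

```c
/* Toy operator-split coupling: a "neutronics" solve gives power from
 * temperature (negative feedback coefficient a), a "thermal" solve
 * gives temperature from power (heat-up coefficient b).  The Picard
 * loop converges to the self-consistent steady state when a*b < 1. */
double steady_state_temp(double P0, double a, double b, double T0, int iters)
{
    double T = T0;
    for (int it = 0; it < iters; it++) {
        double P = P0 - a * (T - T0);   /* neutronics: feedback on power */
        T = T0 + b * P;                 /* thermal: temperature from power */
    }
    return T;
}
```

In the real coupled problem each update is itself a stochastic Monte Carlo solve, which is why the propagation of Monte Carlo noise into this loop's convergence is worth characterizing.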






Many-atom Modeling of Electron Photoinjection
(Photoinjection)
The quantum propagation of either a single quantum state or the full density matrix has proven to be a useful tool for answering questions about non-equilibrium phenomena at nanoscale dimensions. In particular, it can provide a detailed description of the time-dependent photoinjection process occurring in photovoltaic devices. These simulations are performed by integrating the time-dependent Schrödinger equation and monitoring the evolution of the electron probability density as a function of time. To describe the electronic structure of the systems, self-consistent tight-binding methods, including density functional tight binding (DFTB) and extended Hückel methods, have been preferred given the computational cost of these simulations. When used to simulate the time-dependent photoinjection mechanism in dye-sensitized solar cells, this approach shows remarkable agreement with experiments together with great predictive power. When systems become too large, however, the cost of the quantum propagation becomes prohibitive, depending on the number of states to be propagated. The synthetic efforts of recent years in photovoltaic materials have opened up the possibility of having large antenna complexes as well as quantum dots with well-controlled sizes. Given this scenario, creating tools capable of simulating electron and energy transfer in these large complexes is essential. Scaling to systems of several thousand atoms is a goal that could be attained with techniques allowing reduced-order propagation, ideally O(N). The project will focus on implementing an O(N) propagation of the density matrix as well as of a single state. We have developed two libraries that will become essential for improving the performance of quantum chemistry packages in general. The basic matrix library (BML) supports several sparse matrix-matrix operations that can be used for the quantum propagation.
Since absorbing potentials (imaginary energy terms in the Hamiltonian) will be employed in these simulations, the BML tools will have to be adapted to handle complex numbers. Making additions to this library is fairly easy and will benefit quantum chemistry applications beyond this one. Working in a very well maintained code is also good training for students coming from a pure domain-science background (biology, chemistry, physics, etc.) who have experience only in writing very specific codes for their theses. After reaching the main goal of this proposal, we would be able to simulate larger systems (with thousands of atoms), making the simulations much more realistic.











