
T. P. Straatsma, Katie B. Antypas, and Timothy J. Williams, Editors

Dr. T. P. Straatsma is the Group Leader for Scientific Computing in the National Center for Computational Sciences, the division of Oak Ridge National Laboratory that houses the Oak Ridge Leadership Computing Facility, and an Adjunct Faculty member in the Chemistry Department of the University of Alabama in Tuscaloosa. He earned his Ph.D. in Mathematics and Natural Sciences from the University of Groningen, the Netherlands. After a postdoctoral associate appointment followed by a faculty position in the Department of Chemistry at the University of Houston, he moved to Pacific Northwest National Laboratory (PNNL), where he was a co-developer of the NWChem computational chemistry software, established a program in computational biology, and served as group leader for computational biology and bioinformatics. Straatsma served as Director for the Extreme Scale Computing Initiative at PNNL, which focused on developing science capabilities for emerging petascale computing architectures, and was promoted to Laboratory Fellow, the highest scientific rank at the Laboratory.

In 2013 he joined Oak Ridge National Laboratory, where, in addition to being Group Leader for Scientific Computing, he is the Lead for the Center for Accelerated Application Readiness and the Lead for the Applications Working Group in the Institute for Accelerated Data Analytics and Computing, focusing on preparing scientific applications for next-generation pre-exascale and exascale computer architectures.

Straatsma has been a pioneer in the development, efficient implementation, and application of advanced modeling and simulation methods as key scientific tools in the study of chemical and biomolecular systems, complementing analytical theories and experimental studies. His research focuses on computational techniques that provide unique and detailed atomic-level information that is difficult or impossible to obtain by other methods and that contributes to the understanding of the properties and function of these systems. In particular, his expertise is in the evaluation of thermodynamic properties from large-scale molecular simulations, having been involved since the mid-1980s in the early development of thermodynamic perturbation and thermodynamic integration methodologies. His research interests also include the design of efficient implementations of these methods on modern, complex computer architectures, from the vector-processing supercomputers of the 1980s to the massively parallel and accelerated computer systems of today. Since 1995, he has been a core developer of the massively parallel molecular science software suite NWChem and is responsible for its molecular dynamics simulation capability. Straatsma has co-authored nearly 100 publications in peer-reviewed journals and conferences, was the recipient of the 1999 R&D 100 Award for the NWChem molecular science software suite, and was recently elected Fellow of the American Association for the Advancement of Science.
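As background on those two terms (the relations below are standard textbook results, stated here for context rather than taken from this biography), both methods estimate the free energy difference between two states, 0 and 1, of a system whose potential energy U(λ) is coupled by a parameter λ:

    % Thermodynamic perturbation (the Zwanzig relation), sampled in state 0:
    \Delta A = -k_B T \,\ln \left\langle e^{-(U_1 - U_0)/k_B T} \right\rangle_0
    % Thermodynamic integration over the coupling parameter:
    \Delta A = \int_0^1 \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda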

Katie B. Antypas is the Data Department Head at the National Energy Research Scientific Computing Center (NERSC), which includes the Data and Analytics Services Group, the Data Science Engagement Group, the Storage Systems Group, and the Infrastructure Services Group. The Department's mission is to pioneer new capabilities to accelerate large-scale, data-intensive science discoveries as the Department of Energy Office of Science workload grows to include more data analysis from experimental and observational facilities such as light sources, telescopes, satellites, genomic sequencers, and particle colliders. Katie is also the Project Manager for the NERSC-8 system procurement, a project to deploy Cori, NERSC's next-generation HPC supercomputer, in 2016; the system combines the Cray interconnect with the Intel Knights Landing manycore processor. The processor features on-package high-bandwidth memory and more than 64 cores per node, each with 4 hardware threads. These technologies offer applications great performance potential, but they require users to change their applications in order to take advantage of the multi-level memory and the large number of hardware threads. To address this concern, Katie and the NERSC-8 team launched the NERSC Exascale Science Applications Program (NESAP), an initiative to prepare approximately 20 application teams for the Knights Landing architecture through close partnerships with vendors, science application experts, and performance analysts.
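As a minimal sketch of the kind of change involved (illustrative only, not drawn from this text): on Knights Landing, a code might place a hot array in the on-package high-bandwidth memory via the memkind library's hbwmalloc interface and occupy the hardware threads with OpenMP. The array name and size below are assumptions for illustration.

    /* Sketch: allocate a frequently accessed array in high-bandwidth
       memory and fill it using all available hardware threads. */
    #include <stdio.h>
    #include <hbwmalloc.h>   /* hbw_malloc prefers MCDRAM, falls back to DDR */
    #include <omp.h>

    int main(void)
    {
        const size_t n = 1 << 24;              /* illustrative array size */
        double *a = hbw_malloc(n * sizeof *a);
        if (!a) return 1;

    #pragma omp parallel for                   /* spread work across threads */
        for (size_t i = 0; i < n; ++i)
            a[i] = 2.0 * (double)i;

        printf("max threads: %d, a[42] = %g\n", omp_get_max_threads(), a[42]);
        hbw_free(a);
        return 0;
    }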

Katie is an expert in parallel I/O application performance and for the past six years has given a parallel I/O tutorial at the SC conference. She also has expertise in parallel application performance, HPC architectures, HPC user support, and Office of Science user requirements. Katie is also a PI on a new ASCR research project, "Science Search: Automated MetaData Using Machine Learning". Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago supporting the FLASH code, a highly scalable, parallel, adaptive-mesh-refinement astrophysics application, for which she wrote the parallel I/O modules in HDF5 and Parallel-NetCDF. She has an M.S. in Computer Science from the University of Chicago and a bachelor's degree in Physics from Wellesley College.

Timothy J. Williams is Deputy Director of Science at the Argonne Leadership Computing Facility at Argonne National Laboratory. He works in the Catalyst team, a group of computational scientists who partner with the large-scale projects using ALCF supercomputers. Tim manages the Early Science Program (ESP), whose goal is to prepare a set of scientific applications for early, pre-production use of next-generation computers such as Theta, ALCF's most recent Cray-Intel system based on second-generation Xeon Phi processors, and the forthcoming pre-exascale system, Aurora, based on third-generation Xeon Phi. Tim received his BS in Physics and Mathematics from Carnegie Mellon University in 1982 and his PhD in Physics from the College of William and Mary in 1988, focusing on the numerical study of a statistical turbulence theory using Cray vector supercomputers. Since 1989, he has specialized in applying large-scale parallel computation to various scientific domains, including particle-in-cell plasma simulation for magnetic fusion, contaminant transport in groundwater flows, global ocean modeling, and multimaterial hydrodynamics. He spent eleven years in research at Lawrence Livermore National Laboratory and Los Alamos National Laboratory. In the early 1990s, Tim was part of the pioneering Massively Parallel Computing Initiative at LLNL, working on plasma PIC simulations and dynamic alternating direction implicit (ADI) solver implementations on the BBN TC2000 computer. In the late 1990s, he worked at Los Alamos' Advanced Computing Laboratory with a team of scientists developing the POOMA (Parallel Object Oriented Methods and Applications) framework, a C++ class library encapsulating efficient parallel execution beneath high-level data-parallel interfaces designed for scientific computing. Tim then spent nine years as a quantitative software developer in the financial industry, at Morgan Stanley in New York focusing on fixed-income securities and derivatives, and at Citadel in Chicago focusing most recently on detailed valuation of subprime mortgage-backed securities. Tim returned to computational science at Argonne in 2009.