Organizing Committee:
Joe Carlson
George Fuller
* Wick Haxton
Tom Luu
Juan Meza
Tony Mezzacappa
John Negele
Esmond Ng
Steve Pieper
Martin Savage
James Vary
Pavlos Vranas

*corresponding member for scientific questions

Program Coordinator:
Inge Dolan
(206) 685-4286


INT Program INT-11-2a

Extreme Computing and its Implications for the Nuclear Physics/Applied Mathematics/Computer Science Interface

June 6 - July 8, 2011

Applications and Program Organization
Anyone interested in participating in this program can apply by filling out the online application form. To help you plan your visit, we hope to concentrate discussions as follows:

June 6-10 Nuclear Structure and Reactions (Quantum Monte Carlo, Lanczos Methods, Density Functional Methods)
Coordinators: Steve Pieper, James Vary
June 13-17 Neutron Star Matter, Astrophysics (Nuclear Equation of State, Supernovae)
Coordinators: Joe Carlson, George Fuller, Tony Mezzacappa
June 20-24 Astrophysics, Hot QCD (First Stars, Structure Formation; QCD Phase Transition)
Coordinators: George Fuller, Pavlos Vranas
June 27-July 1 Workshop on the Nuclear Physics/Applied Math/Computer Science Interface
Applied Math/CS Coordinators: Juan Meza, Esmond Ng
Nuclear Physics Coordinators: Wick Haxton, John Negele
Workshop agenda
There is a registration fee of $85 to attend this workshop.
July 5-8 Lattice QCD at Zero Temperature
Coordinators: Tom Luu, Martin Savage

Weeks one, two, three, and five will be organized in the usual INT way, with seminars limited to one or two per day so that participants have free time for discussions and collaborations. The fourth week will be a more intense workshop, similar to the Grand Challenge workshops that the DOE organized in 2008 and 2009. As in those earlier workshops, the focus will be on the steps that the nuclear physics, applied mathematics, and computer science communities should take to optimize the transition of nuclear physics to exascale computation.

The nuclear physics community has been a very active participant in recent DOE-sponsored discussions of exascale computing. The field has several important applications - e.g., lattice QCD; various ab initio nuclear structure calculations involving moments of very large sparse matrices, fermion Monte Carlo, variational, and other numerical methods; and the simulation of explosive astrophysical environments such as supernovae - that are currently limited computationally. Were a factor of 1000 increase in computing power to become available, nuclear physics would be one of the fields with the most to gain.
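Several of the methods named above, Lanczos diagonalization in particular, reduce to repeated sparse matrix-vector products, which is what makes them both demanding at scale and well matched to high-performance machines. As a rough illustration (not taken from any of the program's codes), here is a minimal Lanczos sketch in Python; the operator, spectrum, and iteration count are invented for the example:

```python
import numpy as np

def lanczos_extremal(A_mult, n, k=60, seed=0):
    """Run k steps of the Lanczos iteration for a symmetric operator.

    A_mult: function returning A @ v without forming A densely,
    as one would do for a very large sparse Hamiltonian.
    Returns the eigenvalues (Ritz values) of the k x k tridiagonal
    projection; the extremes converge to the extremal eigenvalues of A.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    alphas, betas = [], []
    v_prev = np.zeros(n)
    beta = 0.0
    for _ in range(k):
        w = A_mult(v) - beta * v_prev      # one sparse matrix-vector product
        alpha = v @ w
        w -= alpha * v                     # three-term recurrence
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                   # invariant subspace found
            break
        v_prev, v = v, w / beta
    m = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[:m - 1], 1)
         + np.diag(betas[:m - 1], -1))
    return np.linalg.eigvalsh(T)

# Toy diagonal "Hamiltonian" with isolated extreme eigenvalues at -5 and +5
n = 2000
diag = np.concatenate(([-5.0], np.linspace(0.0, 1.0, n - 2), [5.0]))
theta = lanczos_extremal(lambda v: diag * v, n)
print(theta[0], theta[-1])  # extreme Ritz values converge to -5 and 5
```

The point of the sketch is the cost structure: all of the work is in the matrix-vector products, so the method scales with the number of nonzeros in the Hamiltonian rather than its dimension squared, and those products are what must be distributed efficiently across a large machine. Production codes add reorthogonalization and restarting, which this sketch omits.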

This program will focus on preparing the way for such an increase in computing: what kinds of community organization and new partnerships with applied mathematics and computer science will be necessary if nuclear theory is to reach the next level in high performance computing? It is generally believed that current petascale platforms will give way to a new generation of machines that are significantly more powerful, but also significantly more complex, based on processor architectures in which perhaps 1000 cores are integrated on a chip. Nuclear physicists face the challenge of using these machines for newly envisioned calculations. They will also have to reformulate the mathematics, algorithms, and programming of existing codes. The challenge is similar to the one our experimental colleagues faced three decades ago, in making the transition from small-scale nuclear experiments to flagship accelerator facilities that demand large collaborations and significant technical support. Computational nuclear theory must evolve in a similar way, building effective collaborations within our field as well as effective partnerships with applied mathematicians and computer scientists, if we are to continue to play a role in high-performance computing (HPC).

The program will bring nuclear theorists together with a significant community of applied mathematicians and computer scientists to discuss what should be done now to prepare the field for coming changes. The goals of the program are to identify key opportunities in computational nuclear physics, and to develop a common language for discussing the underlying algorithms, so that potential collaborators can better envision how they and we might work together. Specifically, we hope to analyze several of the key problems in computational nuclear physics:

  • How are these problems now formulated in our best codes?
  • What would be the new goals, if a factor of 1000 increase in computer power became available?
  • Are these existing formulations the best-practice ones, from an applied mathematics standpoint?
  • How do the applied mathematicians and computer scientists see the HPC approaches to such problems evolving over the next five and ten years?
  • What should we be doing, as an interdisciplinary community, to keep up with this evolution, so that our computational toolbox is current? What is the right "model" for such systematic HPC development within a domain science? Will there exist modular codes, produced by applied mathematics and computer science, into which we can simply plug our applications? If so, what will be the form of that interface, and what level of sophistication will the domain scientist need to achieve? Or are there significant applications that are so specific to nuclear physics that we will be in need of specialized codes?
  • What form will code validation take in future years?
  • What challenges will come from multiple platforms with different architectures?
The goal is to come away with stronger partnerships for tackling nuclear grand-challenge problems, as well as a stronger community strategy for helping nuclear physics remain a driver for HPC developments.