INT Program INT-18-2b
Advances in Monte Carlo Techniques for Many-Body Quantum Systems
July 30 - September 7, 2018
A list of general topics and questions regarding QMC methods to be addressed within the program:
A "sign problem" arises in any Monte Carlo method with fluctuating positive and negative weights. Is it possible to construct a hierarchical 'taxonomy' of the various sign problems that arise in QMC ranked by severity and the potential for amelioration or removal of the problem?
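A minimal toy illustration of why fluctuating signs are so damaging (the sign distribution here is invented for demonstration and does not correspond to any particular QMC algorithm): the average sign decays exponentially with a "severity" parameter, so the relative statistical error of any signed estimator grows exponentially at fixed sample size.

```python
import numpy as np

rng = np.random.default_rng(0)

def signed_average(beta, n_samples=100_000):
    """Toy signed estimator: weights w = s * |w|, with sign s = +/-1.
    The sign probabilities are chosen so that the average sign is
    <s> = exp(-beta), i.e. it decays exponentially with 'severity' beta."""
    p_plus = 0.5 * (1.0 + np.exp(-beta))       # P(s = +1), so E[s] = exp(-beta)
    s = np.where(rng.random(n_samples) < p_plus, 1.0, -1.0)
    w = rng.random(n_samples)                  # positive part of the weight
    # Standard reweighted estimator: <s w> / <w>; here s and w are
    # independent, so this simply recovers the average sign.
    return np.mean(s * w) / np.mean(w), np.mean(s)

est_mild, sign_mild = signed_average(0.5)      # mild sign problem
est_severe, sign_severe = signed_average(5.0)  # severe sign problem
# As beta grows the signal (the average sign) shrinks exponentially
# while the noise stays fixed, so the relative error explodes.
```

At fixed sample size the signal-to-noise ratio of such an estimator is proportional to the average sign, which is the quantitative sense in which different sign problems could be ranked by severity.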
For many-fermion systems, how can the systematic error due to the sign problem be estimated? Has it been definitively and mathematically proven that the DMC sign problem is always intractable?
Recently many proposals have been made of algorithms working in some "dual" space. Is this the future direction of QMC? Has QMC in configuration space reached the limits of its development?
Can we find a way of systematically studying problems with non-local interactions (extremely useful in nuclear physics and in some novel applications of condensed matter physics)?
How widespread is the applicability of effective (possibly non-local) Hamiltonians across the various research fields? Can effective theories help in making QMC techniques usable in some contexts?
Nuclear physics calculations require the use of Hamiltonians depending on spin/isospin operators. What kind of extensions towards condensed matter problems can be considered (e.g. topological insulators)?
How can we efficiently calculate scattering states and reactions?
Is it possible to extend the concept of "chemical accuracy" beyond quantum chemistry problems? Can QMC methods meet these requirements in general?
In the case of calculations of finite systems (atoms, molecules, nuclei, clusters) what is the comparison in terms of accuracy and efficiency with few-body methods? Up to which number of particles is reliable benchmarking possible?
How do we deal with the problem of systematic and statistical uncertainties? In which cases are systematic uncertainties much smaller than statistical ones, and in which cases are they comparable?
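For the statistical side of this question, one standard diagnostic is a blocking analysis: correlated Monte Carlo samples make the naive standard error a severe underestimate, and averaging over blocks longer than the correlation time recovers an honest error bar. A minimal sketch on synthetic data (the AR(1) time series and its parameters are illustrative stand-ins for a correlated QMC sample):

```python
import numpy as np

def blocking_error(data, block_size):
    """Blocking estimate of the standard error of the mean: average
    within blocks, then treat the block means as independent samples."""
    n_blocks = len(data) // block_size
    blocks = data[: n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n_blocks)

rng = np.random.default_rng(1)
phi, n = 0.9, 200_000                 # AR(1) with correlation time ~ 1/(1 - phi)
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + noise[i]  # strongly autocorrelated series

naive = x.std(ddof=1) / np.sqrt(n)    # ignores autocorrelation: too small
blocked = blocking_error(x, 1000)     # blocks much longer than the correlation time
```

Here the blocked error bar comes out several times larger than the naive one, by roughly the square root of the integrated autocorrelation time.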
The importance of dynamic quantities
What is the status of the application of QMC to the study of excited states and reactions?
How can we efficiently extend finite temperature calculations to Hamiltonians including explicit spin/isospin dependence and/or auxiliary variables?
How can we calculate dynamic response functions?
What is the progress in developing real-time algorithms?
Approaching the 'exascale era'. Computers have stopped getting faster, while the number of cores has exploded.
What is the current status of scalability of QMC codes? Can we establish some "golden rules" for an efficient and scalable implementation of the various algorithms?
What is the most efficient way to exploit coprocessors in MC calculations?
What kind of architecture should the QMC community push for in the development of HPC facilities? Do we prefer standard clusters, clusters of graphics cards, mixed architectures...? What about persistent data management?
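One candidate "golden rule" that applies to essentially every QMC flavor: independent walkers parallelize trivially, provided each worker gets a statistically independent, reproducible random stream. A minimal sketch of that seeding pattern (hit-or-miss estimation of pi stands in for a real QMC estimator, and a serial loop stands in for the parallel workers):

```python
import numpy as np

def worker_estimate(seed, n_samples):
    """One 'walker': hit-or-miss estimate of pi/4 on its own stream."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n_samples), rng.random(n_samples)
    return np.mean(x * x + y * y < 1.0)

# Spawn one independent, reproducible child stream per worker from a
# single master seed, then combine the partial estimates by averaging.
n_workers, n_per_worker = 8, 100_000
child_seeds = np.random.SeedSequence(42).spawn(n_workers)
partial = [worker_estimate(s, n_per_worker) for s in child_seeds]
pi_estimate = 4.0 * np.mean(partial)
```

Because the walkers never communicate until the final average, this pattern scales to arbitrarily many cores; the same seeding discipline carries over unchanged to MPI ranks or GPU streams.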
What is the current state of the art of general purpose QMC codes as compared to other many-body techniques? Is it possible to imagine QMC as a reliable standard to be used also by non-experts in the near future? What is the cost/benefit ratio compared, for instance, to DFT calculations?
Each QMC flavor for T=0 problems has pros and cons. Is there systematic evidence pointing to some efficiency criterion that can be used to discriminate among different techniques? Or is it really just a matter of personal taste? Which algorithms guarantee the best computational efficiency?