De novo ultrascale atomistic simulations on high-end parallel supercomputers

Aiichiro Nakano, Rajiv K. Kalia, Ken Ichi Nomura, Ashish Sharma, Priya Vashishta, Fuyuki Shimojo, Adri C.T. van Duin, William A. Goddard, Rupak Biswas, Deepak Srivastava, Lin H. Yang

Research output: Contribution to journal › Article › peer-review


Abstract

We present a de novo hierarchical simulation framework for first-principles-based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto petaflops computers, achieving performance tunability through a hierarchy of parameterized cell data/computation structures and through a hybrid grid remote procedure call + message passing + threads implementation. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate, well-validated, chemically reactive atomistic simulations: 1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids, in addition to 134 billion-atom non-reactive space-time multiresolution MD, with parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes.
We have also achieved an automated execution of hierarchical QM/MD simulation on a grid consisting of 6 supercomputer centers in the US and Japan (150,000 processor-hours in total), in which the number of processors changes dynamically on demand and resources are allocated and migrated dynamically in response to faults. Furthermore, performance portability has been demonstrated on a wide range of platforms such as BlueGene/L, Altix 3000, and AMD Opteron-based Linux clusters.

Original language: English (US)
Pages (from-to): 113-128
Number of pages: 16
Journal: International Journal of High Performance Computing Applications
Volume: 22
Issue number: 1
DOIs
State: Published - Mar 2008

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture

