TY - JOUR
T1 - Scaling deep learning for whole-core reactor simulation
AU - Shriver, Forrest
AU - Watson, Justin
N1 - Funding Information:
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Funding Information:
This research used resources of the Compute and Data Environment for Science (CADES) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/4
Y1 - 2022/4
N2 - A deep learning architecture for predicting the normalized pin powers within 2D pressurized water reactors, called LatticeNet, has been developed and shown to be performant for a variety of relevant conditions within a single 2D reflective assembly. However, many neutronics scenarios of interest involve regions composed of multiple assemblies, up to and including full-core scenarios. It is not immediately obvious that scaling LatticeNet up to these full-core scenarios will achieve the same performance as seen in single-assembly scenarios, due to the problem-tailored nature of neural networks. It is also simple to show that the original implementation of LatticeNet does not easily scale up to multi-assembly regions, due to the enormous compute demands of the originally proposed architecture. In this work, we address these issues by first proposing several variants of LatticeNet which address the issue of scaling compute needs, and show the theoretical performance benefits gained from these architectures. We then evaluate the actual benefit of the proposed variants on multi-assembly regions containing roughly the same variation outlined in the original paper proposing LatticeNet. We show that the proposed architecture changes do not result in significantly increased error, and that these changes result in much more manageable training times relative to the original LatticeNet architecture.
AB - A deep learning architecture for predicting the normalized pin powers within 2D pressurized water reactors, called LatticeNet, has been developed and shown to be performant for a variety of relevant conditions within a single 2D reflective assembly. However, many neutronics scenarios of interest involve regions composed of multiple assemblies, up to and including full-core scenarios. It is not immediately obvious that scaling LatticeNet up to these full-core scenarios will achieve the same performance as seen in single-assembly scenarios, due to the problem-tailored nature of neural networks. It is also simple to show that the original implementation of LatticeNet does not easily scale up to multi-assembly regions, due to the enormous compute demands of the originally proposed architecture. In this work, we address these issues by first proposing several variants of LatticeNet which address the issue of scaling compute needs, and show the theoretical performance benefits gained from these architectures. We then evaluate the actual benefit of the proposed variants on multi-assembly regions containing roughly the same variation outlined in the original paper proposing LatticeNet. We show that the proposed architecture changes do not result in significantly increased error, and that these changes result in much more manageable training times relative to the original LatticeNet architecture.
UR - http://www.scopus.com/inward/record.url?scp=85124156693&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124156693&partnerID=8YFLogxK
U2 - 10.1016/j.pnucene.2022.104134
DO - 10.1016/j.pnucene.2022.104134
M3 - Article
AN - SCOPUS:85124156693
SN - 0149-1970
VL - 146
JO - Progress in Nuclear Energy
JF - Progress in Nuclear Energy
M1 - 104134
ER -