Supercomputer aids SMR simulation

20 December 2021



Understanding physical behaviour inside an operating nuclear reactor can be enhanced with simulations on a supercomputer, says Jared Sagoff


Supercomputer at the ALCF (Photo credit: Argonne National Laboratory)

Scientists hoping to build new generations of small modular reactors (SMRs) need to be able to design and understand the behaviour of these reactors in simulated environments before they can be constructed. Large-scale, high-resolution models yield information that can drive down the cost of building new, intrinsically safe nuclear reactors.

Scientists at the US Department of Energy’s (DOE) Argonne National Laboratory (ANL) have collaborated to develop a new computer model that allows for the visualisation of a full reactor core at unprecedented resolution. The project, known as ExaSMR, is conducted under the auspices of DOE’s Exascale Computing Project (ECP), and aims to carry out full-core multi-physics simulations on upcoming cutting-edge exascale supercomputers. These include Aurora, which is scheduled to arrive at Argonne in 2022.

An update on the project’s progress was published in April in the journal Nuclear Engineering and Design; the team hopes it will inspire researchers to further integrate high-fidelity numerical simulations into actual engineering designs.

Modelling in more detail

In a nuclear reactor, the whirls and eddies of coolant that flow around the fuel pins play a critical role in determining the reactor’s thermal-hydraulic performance. They also provide nuclear engineers with much-needed information about how best to design future nuclear reactor systems, both for normal operation and for stress tolerance.

A typical light water reactor core is made up of nuclear fuel assemblies, each containing several hundred individual fuel pins, which in turn are made up of fuel pellets. Until now, limitations in raw computing power have constrained models to particular regions of the core. Now, however, the researchers have been able to model all of the individual pins in one of the first-ever full-core nuclear reactor simulations.

“As we advance towards exascale computing, we will see more opportunities to reveal large-scale dynamics of these complex structures in regimes that were previously inaccessible, giving us real information that can reshape how we approach the challenges in reactor designs,” said Argonne nuclear engineer Jun Fang, an author of the study, which was carried out by ExaSMR teams at Argonne and Professor Elia Merzari’s group at Pennsylvania State University.

A key aspect of SMR fuel assembly modelling is the presence of spacer grids. These grids play an important role in pressurised water reactors, such as the SMR under consideration, as they create turbulence structures and enhance the ability of the flow to remove heat from the fuel rods.

Instead of creating a computational grid resolving all the local geometric details, the researchers developed a mathematical mechanism to reproduce the overall impact of these structures on the coolant flow without sacrificing accuracy. In doing so, they were able to scale up the related computational fluid dynamics (CFD) simulations to an entire SMR core for the first time.
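
To make the idea concrete, the sketch below shows, in highly simplified form, how a spacer grid can be represented as a concentrated momentum sink rather than as resolved geometry. This is an illustration only, not the ExaSMR code: the 1D channel model, the loss coefficient K_grid, and all dimensions are hypothetical placeholder values.

```python
import numpy as np

# Illustrative sketch (not the ExaSMR code): a 1D axial coolant-flow model in
# which each spacer grid is replaced by a localized momentum sink, instead of
# meshing the grid geometry itself. All values below are hypothetical.

L = 2.0                  # heated channel length (m), assumed
n = 400                  # number of axial cells
dz = L / n
rho = 750.0              # coolant density (kg/m^3), assumed PWR-like value
u = 3.0                  # bulk axial velocity (m/s), assumed
D_h = 0.012              # hydraulic diameter (m), assumed
f = 0.02                 # Darcy friction factor, assumed constant
K_grid = 0.9             # spacer-grid form-loss coefficient, hypothetical
grid_z = [0.4, 0.9, 1.4, 1.9]   # axial grid positions (m), hypothetical

# Distributed wall-friction pressure gradient (Pa/m), uniform along the channel
dp_friction = (f / D_h) * 0.5 * rho * u**2 * np.ones(n)

# Each spacer grid contributes a concentrated form loss K * (rho u^2 / 2),
# smeared over the single cell that contains it -- the momentum-sink
# stand-in for the unresolved grid geometry.
dp_grids = np.zeros(n)
for zg in grid_z:
    i = min(int(zg / dz), n - 1)
    dp_grids[i] += K_grid * 0.5 * rho * u**2 / dz

total_drop = np.sum((dp_friction + dp_grids) * dz)
print(f"Axial pressure drop: {total_drop / 1e3:.1f} kPa "
      f"({len(grid_z)} spacer grids modelled as momentum sinks)")
```

In a full 3D CFD solve the same idea appears as a source term in the momentum equation; here it simply adds each grid’s form-loss pressure drop to the axial balance.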

“The mechanisms by which the coolant mixes throughout the core remain regular and relatively consistent. This enables us to leverage high-fidelity simulations of the turbulent flows in a section of the core to enhance the accuracy of our core-wide computational approach,” said Argonne principal nuclear engineer Dillon Shaver.
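
Shaver’s point can be illustrated with a toy calibration exercise: a turbulence-resolving simulation of one small core section is used to fit an effective parameter that the cheaper core-wide model then reuses everywhere. The sketch below is purely illustrative; the “high-fidelity” data are synthetic, and the fitted mixing coefficient D_eff is a hypothetical stand-in for whatever quantities the study actually calibrated.

```python
import numpy as np

# Illustrative only: calibrate a coarse core-wide model against a
# high-fidelity simulation of one core section. The "high-fidelity"
# data here are synthetic, and D_eff is a hypothetical effective
# mixing coefficient in a 1D scalar-mixing model.

rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, 50)   # normalized span of one assembly
D_true = 0.15                   # assumed "true" mixing coefficient
t = 0.5                         # snapshot time, arbitrary units

def coarse_profile(D):
    # 1D diffusion of an initial hot streak: analytic Gaussian solution
    return np.exp(-(x - 0.5) ** 2 / (4.0 * D * t)) / np.sqrt(4 * np.pi * D * t)

# Stand-in for a turbulence-resolving result on a small section of the core
hifi = coarse_profile(D_true) + 0.01 * rng.standard_normal(x.size)

# Fit D_eff by a simple least-squares scan over a 1D parameter range
candidates = np.linspace(0.05, 0.3, 200)
errors = [np.sum((coarse_profile(D) - hifi) ** 2) for D in candidates]
D_eff = candidates[int(np.argmin(errors))]
print(f"Calibrated effective mixing coefficient: {D_eff:.3f}")

# D_eff would then parameterize the cheaper core-wide model everywhere,
# exploiting the observation that coolant mixing behaviour is
# relatively consistent across the core.
```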

The technical expertise exhibited by the ExaSMR teams is built upon Argonne’s history of breakthroughs in related research fields such as nuclear engineering and computational sciences. 

Several decades ago, a group of Argonne scientists, led by Paul Fischer, pioneered a CFD flow solver software package called Nek5000, which was transformative because it allowed users to simulate engineering fluid problems with up to one million parallel threads. Recently, Nek5000 has been re-engineered into a new solver called NekRS that uses the power of graphics processing units (GPUs) to increase the computational speed of the model. “Having codes designed for this particular purpose gives us the ability to take full advantage of the raw computing power the supercomputer offers us,” Fang said.

The team’s computations were carried out on supercomputers at the Argonne Leadership Computing Facility (ALCF), Oak Ridge Leadership Computing Facility (OLCF), and Argonne’s Laboratory Computing Resource Center (LCRC). The ALCF and OLCF are DOE Office of Science User Facilities. The research is supported by the Exascale Computing Project, a collaborative effort of DOE’s Office of Science and the National Nuclear Security Administration. 


Jared Sagoff is coordinating writer/editor at Argonne National Laboratory

 


