Over the last couple of decades the application of computer simulations in the nuclear energy sector has been steadily increasing, because the industry needs better predictive tools for describing basic physical phenomena. This need is driven by demands for increased safety and for operation beyond the design life.
The latter requires a thorough assessment and management of ageing degradation in structures and components, which in turn calls for higher-fidelity models (in both resolution and physics), deeper knowledge of the underlying physics, parametric studies, and better estimation of margins by reducing uncertainties and optimising operation.
The cost, complexity and difficulty of performing experiments on irradiated materials are additional drivers for using simulation tools. Damage in a material starts with point defects at the atomic scale, while a structural engineer who needs to account for material damage works at the component scale, so these tools must cover a wide range of time and length scales (Figure 1).
High performance computing
Processor design, high-speed memory systems, storage subsystems and hardware designed for parallel computation have all helped to reduce computation times.
Initially, vector processors were used. These were replaced by large shared-memory systems containing a number of processors and, from the 1990s, by distributed-memory systems in which computers were clustered within a fast network – described as a high performance computing (HPC) cluster.
A HPC cluster typically consists of a master/login node (i.e. computer), compute nodes and a storage system, all connected via a fast network. The use of off-the-shelf components has brought down the price of a HPC system considerably. Further advances added multi-core processors within each compute node and graphics accelerators. Consequently, computational power has increased enormously (Figure 2).
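To indicate how such a distributed-memory cluster is used in practice, the short sketch below (not from the work described in this article) spreads a simple calculation over MPI processes using the mpi4py package; the problem, the package and the numbers are illustrative assumptions only.

# Minimal sketch of distributed-memory parallelism on an HPC cluster,
# assuming MPI and the mpi4py package are available.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process (one or more per compute node)
size = comm.Get_size()   # total number of processes across the cluster

# Each process works on its own slice of a larger problem...
n_total = 1_000_000
chunk = n_total // size  # remainder ignored for brevity
local_sum = sum(i * i for i in range(rank * chunk, (rank + 1) * chunk))

# ...and the partial results are combined over the fast interconnect.
global_sum = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of squares over {size} processes: {global_sum}")

Such a script would typically be launched across the compute nodes allocated by the cluster's scheduler, for example with mpirun or an equivalent launcher.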
Small HPC cluster systems, containing more than 500 compute cores, are nowadays relatively inexpensive and can be purchased from a number of vendors.
Computing service
A HPC system is just one of many necessary building blocks in the IT service structure.
IT is central to many of these tasks, as it needs to provide user-oriented support and HPC life-cycle management. Initially this means support in defining a HPC system that meets the users' needs, followed by procurement and commissioning of the system.
IT also provides daily support to the users and their software applications, along with system administration, back-up and archiving services. The choice of software can significantly influence the level of support required. Open-source software is often much more difficult to compile and install, so more support should be planned in. Commercial licences, on the other hand, can cost more than a small or medium HPC system, so maximising HPC system usage is especially important. Transparent HPC usage policies and a clear and efficient computational job submission system need to be put in place.
As needs grow and technology evolves, HPC systems are upgraded and new systems are added. System usage and administration should then be standardised, to minimise and simplify administration and to keep the systems easy to use.
For example, using a different job submission system may require changes to the job submission scripts. Some users can do this themselves, but it takes valuable time from their core work, and IT has to minimise such distractions. Careful planning of system requirements, financial and human resources, and commissioning is essential.
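As a minimal illustration of what such a job submission script involves, the sketch below generates and submits a batch job, assuming a cluster running the widely used Slurm workload manager; the partition, module and solver names are placeholders and would differ between sites, which is exactly why a change of scheduler forces script changes.

# Minimal sketch of generating and submitting a batch job, assuming a
# Slurm-based cluster; partition, module and executable names are placeholders.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=fe_case01
    #SBATCH --partition=compute        # placeholder queue/partition name
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=24:00:00
    module load openmpi                # placeholder environment module
    srun ./my_solver input_case01.dat  # placeholder solver and input file
    """)

with open("case01.sbatch", "w") as f:
    f.write(job_script)

# Hand the job over to the scheduler; only this step and the directives above
# change if the site moves to a different job submission system.
subprocess.run(["sbatch", "case01.sbatch"], check=True)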
Communication between the users and IT is also important, and should be as open as possible. The users often do not understand IT issues and terminology, especially in a large organisation. They want the latest and fastest HPC system available, while issues such as data standardisation, hardware variation between nodes in a cluster, ease of administration, serviceability and up-time are of secondary concern to them. IT is also challenged by the growing size of data sets and by visualisation over high-latency networks. These HPC issues must be addressed by considering user requirements, policies and technology together, in order to obtain the best value.
Fields of application
In nuclear energy HPC is applied in a number of fields, primarily materials science, structural integrity, neutronics and thermal-hydraulics.
Materials science and ageing degradation
Understanding the thermo-mechanical properties of structural and nuclear materials is essential for the safe operation of a nuclear facility. Understanding, predicting and measuring changes in a material involves different approaches: from first-principles calculations at the lowest length and time scales, through Monte Carlo and discrete dislocation dynamics, to finite element models at the component scale.
Results from modelling at smaller scales can be fed into models at the next scale. For example, interatomic potentials calculated using first principles can be used in molecular dynamics simulations. Figure 3 shows the computed intergranular cracking surfaces in 304 stainless steel.
The measured grain structure is recreated within a finite element model to simulate early crack initiation and evolution. Such realism helps in understanding the early crack propagation rate and the effect of the microstructure on it. This is a typical example of a simulation requiring a large number of processors and a large amount of memory.
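As a small illustration of the hand-off between scales mentioned above, the sketch below advances a few atoms in time with a velocity-Verlet integrator, using a simple Lennard-Jones pair potential as a stand-in for a potential fitted from first-principles data. It is not the simulation behind Figure 3; atoms, parameters and units are illustrative assumptions.

# Minimal molecular-dynamics sketch: velocity-Verlet time stepping with a
# Lennard-Jones pair potential standing in for a first-principles-fitted
# interatomic potential. Parameters and units are illustrative only.
import numpy as np

def forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces on each atom."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # Force magnitude from U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f_mag = 24 * epsilon * (2 * (sigma / r)**12 - (sigma / r)**6) / r
            f[i] += f_mag * r_vec / r
            f[j] -= f_mag * r_vec / r
    return f

# A handful of atoms on a line, slightly off their equilibrium spacing
pos = np.array([[0.0, 0, 0], [1.1, 0, 0], [2.05, 0, 0], [3.2, 0, 0]])
vel = np.zeros_like(pos)
dt, mass = 1e-3, 1.0

f = forces(pos)
for step in range(1000):                      # velocity-Verlet integration
    pos += vel * dt + 0.5 * f / mass * dt**2
    f_new = forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new

print("Final positions:\n", pos)

Production molecular dynamics codes follow the same scheme but with millions of atoms, fitted potentials and domain decomposition across many compute nodes, which is where HPC becomes indispensable.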
Structural integrity
At larger scales, the structural integrity of components is also of concern. HPC is often used for finite element modelling of complex structures and for the corresponding parametric studies. A model is first validated against an experiment and then used to assess the effect of specific parameters, reducing the need for expensive testing.
One example is the drop testing of a cask for the transport and storage of spent nuclear fuel and radioactive waste. The cask is dropped from a height of 1m onto a steel bar, with the impact in a region with cooling fins. A model validated against this test can then take the place of further expensive testing.
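A parametric study of this kind is, at its core, a loop over cases run with the validated model. The sketch below shows the pattern, assuming a hypothetical command-line solver fe_solver, a template input file and impact orientation as the example parameter; none of these names come from the actual study.

# Minimal sketch of a parametric study: one validated model, several
# parameter values. Solver name, input template and parameter are hypothetical.
import subprocess

drop_angles_deg = [0, 15, 30, 45]   # example parameter: cask impact orientation

for angle in drop_angles_deg:
    case = f"drop_{angle:02d}deg"
    # Write a per-case input file from a template (details depend on the solver)
    with open("cask_template.inp") as src, open(f"{case}.inp", "w") as dst:
        dst.write(src.read().replace("{ANGLE}", str(angle)))
    # Run the case; on a cluster this line would become a job submission instead
    subprocess.run(["./fe_solver", f"{case}.inp"], check=True)

On a HPC system the individual cases are independent, so they can be submitted as separate jobs and run concurrently.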
Neutronics
Modern Monte Carlo computer codes for neutron transport allow calculation of detailed neutron flux and power distributions in complex geometries with a resolution of ~1mm. They can follow individual particle tracks, scattering and absorption events (Figure 4). The Monte Carlo approach is well suited to parallelisation and can efficiently use thousands of compute cores. Visualisation of the results, however, can be challenging.
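To indicate why the method parallelises so well, the sketch below follows independent neutron histories through a uniform one-dimensional slab with made-up cross-sections; it is not taken from any production code, and because each history needs no data from the others, histories can be spread over thousands of cores.

# Minimal sketch of Monte Carlo particle transport in a 1D slab with
# made-up cross-sections; each history is independent, which is why the
# method scales to thousands of cores.
import random

SIGMA_TOTAL = 1.0      # total macroscopic cross-section (illustrative, 1/cm)
ABSORB_PROB = 0.3      # probability that a collision is an absorption
SLAB_THICKNESS = 5.0   # cm

def transmitted(rng):
    """Follow one neutron history; return True if it leaks through the slab."""
    x, direction = 0.0, 1.0
    while True:
        x += direction * rng.expovariate(SIGMA_TOTAL)  # distance to next collision
        if x >= SLAB_THICKNESS:
            return True                 # leaked out of the far side
        if x <= 0.0:
            return False                # escaped back out of the near side
        if rng.random() < ABSORB_PROB:
            return False                # absorbed in the slab
        direction = 1.0 if rng.random() < 0.5 else -1.0   # scatter (1D isotropic)

histories = 100_000
rng = random.Random(42)
leaked = sum(transmitted(rng) for _ in range(histories))
print(f"Transmission probability ~ {leaked / histories:.4f}")

In a real code the loop over histories is simply divided among the available cores, each with its own random number stream, and the tallies are combined at the end.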
Thermal hydraulics
Thermal-hydraulics simulations can also benefit greatly from HPC. Simulation times can run to months, especially for 3D models. A significant number of compute cores is needed, along with a fast interconnect between the compute nodes, although these simulations typically do not require large amounts of memory. Figure 5 shows a severe accident case with a postulated completely molten reactor core. Modelling is applied to study the feasibility of preventing pressure vessel wall failure by flooding the reactor pit with water; a shield enhances water convection and cooling along the wall.
Figure 6 shows the results of a scale-adaptive simulation of turbulent flow in a horizontal rod bundle with a split-type spacer grid. Further research showed the need to account for secondary flow of the second kind, which develops perpendicular to the main flow along the channel. Depending on whether a spacer grid is used, this secondary flow can be one or two orders of magnitude weaker than the main flow.
About the authors
Dr. Igor Simonovski heads the development of high performance computational resources at the European Commission’s Institute for Energy and Transport.
I. Simonovski (Igor.Simonovski@ec.europa.eu) and S. Clements, European Commission, Joint Research Centre, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten, The Netherlands
B. Žefran and S. Cimerman, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana, Slovenia
The authors would like to thank all the contributors for providing figures of their results.