Because nuclear reactor internals are typically submerged in water, direct manual inspection is infeasible due to the high-temperature, high-radiation environment. As a result, remote visual inspection is widely used to inspect reactor pressure vessel internals. Typically, operators use a robotic arm to record videos of the underwater reactor surfaces, and engineers review the footage to identify any cracks.

However, a recent study by researchers at Purdue University in the US state of Indiana and the US Electric Power Research Institute (EPRI) has identified a need to improve the reliability of the process.

Mohammad Jahanshahi, an assistant professor in Purdue University’s Lyles School of Civil Engineering, who led the research, says current inspection practices are “time consuming, tedious and subjective” because they involve an operator manually locating cracks in metallic surfaces. He says that improved data processing methods using image analysis can significantly reduce the potential for human error associated with remote visual data review.

New approach

Purdue University’s new system, CRAQ (crack recognition and quantification), uses advanced algorithms and machine learning to detect cracks based on changes in the texture of steel surfaces.
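
The article does not detail CRAQ’s feature set or classifier, but a minimal sketch of texture-based crack detection, assuming local binary pattern (LBP) histograms and an off-the-shelf support vector machine purely for illustration (these are our assumptions, not the researchers’ published design), could look like this:

# Illustrative sketch only, not Purdue's CRAQ code: texture-based patch classification
# using local binary pattern histograms and a support vector machine (assumed here).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def texture_features(patch, n_points=24, radius=3):
    """Histogram of uniform local binary patterns for a greyscale image patch."""
    lbp = local_binary_pattern(patch, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

clf = SVC(probability=True)
# clf.fit([texture_features(p) for p in labelled_patches], labels)  # 1 = crack, 0 = weld/scratch/grind mark

def crack_probability(patch):
    """Probability that a patch's texture corresponds to a crack, once the classifier is trained."""
    return clf.predict_proba([texture_features(patch)])[0, 1]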

Other automated crack-detection systems under development are designed to process single images and often fail to detect cracks in metallic surfaces, because the cracks are usually small, have low contrast and are difficult to distinguish from welds, scratches and grind marks.

The CRAQ system identifies cracks using a method called Bayesian data fusion, which tracks candidate cracks across successive video frames and combines the evidence from multiple frames to decide whether a crack is really present.
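
The exact fusion scheme is not spelled out in the article. As a rough illustration only, assuming each frame yields an independent per-region crack probability computed against a common prior (our simplification, not a description of Purdue’s method), a naive Bayesian update across frames could look like this:

# Illustrative naive-Bayes fusion of per-frame crack evidence; not the CRAQ algorithm itself.
import numpy as np

def fuse_frame_probabilities(frame_probs, prior=0.01):
    """Combine per-frame crack probabilities for one tracked region into a posterior.

    Assumes each probability was produced with the same prior and that frames are
    conditionally independent given the true state -- a simplification for illustration.
    """
    prior_log_odds = np.log(prior / (1.0 - prior))
    log_odds = prior_log_odds
    for p in frame_probs:
        p = min(max(p, 1e-6), 1.0 - 1e-6)                    # avoid log(0)
        log_odds += np.log(p / (1.0 - p)) - prior_log_odds   # add each frame's likelihood ratio
    return 1.0 / (1.0 + np.exp(-log_odds))

# Weak single-frame evidence accumulates across the video:
print(fuse_frame_probabilities([0.3, 0.4, 0.5, 0.6]))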

The team tested the CRAQ technique using video footage of underwater metal components in similar conditions to those found in nuclear reactors, featuring a mixture of cracks, welds, scratches and grind marks.

Essentially, CRAQ assesses whether the detected ‘cracks’ are real, outlining them with colour-coded boxes that correspond to confidence levels: a red box outline means the algorithm has assigned a high confidence level to the crack, an orange box means lower confidence, and a yellow box lower still. The process takes about a minute, and a technician can follow up with manual inspection to confirm the presence of a crack.
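
For illustration only, mapping a fused confidence score to a coloured bounding box could be done with OpenCV’s drawing routines; the thresholds below are assumed for the example and are not reported by the researchers:

# Illustrative only: draw colour-coded boxes around detections with OpenCV.
# The confidence thresholds are assumed for this example; the article does not give them.
import cv2

def draw_crack_boxes(image_bgr, detections):
    """detections: list of ((x, y, w, h), confidence) pairs in pixel coordinates."""
    for (x, y, w, h), conf in detections:
        if conf >= 0.9:
            colour = (0, 0, 255)      # red: high confidence (OpenCV uses BGR)
        elif conf >= 0.7:
            colour = (0, 165, 255)    # orange: lower confidence
        else:
            colour = (0, 255, 255)    # yellow: lower still
        cv2.rectangle(image_bgr, (x, y), (x + w, y + h), colour, 2)
    return image_bgr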

Jahanshahi is working on a second version of the software, developing deep learning algorithms to detect cracks. One characteristic of deep learning is its need for huge data sets. The university is planning to collaborate with EPRI to gather more videos and is looking for nuclear plants to share footage. “We are optimistic that within the next few months we will have a system that can be used in a nuclear environment,” Jahanshahi told NEI.


For more information see ‘A Texture-based Video Processing Methodology Using Bayesian Data Fusion for Autonomous Crack Detection on Metallic Surfaces’ in Computer-Aided Civil and Infrastructure Engineering.