Before scientists can effectively capture and deploy fusion energy, they must learn to predict major disruptions that can halt fusion reactions and damage the walls of doughnut-shaped fusion devices called tokamaks. Timely prediction of disruptions, the sudden loss of control of the hot, charged plasma that fuels the reactions, will be vital to triggering steps to avoid or mitigate such large-scale events.
Today, researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University are employing artificial intelligence to improve predictive capability. Researchers led by William Tang, a PPPL physicist and a lecturer with the rank and title of professor at Princeton University, are developing the code for predictions for ITER, the international experiment under construction in France to demonstrate the practicality of fusion energy.
Form of “deep learning”
The new predictive software, called the Fusion Recurrent Neural Network (FRNN) code, is a form of “deep learning,” a newer and more powerful version of machine-learning software, which is itself an application of artificial intelligence. “Deep learning represents an exciting new avenue toward the prediction of disruptions,” Tang said. “This capability can now handle multi-dimensional data.”
FRNN is a deep-learning architecture built on recurrent neural networks, which are well suited to analyzing sequential data with long-range patterns. Members of the PPPL and Princeton University machine-learning team are the first to systematically apply a deep-learning approach to the problem of disruption forecasting in tokamak fusion plasmas.
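The core idea of a recurrent network is that a hidden state is carried forward across time steps, so the output at the end of a sequence reflects its entire history. The toy sketch below illustrates this with a single hand-set recurrent cell scanning a one-dimensional plasma signal; the weights and signals are arbitrary placeholders for illustration, not FRNN's trained model, which processes many diagnostics at once.

```python
import math

def rnn_disruption_score(signal, w_in=0.8, w_rec=0.5, w_out=1.5):
    """Toy recurrent cell: scans a time series of one plasma diagnostic
    and emits a disruption-risk score in (0, 1).
    All weights are arbitrary placeholders, not trained values."""
    h = 0.0
    for x in signal:
        h = math.tanh(w_in * x + w_rec * h)   # hidden state carried across time steps
    return 1.0 / (1.0 + math.exp(-w_out * h)) # sigmoid read-out of final state

quiet = [0.1, 0.1, 0.2, 0.1]       # hypothetical stable plasma signal
ramp = [0.1, 0.5, 1.0, 2.0, 4.0]   # hypothetical rapidly growing instability
print(rnn_disruption_score(quiet), rnn_disruption_score(ramp))
```

Because the ramp drives the hidden state toward saturation while the quiet signal barely moves it, the ramp sequence yields the higher risk score; in a real system the weights would be learned from thousands of archived shots.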
Chief architect of FRNN is Julian Kates-Harbeck, a graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow. Drawing upon expertise gained while earning a master’s degree in computer science at Stanford University, he has led the building of the FRNN software.
More accurate predictions
Using this approach, the team has demonstrated the ability to predict disruptive events more accurately than previous methods have done. By drawing on the huge database at the Joint European Torus (JET) facility in the United Kingdom, the largest and most powerful tokamak in operation, the researchers have significantly improved predictions of disruptions and reduced the number of false-positive alarms. EUROfusion, the European Consortium for the Development of Fusion Energy, manages JET research.
The team now aims to reach the challenging goals that ITER will require. These include producing 95 percent correct predictions when disruptions occur, while providing fewer than 3 percent false alarms when there are no disruptions. “On the test data sets examined, the FRNN has improved the curve for predicting true positives while reducing false positives,” said Eliot Feibush, a computational scientist at PPPL, referring to what is called the “Receiver Operating Characteristic” curve that is commonly used to measure machine learning accuracy. “We are working on bringing in more training data to do even better.”
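The two ITER targets quoted above correspond to the two axes of the Receiver Operating Characteristic curve: the true-positive rate (disruptions correctly predicted) and the false-alarm rate (clean shots wrongly flagged). A minimal sketch of how those rates are computed at one alarm threshold, using made-up scores and labels:

```python
def tpr_fpr(scores, labels, threshold):
    """Confusion-matrix rates at one alarm threshold.
    labels: 1 = shot ended in a disruption, 0 = clean shot."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp / (tp + fn), fp / (fp + tn)  # true-positive rate, false-alarm rate

scores = [0.95, 0.80, 0.90, 0.30, 0.10, 0.40]  # hypothetical model outputs
labels = [1, 1, 1, 0, 0, 0]                    # hypothetical shot outcomes
print(tpr_fpr(scores, labels, 0.5))  # → (1.0, 0.0)
```

Sweeping the threshold from 0 to 1 traces out the full ROC curve; ITER's goals amount to finding an operating point with a true-positive rate of at least 0.95 and a false-alarm rate below 0.03.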
The process is highly demanding. “Training deep neural networks is a computationally intensive task that requires engagement of high-performance computing hardware,” said Alexey Svyatkovskiy, a Princeton University big data researcher. “That is why a large part of what we do is developing and distributing new algorithms across many processors to achieve highly efficient parallel computing. Such computing will handle the increasing size of problems drawn from the disruption-relevant database from JET and other tokamaks.”
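A common way to distribute training across many processors, and one standard approach for the kind of parallelism described here (the article does not specify FRNN's exact scheme), is data parallelism: each worker computes gradients on its own shard of the shots, and the gradients are averaged before every weight update. A minimal sketch with made-up gradient values:

```python
def average_gradients(worker_grads):
    """Data-parallel step: each worker holds gradients computed on its own
    shard of the data; the averaged gradient is applied on every worker,
    as if the whole batch had been processed on one machine."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n
            for i in range(len(worker_grads[0]))]

grads = [[0.25, -0.5], [0.5, 0.0], [0.75, -0.25]]  # three workers, two parameters
print(average_gradients(grads))  # → [0.5, -0.25]
```

In practice this averaging step is performed by a collective communication primitive (an all-reduce) rather than by gathering gradients to one process, which is what keeps the scheme efficient at large processor counts.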
The deep learning code runs on graphics processing units (GPUs), which can run thousands of copies of a program at once, far more than older central processing units (CPUs) can. Tests performed on modern GPU clusters, and on world-class machines such as Titan, currently the fastest and most powerful U.S. supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at Oak Ridge National Laboratory, have demonstrated excellent linear scaling. Such scaling reduces the computational run time in direct proportion to the number of GPUs used, a major requirement for efficient parallel processing.
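Linear scaling can be quantified as parallel efficiency: the measured speedup divided by the ideal speedup, where 1.0 means run time fell in exact proportion to the GPU count. A short sketch with hypothetical timings (not measurements from FRNN's actual runs):

```python
def scaling_efficiency(t_single, t_parallel, n_gpus):
    """Fraction of ideal linear speedup achieved.
    1.0 means run time fell in direct proportion to the GPU count."""
    speedup = t_single / t_parallel
    return speedup / n_gpus

# hypothetical timings: 64 GPUs cut a 1600-second training epoch to 26 seconds
print(scaling_efficiency(1600.0, 26.0, 64))
```

An efficiency close to 1.0, sustained as GPUs are added, is what "excellent linear scaling" means in practice; efficiency typically degrades once communication between GPUs starts to dominate the computation.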
Princeton’s Tiger cluster
Princeton University’s Tiger cluster of modern GPUs was the first to conduct deep learning tests, using FRNN to demonstrate the improved ability to predict fusion disruptions. The code has since run on Titan and other leading GPU-based supercomputers in the United States, Europe and Asia, and has continued to show excellent scaling with the number of GPUs engaged.
Going forward, the researchers seek to demonstrate that this powerful predictive software can run on tokamaks around the world and eventually on ITER. They also plan to speed up disruption analysis to handle the larger data sets that accumulate prior to the onset of a disruptive event. Support for this project has come to date primarily from Laboratory Directed Research and Development funds provided by PPPL.