
AI Case Study

NASA successfully researches the detection of anomalies in rocket propulsion using machine learning

NASA investigated four different unsupervised learning systems for anomaly detection using data from its Space Shuttle Main Engine and a rocket engine test stand. The systems successfully detected a major system failure along with several sensor failures and additional anomalies in testing.

Industry

Industrials

Aerospace And Defence

Project Overview

NASA explored four different unsupervised anomaly detection machine learning methods: Orca, a nearest-neighbours approach; the commercial product GritBot; the clustering-based Inductive Monitoring System (IMS); and a one-class support vector machine (SVM). "We used two testbeds to test the anomaly detection algorithms: the SSME [Space Shuttle Main Engine] and a rocket engine test stand at NASA Stennis Space Center (SSC)."
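The paper does not include its code, so the following is only a minimal sketch of how the last of those methods, a one-class SVM, can be trained on nominal multivariate sensor data and then used to flag off-nominal samples. The synthetic channel values and the nu/gamma settings are illustrative assumptions, not NASA's configuration.

```python
# Minimal sketch: one-class SVM for unsupervised anomaly detection on
# multivariate engine sensor data. The synthetic "nominal" data and the
# nu/gamma settings are illustrative only, not NASA's configuration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Pretend training data: 1000 time steps of 4 nominal sensor channels
# (e.g. pressures, temperatures) taken from healthy runs only.
X_train = rng.normal(loc=[100.0, 350.0, 2.5, 0.8],
                     scale=[2.0, 5.0, 0.05, 0.02],
                     size=(1000, 4))

# New run to monitor, with an anomaly injected into the last sample.
X_new = rng.normal(loc=[100.0, 350.0, 2.5, 0.8],
                   scale=[2.0, 5.0, 0.05, 0.02],
                   size=(50, 4))
X_new[-1] = [100.0, 420.0, 2.5, 0.8]   # temperature spike

scaler = StandardScaler().fit(X_train)
model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
model.fit(scaler.transform(X_train))

# predict() returns +1 for points inside the learned boundary, -1 outside.
flags = model.predict(scaler.transform(X_new))
print("anomalous time steps:", np.where(flags == -1)[0])
```

Because the model is fit only on nominal data, no labelled anomaly examples are required, which is the property that motivated the unsupervised approach in this study.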

Reported Results

"In our tests, the four algorithms successfully detected one major system failure, and several sensor failures. They also detected some other anomalies that are not considered to be failures.

Technology

"Although all
of these anomalies were either previously known or minor, we believe that they demonstrate that the algorithms have
the ability to detect the kind of unusual phenomena in the data that correspond to significant anomalies. We have
also demonstrated that these algorithms have the potential to process real rocket propulsion sensor data in real time.
An important point to make is that although some anomalies were detected by multiple algorithms, other anomalies
were only detected by one algorithm out of the four. The reason for the difference is that the different algorithms use
different definitions of an anomaly. Because of these differences, it can be useful to run multiple anomaly detection
algorithms on a data set. In deciding which algorithms to use for a particular data set, one important consideration is
the ability of the algorithm to handle discrete data and the presence and importance of discrete data in the data set.
Orca and GritBot can handle discrete data explicitly, but IMS and one-class SVMs cannot. A second consideration
is the ability of the algorithms to explain the anomalies. Orca, IMS, and GritBot all provide some sort of explanation
of each anomaly in terms of the variables, but the one-class SVM algorithm does not (although it might be possible
to add such a capability to it). A third consideration is that while some algorithms (such as IMS) learn a model that
is guaranteed to cover all of the training data, others (including Orca, GritBot, and one-class SVMs) learn a model
that may only cover the vast majority of the training data."
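The point that different anomaly definitions flag different events can be illustrated with a small sketch that runs two detectors over the same data. Orca and IMS are not reproduced here; a k-nearest-neighbour distance score stands in for an Orca-style detector and a distance-to-cluster-centre score for an IMS-style clustering monitor, with arbitrary thresholds.

```python
# Sketch: combining detectors with different definitions of "anomaly".
# Stand-ins only: kNN distance ~ nearest-neighbour (Orca-style),
# distance to nearest k-means centre ~ clustering (IMS-style).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 4))          # nominal sensor vectors
X_new = np.vstack([rng.normal(size=(49, 4)),
                   [[6.0, 0.0, 0.0, 0.0]]])   # one injected outlier

# Nearest-neighbour definition: anomalous if far from its k closest
# nominal points.
knn = NearestNeighbors(n_neighbors=5).fit(X_train)
knn_score = knn.kneighbors(X_new)[0].mean(axis=1)
knn_baseline = knn.kneighbors(X_train)[0].mean(axis=1)

# Clustering definition: anomalous if far from every cluster centre
# learned from the nominal data.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)
def centre_dist(X):
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    return d.min(axis=1)

# Flag points whose score exceeds the 99th percentile of the nominal
# scores; a given anomaly may trip one detector but not the other.
knn_flag = knn_score > np.percentile(knn_baseline, 99)
km_flag = centre_dist(X_new) > np.percentile(centre_dist(X_train), 99)
print("kNN flags:", np.where(knn_flag)[0])
print("clustering flags:", np.where(km_flag)[0])
```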

Function

Operations

Field Services

Background

"The ability to detect anomalies in sensor data from a complex engineered system such as a spacecraft is important for at least three reasons. First, detecting anomalies in near-real time during flight can be helpful in making crucial decisions such as whether to abort the launch of a spacecraft prior to reaching the intended altitude. Second, for a reusable spacecraft such as the Space Shuttle, detecting anomalies in recorded sensor data after a flight can help to determine what maintenance is or is not needed before the next flight. Third, the detection of recurring anomalies in historical data covering a series of flights can produce engineering knowledge that can lead to design improvements. The current approach to detecting anomalies in spacecraft sensor data is to use large numbers of human experts.

Flight controllers watch the data in near-real time during each flight. Engineers study the data after each flight. These experts are aided by limit checks that signal when a particular variable goes outside of a predetermined range."
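The limit checks mentioned above amount to per-variable range tests. A minimal sketch, with hypothetical channel names and limits, could look like this:

```python
# Sketch of a conventional limit check: flag any channel that leaves its
# predetermined range. Channel names and limits are hypothetical.
LIMITS = {"chamber_pressure": (95.0, 105.0),   # arbitrary units
          "turbine_temp": (330.0, 370.0)}

def limit_check(sample: dict) -> list:
    """Return the list of channels that violated their limits."""
    return [name for name, (lo, hi) in LIMITS.items()
            if not lo <= sample[name] <= hi]

print(limit_check({"chamber_pressure": 101.2, "turbine_temp": 410.0}))
# -> ['turbine_temp']
```

Unlike the learned models above, such checks only catch single-variable excursions past fixed thresholds, which is part of the motivation for the machine learning approaches studied here.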

Benefits

Data

"We have been using historical data from the Space Shuttle Main Engine (SSME) [14] and from a rocket engine test stand to develop and test algorithms that we hope will be useful for future launch systems such as Ares I [15] and Ares V [16]. For both of these testbeds, the number of examples of anomalies available in historical data is fairly small. Fortunately, the number of examples of anomalies available in real launch systems is also too low for effective use of supervised anomaly detection algorithms. We therefore decided to use unsupervised anomaly detection algorithms, since they do not require labeled examples of anomalies."
