Robust Fault Detection with One-Class Training
Abstract
Anomaly detection is a critical capability in modern industrial systems, particularly in the energy sector where early fault identification can prevent catastrophic failures, minimize downtime, and reduce maintenance costs. However, the scarcity of labeled fault data in real-world applications makes traditional supervised learning approaches infeasible. This motivates the need for methods trained using only healthy data, a paradigm known as one-class training. One-class approaches are especially relevant for deployment in safety-critical domains such as nuclear power generation, grid monitoring, and process control, where failure data is rare, diverse, and expensive to collect. This study evaluates the performance and generalization capabilities of four data-driven methods trained exclusively on healthy data. The first method uses Principal Component Analysis to reduce data dimensionality and leverages reconstruction error for anomaly scoring. The second approach applies sequence modeling via a Long Short-Term Memory forecasting model, predicting future time steps based on past behavior and flagging sequences that deviate significantly from predicted values. The third is a one-dimensional convolutional autoencoder designed to reconstruct multivariate time-series inputs, with deviations in reconstruction used to identify potential anomalies. The fourth method, termed Deep Center Encoding, employs a neural network encoder trained to map healthy data to a compact region in latent space centered around a learned centroid, with outliers identified based on distance from this center. All methods are evaluated on sensor data from a real, operating nuclear power plant and tested for their ability to detect previously unseen fault distributions. Our results highlight trade-offs in sensitivity and generalization across the approaches, with Deep Center Encoding showing promising robustness to distribution shifts. These findings reinforce the feasibility and importance of one-class training frameworks for generalizable, fault-agnostic condition monitoring in industrial environments, supporting broader efforts in reliable artificial intelligence and predictive maintenance.
Keywords: Deep Center Encoding, Anomaly Detection, LSTM, Autoencoder, PCA, Condition-Based Maintenance
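To make the fourth approach concrete, the sketch below illustrates the general idea behind Deep Center Encoding as it is described in the abstract: an encoder is fit on healthy data only so that its embeddings cluster around a fixed latent centroid, and new samples are scored by their distance from that centroid. The network architecture, layer sizes, hyperparameters, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Deep Center Encoding idea: train an encoder on
# healthy data so embeddings concentrate around a fixed centroid, then
# score new samples by squared distance to that centroid.
# All sizes and hyperparameters below are placeholders (assumptions).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

def train_center_encoder(healthy: torch.Tensor, epochs: int = 50):
    """healthy: (n_samples, n_features) tensor of healthy sensor readings."""
    enc = Encoder(healthy.shape[1])
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    with torch.no_grad():
        center = enc(healthy).mean(dim=0)  # fix the centroid from initial embeddings
    for _ in range(epochs):
        opt.zero_grad()
        z = enc(healthy)
        loss = ((z - center) ** 2).sum(dim=1).mean()  # pull healthy data toward the center
        loss.backward()
        opt.step()
    return enc, center

def anomaly_score(enc, center, x: torch.Tensor) -> torch.Tensor:
    """Squared distance from the learned center; larger means more anomalous."""
    with torch.no_grad():
        return ((enc(x) - center) ** 2).sum(dim=1)
```

In use, one would compute scores on incoming data and flag samples whose distance exceeds a threshold chosen from the healthy training scores (for example, a high percentile); the PCA and autoencoder methods in the abstract follow the same one-class pattern with reconstruction error in place of centroid distance.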

This work is licensed under a Creative Commons Attribution 3.0 Unported License.