Explainable multimodal learning for predictive maintenance of steam generators

Published Sep 4, 2023
Duc An Nguyen, Sagar Jose, Thi Phuong Khanh Nguyen, Kamal Medjaher

Abstract

Prognostics and Health Management (PHM) is identified as an important lever for advancing predictive maintenance to ensure the reliability, availability, and safety of industrial systems. However, the efficiency of data-driven PHM approaches depends on the quality and quantity of data. Exploiting multiple data sources can therefore provide additional, useful information beyond what single-modal data offers. For instance, by incorporating multiple data sources, including condition monitoring data, images from cameras, and text from maintenance technicians' reports, multimodal learning can provide a more comprehensive and accurate understanding of the system's health. However, multimodal deep learning models are complex and difficult to interpret. To address this complexity, it is crucial to incorporate explainable artificial intelligence techniques that provide clear and interpretable insights into how the model makes its decisions. In this light, this paper proposes the application of a model-agnostic explanation approach, namely SHAP, to explain the working mechanism of multimodal learning for the prediction of industrial steam generator degradation. In particular, we determine the important features of each data modality and investigate how multimodal learning can overcome the problem of low-quality data in a single modality thanks to the additional information from other modalities.
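To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual code) of how a model-agnostic SHAP explainer can be applied to a multimodal predictor. The toy model, feature shapes, and names such as predict_degradation are assumptions for illustration only: the sketch concatenates features from two modalities (sensor readings and embedded maintenance-report text) and aggregates absolute SHAP values per modality to compare the contribution of each data source.

```python
# Minimal illustrative sketch (NOT the paper's implementation): explaining a
# multimodal degradation predictor with SHAP's model-agnostic KernelExplainer.
# All names, shapes, and the toy model below are assumptions for illustration.
import numpy as np
import shap

rng = np.random.default_rng(0)

# Hypothetical data: 4 condition-monitoring features and a 3-dimensional
# embedding of maintenance-report text, concatenated into one feature matrix.
sensor = rng.normal(size=(200, 4))
text_emb = rng.normal(size=(200, 3))
X = np.hstack([sensor, text_emb])

# Stand-in for a trained multimodal model: any black-box callable mapping
# the concatenated features to a degradation score would work here.
def predict_degradation(x):
    return 0.6 * x[:, 0] - 0.3 * x[:, 5] + 0.1 * x[:, 2]

# KernelExplainer treats the model as a black box; a small background sample
# approximates the expected value of the prediction.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(predict_degradation, background)
shap_values = explainer.shap_values(X[:10])

# Aggregate mean |SHAP| per modality to compare each data source's contribution.
importance = np.abs(shap_values).mean(axis=0)
print("sensor modality importance:", importance[:4].sum())
print("text modality importance:  ", importance[4:].sum())
```

Because the explainer only requires a prediction function, the same per-modality aggregation applies unchanged to any multimodal architecture, which is what makes the model-agnostic approach attractive in this setting.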

Keywords

Explainable AI, SHAP, Multimodal Learning, Predictive Maintenance, Degradation Prediction, Steam Generators

References
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., & Süsstrunk, S. (2010, June). SLIC superpixels. Technical report, EPFL.

Amin, O., Brown, B., Stephen, B., & McArthur, S. (2022). A case-study led investigation of explainable AI (XAI) to support deployment of prognostics in the industry. In PHM Society European Conference (Vol. 7, pp. 9–20).

Efron, B. (1992). Bootstrap methods: Another look at the jackknife. Springer.

Girard, S. (2014). Physical and statistical models for steam generator clogging diagnosis. Springer.

Jabeen, S., Li, X., Amin, M. S., Bourahla, O., Li, S., & Jabbar, A. (2023). A review on methods and applications in multimodal deep learning. ACM Transactions on Multimedia Computing, Communications and Applications, 19(2s), 1–41.

Jiao, Q., & Zhang, S. (2021). A brief survey of word embedding and its recent development. In 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC) (Vol. 5, pp. 1697–1701).

Joshi, G., Walambe, R., & Kotecha, K. (2021). A review on explainability in multimodal deep neural nets. IEEE Access, 9, 59800–59821.

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.

McKinley, T., Somwanshi, M., Bhave, D., & Verma, S. (2020). Identifying NOx sensor failure for predictive maintenance of diesel engines using explainable AI. In PHM Society European Conference (Vol. 5, pp. 11–11).

Nguyen, K. T. P., Medjaher, K., & Tran, D. T. (2023, April). A review of artificial intelligence methods for engineering prognostics and health management with implementation guidelines. Artificial Intelligence Review, 56(4), 3659–3709.

Nor, A. K. M., Pedapati, S. R., Muhammad, M., & Leiva, V. (2022). Abnormality detection and failure prediction using explainable Bayesian deep learning: Methodology and case study with industrial data. Mathematics, 10(4), 554.

O’Shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144).

Sanakkayala, D. C., Varadarajan, V., Kumar, N., Soni, G., Kamat, P., Kumar, S., . . . Kotecha, K. (2022). Explainable AI for bearing fault prognosis using deep learning techniques. Micromachines, 13(9), 1471.

Srinivasan, S., Arjunan, P., Jin, B., Sangiovanni-Vincentelli, A. L., Sultan, Z., & Poolla, K. (2021). Explainable AI for chiller fault-detection systems: Gaining human trust. Computer, 54(10), 60–68.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., . . . Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

Yang, Z., Baraldi, P., & Zio, E. (2021). A multi-branch deep neural network model for failure prognostics based on multimodal data. Journal of Manufacturing Systems, 59, 42–50.

Section
Regular Session Papers