difLIME: Enhancing Explainability with a Diffusion-Based LIME Algorithm for Predictive Maintenance
Juan Galán-Páez
Joaquín Borrego-Díaz
Abstract
Predictive maintenance, within the field of Prognostics and Health Management (PHM), aims to identify and anticipate potential issues in equipment before they become serious problems. Deep Learning (DL) models, such as Deep Convolutional Neural Networks (DCNN), Long Short-Term Memory (LSTM) networks, and Transformers, have been widely adopted for this task and have shown great success. However, these models are often considered "black boxes" due to their opaque decision-making processes, making it challenging to explain their outputs to industrial equipment experts. The complexity and vast number of parameters in these models further complicate understanding their predictions.
This paper introduces a novel Explainable AI (XAI) algorithm that extends the well-known Local Interpretable Model-agnostic Explanations (LIME) method. Our approach uses a conditioned probabilistic diffusion model to generate altered samples in the neighborhood of the sample under study. We validate our method on several rotating machinery diagnosis datasets and compare it against state-of-the-art XAI methods, using nine metrics that assess the desirable properties of an XAI method.
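The core mechanism the abstract describes — generating neighbors of a sample and fitting a weighted linear surrogate to the black-box predictions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the conditioned diffusion model is replaced here by a stand-in Gaussian perturbation sampler, and all function and parameter names (`diffusion_lime_explain`, `sample_neighbors`, `kernel_width`) are illustrative.

```python
import numpy as np

def diffusion_lime_explain(x, predict_fn, sample_neighbors,
                           n_samples=200, kernel_width=0.25):
    """LIME-style local surrogate: neighbors come from a generative sampler
    (in difLIME, a conditioned diffusion model; here, any callable)."""
    # 1) Draw perturbed samples in the neighborhood of x.
    X = np.stack([sample_neighbors(x) for _ in range(n_samples)])
    # 2) Query the black-box model on each neighbor.
    y = np.array([predict_fn(z) for z in X])
    # 3) Weight neighbors by proximity to x (exponential kernel).
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d / (kernel_width * np.sqrt(x.size))) ** 2)
    # 4) Fit a weighted affine surrogate; its slopes are the explanation.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature importance (intercept dropped)

# Toy stand-in for the diffusion sampler: small Gaussian jitter around x.
rng = np.random.default_rng(0)
sampler = lambda x: x + 0.1 * rng.standard_normal(x.shape)
# Hypothetical black box: depends on features 0 and 1 only.
model = lambda z: float(2.0 * z[0] - z[1])
imp = diffusion_lime_explain(np.array([1.0, 0.5, 0.0]), model, sampler)
```

In difLIME the sampler would be a diffusion model conditioned on the original signal, so neighbors stay on the data manifold rather than being arbitrary noise — the linear-surrogate step itself is unchanged from standard LIME.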
Explainability, Diffusion Model, Explainable AI, Predictive Maintenance
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140.
Bearing Data Center, Case School of Engineering, Case Western Reserve University. (n.d.). https://engineering.case.edu/bearingdatacenter (Accessed 08-04-2024)
Brito, L. C., Susto, G. A., Brito, J. N., & Duarte, M. A. V. (2023). Fault diagnosis using explainable AI: A transfer learning-based approach for rotating machinery exploiting augmented synthetic data. Expert Systems with Applications, 232, 120860.
Decker, T., Lebacher, M., & Tresp, V. (2023). Does your model think like an engineer? Explainable AI for bearing fault detection with deep learning. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1–5).
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 80–89).
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57.
Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840–6851.
Li, K., Ping, X., Wang, H., Chen, P., & Cao, Y. (2013). Sequential fuzzy diagnosis method for motor roller bearing in variable operating conditions based on vibration analysis. Sensors, 13(6), 8013–8041.
Meng, H., Wagner, C., & Triguero, I. (2023). Explaining time series classifiers through meaningful perturbation and optimisation. Information Sciences, 645, 119334.
Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for activation functions. arXiv preprint arXiv:1710.05941.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144).
Rudin, C., & Radin, J. (2019). Why are we using black box models in ai when we don’t need to? a lesson from an explainable ai competition. Harvard Data Science Review, 1(2), 1–9.
Santos, M. R., Guedes, A., & Sanchez-Gendriz, I. (2024). Shapley additive explanations (SHAP) for efficient feature selection in rolling bearing fault diagnosis. Machine Learning and Knowledge Extraction, 6(1), 316–341.
Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., & Keim, D. A. (2019). Towards a rigorous evaluation of XAI methods on time series. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (pp. 4197–4201).
Schlegel, U., Vo, D. L., Keim, D. A., & Seebacher, D. (2021). TS-MULE: Local interpretable model-agnostic explanations for time series forecast models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 5–14).
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765–4774.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618–626).
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2020). Grad-CAM: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision, 128, 336–359.
Siddiqui, S. A., Mercier, D., Munir, M., Dengel, A., & Ahmed, S. (2019). TSViz: Demystification of deep learning models for time-series analysis. IEEE Access, 7, 67027–67040.
Simonyan, K., Vedaldi, A., & Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Solís-Martín, D., Galán-Páez, J., & Borrego-Díaz, J. (2023). D3A-TS: Denoising-driven data augmentation in time series. arXiv preprint arXiv:2312.05550.
Solís-Martín, D., Galán-Páez, J., & Borrego-Díaz, J. (2023). On the soundness of XAI in prognostics and health management (PHM). Information, 14(5), 256.
Solís-Martín, D., Galán-Páez, J., & Borrego-Díaz, J. (2025). Phmd: An easy data access tool for prognosis and health management datasets. SoftwareX, 29, 102039. doi: https://doi.org/10.1016/j.softx.2025.102039
Vollert, S., Atzmueller, M., & Theissler, A. (2021). Interpretable machine learning: A brief survey from the predictive maintenance perspective. In 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) (pp. 1–8).
Wang, Z., Yan, W., & Oates, T. (2017). Time series classification from scratch with deep neural networks: A strong baseline. In 2017 International Joint Conference on Neural Networks (IJCNN) (pp. 1578–1585).
Zereen, A. N., Das, A., & Uddin, J. (2024). Machine fault diagnosis using audio sensors data and explainable AI techniques - LIME and SHAP. Computers, Materials and Continua, 80(3), 3463–3484.