LLMs as Pre-trained Models for Time-Series Applications in PHM
Abstract
In industrial Prognostics and Health Management (PHM), the scarcity of sufficiently large, high-quality datasets remains a persistent challenge, which limits the practical deployment of machine learning-based approaches. Recent efforts to address this include few-shot learning, domain adaptation, and other data-efficient learning paradigms. However, while the use of pre-trained models has shown great promise in other fields such as natural language processing and computer vision, its application in PHM remains relatively underexplored. Although initial studies have begun to introduce foundation-style models for specific components—such as recent efforts on bearing health diagnostics using transformer-based architectures—their development is still in an early stage compared to the maturity and versatility of large language models (LLMs) in the NLP domain. LLMs continue to advance rapidly, offering generalization capabilities that could be highly beneficial in data-constrained PHM settings. While some preliminary research has explored the use of LLMs as intelligent agents for decision support in PHM workflows, their application as direct learners for time-series sensor data remains rare. In this work, we propose a novel framework that adapts pre-trained LLMs for time-series-based PHM tasks. Our approach involves mapping temporal sensor signals to a tokenized format compatible with transformer-based language models, enabling the application of LLMs as generic sequence learners. Building on recent pioneering concepts such as multimodal LLM-based health management systems and prompt-driven signal encoding, our framework is benchmarked on publicly available industrial datasets under low-data conditions. The results demonstrate that our LLM-based approach not only maintains robust performance in scenarios with limited labeled data but also outperforms traditional models in fault classification accuracy. 
This study contributes a new perspective to the PHM community by highlighting the untapped potential of LLMs as general-purpose, pre-trained models in industrial health monitoring applications. Our findings suggest that incorporating LLMs into PHM workflows can be a powerful and forward-looking strategy to overcome data scarcity and improve adaptability across diverse operational domains.
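The abstract describes mapping temporal sensor signals to a tokenized format that a transformer-based language model can consume, but does not specify the scheme. As a purely illustrative sketch of one common approach (the function name, bin count, and patch width below are hypothetical assumptions, not the authors' actual method), a signal can be z-normalized, uniformly quantized into a discrete vocabulary, and grouped into fixed-length patches analogous to sub-word tokens:

```python
import numpy as np

def tokenize_signal(signal, n_bins=256, window=16):
    """Discretize a 1-D sensor signal into integer token patches.

    Illustrative scheme: z-normalize, clip to +/-3 sigma,
    uniformly quantize into n_bins levels, then group consecutive
    samples into fixed-length windows ("patches") so the sequence
    resembles the token stream an LLM expects.
    """
    x = np.asarray(signal, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)   # z-normalize the signal
    x = np.clip(x, -3.0, 3.0)               # bound outliers to +/-3 sigma
    # Map the range [-3, 3] onto integer token ids in [0, n_bins - 1].
    tokens = np.floor((x + 3.0) / 6.0 * (n_bins - 1)).astype(int)
    # Group tokens into fixed-length patches, dropping any ragged tail.
    n = (len(tokens) // window) * window
    return tokens[:n].reshape(-1, window)

# Example: a 100-sample vibration-like signal yields 6 patches of 16 tokens.
patches = tokenize_signal(np.sin(np.linspace(0, 10, 100)))
```

Each resulting patch row could then be embedded and fed to a pre-trained transformer as a generic sequence, which is the general idea behind treating an LLM as a sequence learner for sensor data.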
Keywords: PHM, LLMs, Pre-trained Models, Time-series

This work is licensed under a Creative Commons Attribution 3.0 Unported License.
The Prognostic and Health Management Society advocates open access to scientific data and uses a Creative Commons license for publishing and distributing its papers. A Creative Commons license does not relinquish the author’s copyright; rather, it allows them to share some of their rights with any member of the public under certain conditions while enjoying full legal protection. By submitting an article to the International Conference of the Prognostics and Health Management Society, the authors agree to be bound by the associated terms and conditions, including the following:
As the author, you retain the copyright to your Work. By submitting your Work, you grant anybody the right to copy, distribute, and transmit your Work, and to adapt your Work with proper attribution under the terms of the Creative Commons Attribution 3.0 United States license. You assign rights to the Prognostics and Health Management Society to publish and disseminate your Work through electronic and print media if it is accepted for publication. A license note citing the Creative Commons Attribution 3.0 United States License, as shown below, must be placed as a footnote on the first page of the article.
First Author et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.