Extracting Decision Trees from Diagnostic Bayesian Networks to Guide Test Selection



Published Oct 10, 2010
Scott Wahl, John W. Sheppard

Abstract

In this paper, we present a comparison of five different approaches to extracting decision trees from diagnostic Bayesian networks, including an approach based on the dependency structure of the network itself. With this approach, the attributes used for branching in the decision tree are selected by a weighted information gain metric computed from an associated D-matrix. Using these trees, tests are recommended for setting evidence within the diagnostic Bayesian networks for use in a PHM application. We hypothesized that this approach would yield effective decision trees and test selection and greatly reduce the amount of evidence required to obtain accurate classification with the associated Bayesian networks. The approach is compared against three alternative methods for creating decision trees from probabilistic networks: ID3 applied to a dataset forward sampled from the network, KL-divergence, and maximum expected utility. In addition, the effects of using χ2 statistics and probability measures for pre-pruning are examined. The results of our comparison indicate that our approach provides compact decision trees that lead to high-accuracy classification with the Bayesian networks when compared to trees of similar size generated by the other methods, thus supporting our hypothesis.
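
The branching criterion described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration of weighted information gain over a D-matrix, not the authors' implementation: it assumes a 0/1 D-matrix with one row per diagnosis and one column per test, and diagnosis priors taken from the Bayesian network; all names (weighted_entropy, best_test, etc.) are illustrative.

```python
# Sketch (assumed, not from the paper): pick the branching test for a
# diagnostic decision tree by weighted information gain over a D-matrix.
# d_matrix[i][j] == 1 means test j indicts diagnosis i; priors[i] is the
# diagnosis prior, assumed to come from the Bayesian network.
import math

def weighted_entropy(weights):
    """Entropy of a diagnosis set, with each diagnosis weighted by its prior."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

def weighted_information_gain(d_matrix, priors, diagnoses, test):
    """Gain from splitting the candidate diagnoses on one test's outcome."""
    parent = weighted_entropy([priors[d] for d in diagnoses])
    passes = [d for d in diagnoses if d_matrix[d][test] == 0]  # test does not indict d
    fails = [d for d in diagnoses if d_matrix[d][test] == 1]   # test indicts d
    total = sum(priors[d] for d in diagnoses)
    gain = parent
    for side in (passes, fails):
        w = sum(priors[d] for d in side)
        if total > 0:
            gain -= (w / total) * weighted_entropy([priors[d] for d in side])
    return gain

def best_test(d_matrix, priors, diagnoses, tests):
    """Tree-branching step: choose the test with maximal weighted information gain."""
    return max(tests, key=lambda t: weighted_information_gain(d_matrix, priors, diagnoses, t))

# Toy example: three diagnoses, two tests.
d_matrix = [[1, 0],
            [0, 1],
            [1, 1]]
priors = [0.5, 0.3, 0.2]
print(best_test(d_matrix, priors, diagnoses=[0, 1, 2], tests=[0, 1]))
```

Recursing on each outcome's surviving diagnosis set, until a stopping rule (e.g., the pre-pruning tests mentioned above) fires, yields the test-selection tree.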

How to Cite

Wahl, S. ., & W. Sheppard, J. . (2010). Extracting Decision Trees from Diagnostic Bayesian Networks to Guide Test Selection. Annual Conference of the PHM Society, 2(1). https://doi.org/10.36001/phmconf.2010.v2i1.1909


Keywords

PHM

References
Casey, R. G., & Nagy, G. (1984). Decision tree design using a probabilistic model. IEEE Transactions on Information Theory, 30, 93-99.

Craven., M. W., & Shavlik, J. W. (1996). Extracting tree- structured representations of trained networks. In Advances in neural information processing systems (Vol. 8, pp. 24–30).

Frey, L., Tsamardinos, I., Aliferis, C. F., & Statnikov, A. (2003). Identifying Markov blankets with decision tree induction. In Proceedings of the IEEE International Conference on Data Mining (ICDM '03) (pp. 59–66). IEEE Computer Society Press.

Heckerman, D., Breese, J. S., & Rommelse, K. (1995). Decision-theoretic troubleshooting. Communications of the ACM, 38, 49–57.

Heckerman, D., Geiger, D., & Chickering, D. M. (1995). Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3), 197–243.

Hogg, R. V., & Craig, A. T. (1978). Introduction to mathematical statistics (4th ed.). Macmillan Publishing Co.

Ide, J. S., Cozman, F. G., & Ramos, F. T. (2004). Generating random Bayesian networks with constraints on induced width. In Proceedings of the 16th European Conference on Artificial Intelligence (pp. 323–327).

Jensen, F. V., Kjærulff, U., Kristiansen, B., Langseth, H., Skaanning, C., Vomlel, J., et al. (2001). The SACSO methodology for troubleshooting complex systems. AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 15(4), 321–333.

Jordan, M. I. (1994). A statistical approach to decision tree modeling. In M. Warmuth (Ed.), Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (pp. 13–20). ACM Press.

Kim, Y. gyun, & Valtorta, M. (1995). On the detection of conflicts in diagnostic bayesian networks using abstraction. In Uncertainty in artificial intelligence: Proceedings of the eleventh conference (pp. 362–367). Morgan-Kaufmann.

Koller, D., & Friedman, N. (2009). Probabilistic graphical models. The MIT Press.

Liang, H., Zhang, H., & Yan, Y. (2006). Decision trees for probability estimation: An empirical study. Tools with Artificial Intelligence, IEEE International Conference on, 0, 756-764.

Martin, J. K. (1997). An exact probability metric for decision tree splitting and stopping. Machine Learning, 28, 257–291.

Mengshoel, O. J., Wilkins, D. C., & Roth, D. (2006). Controlled generation of hard and easy Bayesian networks: Impact on maximal clique size in tree clustering. Artificial Intelligence, 170(16–17), 1137–1174.

Murthy, S. K. (1997). Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2, 345–389.

Mussi, S. (2004). Putting value of information theory into practice: A methodology for building sequential decision support systems. Expert Systems, 21(2), 92–103.

Pattipati, K. R., & Alexandridis, M. G. (1990). Application of heuristic search and information theory to sequential fault diagnosis. IEEE Transactions on Systems, Man and Cybernetics, 20(44), 872–887.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. Morgan Kaufmann.

Przytula, K. W., & Milford, R. (2005). An efficient framework for the conversion of fault trees to diagnostic bayesian network models. In Proceedings of the IEEE aerospace conference (pp. 1–14).

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1(1), 81–106.

Shwe, M., Middleton, B., Heckerman, D., Henrion, M., Horvitz, E., Lehmann, H., et al. (1991). Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base: I. The probabilistic model and inference algorithms. Methods of Information in Medicine, 30(4), 241–255.

Simpson, W. R., & Sheppard, J. W. (1994). System test and diagnosis. Norwell, MA: Kluwer Academic Publishers.

Skaanning, C., Jensen, F. V., & Kjærulff, U. (2000). Printer troubleshooting using Bayesian networks. In IEA/AIE '00: Proceedings of the 13th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (pp. 367–379).

SMILE reasoning engine. (2010). University of Pittsburgh Decision Systems Laboratory. (http://genie.sis.pitt.edu/)

Vomlel, J. (2003). Two applications of Bayesian networks.

Zheng, A. X., Rish, I., & Beygelzimer, A. (2005). Efficient test selection in active diagnosis via entropy approximation. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) (pp. 675–682).

Zhou, Y., Zhang, T., & Chen, Z. (2006). Applying bayesian approach to decision tree. In Proceedings of the international conference on intelligent computing (pp. 290–295).
Section
Technical Research Papers