The growth of AI-based methodological developments observed across a wide range of practical domains demonstrates the successful application of methods built on machine learning and deep learning models. One of the main obstacles to deploying such models in various applications is the problem of their interpretation, which stems from their complexity. An analysis of the current state of research in the emerging field of explainable artificial intelligence (eXplainable Artificial Intelligence, XAI) confirms the pressing need for interpretable explanations of decisions and actions that allow users to understand model behavior. This article presents a brief overview of the development of the field of explainable artificial intelligence and provides examples of models developed in it.
Keywords: artificial intelligence systems, interpretability, machine learning models, decision making.
Authors