In applied machine learning tasks, a model is often required not only to achieve good performance metrics but also to be “interpretable”, i.e. people want to understand why the system made a particular decision. This demand may be driven by the serious risks a decision carries. Sometimes the interpretation of a result can reveal new, non-trivial dependencies. In recent years, a number of methods have been developed to explain predictions of models working with traditional data types (tabular data, images), but multivariate time series are not yet well covered by them. We propose a scalable method for explaining the predictions of machine learning models on multivariate time series. We also adapted a method for comparing the quality of interpretations. Results of experiments on medical data (electroencephalograms) are reported.
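To make the task concrete, the sketch below shows a generic perturbation-based (occlusion) explanation for a multivariate time series: each time window of each channel is masked in turn, and importance is measured as the resulting drop in the model's score. This is a standard baseline technique for illustration only, not necessarily the method proposed in the paper; the toy `model` and all parameter names are assumptions.

```python
import numpy as np

def occlusion_importance(model, x, window=20, baseline=0.0):
    """Occlusion saliency for a multivariate time series x of shape (T, C).

    Masks each time window in each channel with `baseline` and records
    the drop in the model's scalar score; a larger drop marks a region
    the prediction depends on more strongly.
    """
    T, C = x.shape
    base_score = model(x)
    importance = np.zeros((T, C))
    for c in range(C):                      # one channel at a time
        for start in range(0, T, window):   # non-overlapping windows
            x_pert = x.copy()
            x_pert[start:start + window, c] = baseline
            importance[start:start + window, c] = base_score - model(x_pert)
    return importance

# Toy example: the "model" only looks at channel 0 over the last 20 steps,
# so only that region should receive non-zero importance.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
model = lambda s: s[-20:, 0].mean()
imp = occlusion_importance(model, x, window=20)
```

In this toy setup, `imp` is zero everywhere except the last window of channel 0, matching the model's actual dependence. For EEG-style data (channels = electrodes), the same scheme highlights which electrodes and time intervals drove a classification, which is the kind of output an interpretation-quality comparison would evaluate.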