The proliferation of systems based on machine (deep) learning, and the growing attempts to deploy them in critical domains (avionics, healthcare, etc.), inevitably raise questions about the reliability of such solutions. Reliability (robustness) is the principal requirement for deployment in these areas; without it, the path to practical systems is closed. The problem is fundamental to machine learning: any model is trained and evaluated on one set of data, yet in operation it is applied to data it has never seen. How, under these conditions, can one guarantee that the characteristics demonstrated on the training data are preserved (vary only within a bounded range) on arbitrary inputs? There are different approaches to this question. This paper examines formal methods for verifying machine learning systems.
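To illustrate the kind of property such formal methods check, consider local robustness: the requirement that every input within a small perturbation ball around a given point receives the same classification. The sketch below verifies this for a toy two-layer ReLU network using interval bound propagation, one common sound-but-incomplete technique. The network, weights, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an elementwise interval [lo, hi] through x -> W @ x + b.
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case deviation per output coordinate
    return c - r, c + r

def verify_local_robustness(x, eps, W1, b1, W2, b2):
    """Return True if every input within the L-infinity ball of radius
    eps around x provably gets the same class as x (sound, incomplete:
    a False result means "could not verify", not "not robust")."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    pred = np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2)
    # Robust if the lower bound of the predicted logit dominates the
    # upper bound of every other logit over the whole input ball.
    return all(lo[pred] > hi[j] for j in range(len(lo)) if j != pred)
```

For small eps the bounds stay tight and the property is certified; as eps grows, the intervals widen and verification fails, mirroring the trade-off these methods face on real networks.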