Abstract: This article is devoted to the detection of adversarial attacks on machine learning models. In the most general case, adversarial attacks are deliberate data modifications at one of the stages of the machine learning pipeline, designed either to prevent the machine learning system from operating or, conversely, to achieve a result desired by the attacker. There is also a class of attacks aimed at extracting non-public information from machine learning models; these include model inversion attacks. Such attacks pose a threat to the use of machine learning as a service (MLaaS). Machine learning models accumulate a great deal of redundant information during training, and the possibility of this data being exposed while the model is in use can come as an unpleasant surprise.