Abstract: The explainability of recommendations is a common research topic among researchers and providers
of recommender systems. Numerous approaches and inference types were developed in order to find
explanations for recommendations. For example, we can send users the following recommendation with
an explanation: "Since you recently made a purchase from merchant X, we suggest merchant Y." A
variety of methods can be used to produce the (X, Y) item pairs with this explanation logic. However,
some users might not understand the logical connection between the recommendation Y and the explanation
X. In this study, we validate 23,000 recommendation explanations with the help of 400 crowdworkers.
Additionally, we propose a novel method for evaluating the quality of (X, Y) item-pair explanations
based on crowdworkers' responses. Finally, we evaluate 9 different approaches and report several
notable findings. We hope that, in future research, our method will be expanded upon and further studied for
additional types of explanations and domains.