Eye movements accompany all our everyday activities and often precede our physical actions (Land et al., 1999; Velichkovsky, 1995), including the actions used to control a computer. For example, we typically fixate a virtual button or a link on the computer screen before approaching it with the cursor and clicking it. It is natural to consider this gaze behavior as a good basis for a human-machine interface that requires no manual actions and is controlled solely with gaze fixations. One may expect such an interface to be useful not only for persons whose ability to use skeletal muscles is impaired but also for healthy people, because bypassing manual activity may make interaction with computers and robots fast and fluent. Such interfaces do exist, but their performance is limited by the difficulty of differentiating gaze behavior related to machine control from ordinary gaze activity (Jacob, 1990; Velichkovsky et al., 1997).

A natural way to overcome this problem is to send the click command through a brain-computer interface (BCI) that recognizes brain activity related to motor imagery. Such hybrid eye-brain-computer interfaces (EBCIs) have indeed been designed (e.g., Pfurtscheller et al., 2010; Zander et al., 2010; Yong et al., 2011), but the click in their operation required additional time on the order of seconds, which evidently contradicted the idea of fluent control. A more advanced approach to developing an EBCI was proposed later, based on the idea of the "passive BCI" (Zander, Kothe, 2011). A passive BCI responds to changes in brain activity that are not elicited intentionally by the user. In offline simulations, a passive BCI could differentiate between gaze fixations used for control and spontaneous gaze fixations of the same duration (Ihme, Zander, 2011; Protzak et al., 2013). These studies demonstrated that a "click" can be executed by a BCI without an additional manual or mental task, using the same procedure as gaze-based control itself, i.e., intentionally prolonged fixations. However, the fixations used in these studies were rather long (1 s), and the gaze control paradigm was quite limited.

We developed a gaze-controlled computer game, EyeLines (Shishkin et al., 2015), and recorded EEG while the participants played it with their gaze alone. Moves in the game were made with fixations exceeding a 500 ms threshold. Hundreds of spontaneous and controlling fixations were collected from each of nine participants. The EEG during controlling, but not spontaneous, fixations showed pronounced negativity over posterior cortical areas. Using different feature extraction strategies and keeping the false alarm rate below 10%, we obtained correct classification rates for controlling fixations from 20% (for simplified feature sets) to 50-60% (for richer feature sets) on test data with 5-fold cross-validation (a minimal sketch of this classification step is given below).

These results provide a firm basis for online EBCI experiments that we plan to carry out in the near future. In these experiments, two fixation thresholds will be used: a 500 ms threshold combined with a positive output of the EEG classifier, and a longer (e.g., 1000 ms) threshold for cases where control was not detected by the EEG classifier. With such a protocol, we expect that the participants may develop a stronger and more stable EEG pattern when attempting to make moves, because this will lead to faster move execution. A sketch of this two-threshold protocol is also given below.
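To make the classification step concrete, the following is a minimal sketch (Python with scikit-learn) of how fixation-locked EEG epochs could be classified with 5-fold cross-validation, with the decision threshold calibrated on training data so that roughly 10% of spontaneous fixations produce false alarms. The data here are synthetic, and the feature choice (per-channel window means), the logistic regression classifier, and all parameters are illustrative assumptions, not the pipeline actually used in Shishkin et al. (2015).

    # Sketch: control (1) vs. spontaneous (0) fixation epochs, 5-fold CV,
    # threshold picked on training data to allow ~10% false alarms.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_samples = 400, 19, 250   # e.g., 500 ms at 500 Hz
    X_raw = rng.standard_normal((n_epochs, n_channels, n_samples))
    y = rng.integers(0, 2, n_epochs)                 # 1 = control, 0 = spontaneous
    X_raw[y == 1, -5:, :] -= 0.3                     # toy "posterior negativity"

    # Simplified features: mean amplitude in five 100 ms windows per channel.
    X = X_raw.reshape(n_epochs, n_channels, 5, 50).mean(axis=3).reshape(n_epochs, -1)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    hits = []
    for train, test in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
        clf.fit(X[train], y[train])
        spont_train = clf.predict_proba(X[train])[:, 1][y[train] == 0]
        thr = np.quantile(spont_train, 0.9)          # ~10% false alarms allowed
        scores = clf.predict_proba(X[test])[:, 1]
        hits.append((scores[y[test] == 1] >= thr).mean())
    print(f"mean hit rate at ~10% false alarm rate: {np.mean(hits):.2f}")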
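The planned two-threshold online protocol can likewise be sketched as a simple polling loop. The callables gaze_on_target and classifier_says_control are hypothetical stand-ins for the eye tracker and the EEG classifier; only the two dwell thresholds follow the values given above.

    # Hedged sketch of the planned protocol: one dwell timer, two thresholds.
    # At 500 ms the EEG classifier is queried once; a positive output triggers
    # the move immediately, otherwise the move is executed at the 1000 ms
    # fallback threshold without EEG confirmation.
    import time

    SHORT_DWELL = 0.5    # s: move here only if the EEG classifier detects control
    LONG_DWELL = 1.0     # s: fallback, move regardless of the classifier

    def run_fixation(gaze_on_target, classifier_says_control, poll=0.02):
        """Return 'eeg_click', 'dwell_click', or None if the fixation broke off."""
        start = time.monotonic()
        asked_eeg = False
        while gaze_on_target():
            dwell = time.monotonic() - start
            if not asked_eeg and dwell >= SHORT_DWELL:
                asked_eeg = True
                if classifier_says_control():    # hypothetical EEG classifier call
                    return "eeg_click"
            if dwell >= LONG_DWELL:
                return "dwell_click"
            time.sleep(poll)
        return None                              # gaze left the target: no move

Under such a scheme, faster moves are available only through the classifier, which is exactly what should encourage participants to produce a stronger and more stable EEG pattern during intended fixations.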
With further improvement of classification algorithms, a practically useful EBCI enabling fluent interaction with computers can be developed. Moreover, we suggest that gaze interaction technology, until now developed mainly by IT engineers, can be further improved by applying relevant knowledge from psychology and cognitive science. In our Gaze Touch approach, vibrotactile feedback is used to reduce the burden on visual attention and to enable fast switching to the next step in sequential fixation-based control. Compared to the standard protocol, the time to proceed to the next control fixation was reduced by 60 ms (Shishkin et al., in prep.). Tactile (or haptic) feedback is very common in everyday tool use, but it can be irrelevant in the case of anthropomorphic and/or autonomous robots, because such robots may be perceived more as partners than as tools. For this case, we proposed the Gaze Talk approach, based on the psychological notion of "joint attention". In this protocol, tactile feedback is absent, and natural mechanisms of intensive gaze-based communication are used instead (Fedorova et al., 2015). Although the Gaze Touch and Gaze Talk protocols have not yet been implemented within the EBCI framework, they appear to possess a number of useful features that can be exploited in this class of interfaces. Together with the fast "brain click" already provided by our EBCI, both protocols can be used to develop systems that will be helpful not only for paralyzed individuals but also for healthy people at work.

This work was supported, in its parts related to designing components of the EBCI, studying the vibrotactile feedback effects, and analyzing "joint attention" based interaction, by a grant from the Russian Science Foundation (RScF Project 14-28-00234).

References

Fedorova A.A., Shishkin S.L., Nuzhdin Y.O., Velichkovsky B.M. (2015) Gaze based robot control: The communicative approach. 7th International IEEE/EMBS Conference on Neural Engineering (NER), 22-24 April 2015, pp. 751-754.
Jacob R.J. (1990) What you look at is what you get: eye movement-based interaction techniques. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 11-18.
Land M., Mennie N., Rusted J. (1999) The roles of vision and eye movements in the control of activities of daily living. Perception, 28, pp. 1311-1328.
Pfurtscheller G., Allison B.Z., Brunner C., Bauernfeind G., Solis-Escalante T., Scherer R., Zander T.O., Mueller-Putz G., Neuper C., Birbaumer N. (2010) The hybrid BCI. Frontiers in Neuroscience, 4, article 42.
Protzak J., Ihme K., Zander T.O. (2013) A passive brain-computer interface for supporting gaze-based human-machine interaction. Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. Springer, Berlin, pp. 662-671.
Shishkin S.L., Nuzhdin Y.O., Svirin E.P., Fedorova A.A., Slobodskoy-Plusnin Y.Y., Trofimov A.G., Vasilyevskaia A.M., Velichkovsky B.M. (2015) Toward a fluent eye-brain-computer interface: EEG negativity marks visual fixations used to control a game. 7th International IEEE/EMBS Conference on Neural Engineering (NER), 22-24 April 2015, paper No. 055 (Late Breaking Research).
Velichkovsky B.M. (1995) Communicating attention: Gaze position transfer in cooperative problem solving. Pragmatics and Cognition, 3(2), pp. 199-222.
Velichkovsky B.M., Sprenger A., Unema P. (1997) Towards gaze-mediated interaction: Collecting solutions of the "Midas touch problem". Human-Computer Interaction INTERACT'97. Springer, pp. 509-516.
Yong X., Fatourechi M., Ward R.K., Birch G. (2011) The design of a point-and-click system by integrating a self-paced brain-computer interface with an eye-tracker. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 1(4), pp. 590-602.
Zander T.O., Gaertner M., Kothe C., Vilimek R. (2010) Combining eye gaze input with a brain-computer interface for touchless human-computer interaction. International Journal of Human-Computer Interaction, 27(1), pp. 38-51.
Zander T.O., Kothe C. (2011) Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. Journal of Neural Engineering, 8(2), 025005.