HCIBRAIN - Methods of human-computer communication to diagnose and to stimulate patients with severe traumatic brain injuries
The main research goal of the project was to develop a concept and methodology for processing data obtained from human-computer interfaces (HCI), enabling the assessment of cognitive functions in patients after brain injuries. The interfaces used included, among others, eye-tracking devices and electroencephalographic (EEG) headsets. The software tools developed as part of the project address several groups of diagnostic and therapeutic tasks. One aim of the study was to assess the level of awareness using objective eye-tracking measures collected during successive sessions in which the subjects performed computer exercises. The results obtained with the method developed for the system were compared with the results of standard neurological examinations of the same patients. Depending on the context, the number of participants ranged from several to fifty people. Six of the first group of ten patients after severe brain trauma showed signs of consciously completing at least one of the five tasks used in the study, even though, according to the initial and subsequent standard neurological examinations, one of these patients had been assessed as being in a vegetative state and another as being in a coma. In contrast to the currently used subjective neurological scales, the developed method maps the patient's state of consciousness onto a real-number scale and thus provides an objectified assessment.
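Purely as an illustration of how an objective, real-valued measure can be derived from raw gaze data, the minimal sketch below scores a session by the fraction of gaze samples that fall inside each task's target area of interest (AOI). The function names, the AOI representation, and the synthetic data are assumptions made for this example and do not reproduce the project's actual metric.

```python
import numpy as np

def aoi_hit_ratio(gaze_xy, aoi):
    """Fraction of gaze samples inside a rectangular AOI.

    gaze_xy: (N, 2) array of screen coordinates; aoi: (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = aoi
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    return inside.mean()

def awareness_score(session_tasks):
    """Average AOI hit ratio over the tasks of one session (a value in [0, 1])."""
    ratios = [aoi_hit_ratio(gaze, aoi) for gaze, aoi in session_tasks]
    return float(np.mean(ratios))

# Synthetic example: gaze concentrated near the centre of the target AOI.
rng = np.random.default_rng(0)
gaze = rng.normal(loc=[400, 300], scale=40, size=(500, 2))
print(awareness_score([(gaze, (320, 220, 480, 380))]))
```

A score of this kind, accumulated over successive sessions, is one simple way to place a patient's task performance on a continuous scale rather than a discrete clinical category.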
Another research goal was to study reading comprehension in patients who had awoken from a coma but remained in a state of reduced consciousness. Eye-tracking technology was also used for this purpose. In the course of developing the necessary software, various tasks were prepared to test the ability to read syllables, words, and sentences with comprehension. The results showed that people awakened from a coma and remaining in a state of reduced consciousness were able to read with comprehension but had difficulty recognizing errors in the written text. These results made it possible to formulate recommendations for the development of eye-tracking-based human-computer interfaces intended for people with awareness deficits.
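A minimal sketch of the kind of analysis such reading tasks permit, assuming each displayed word is represented by an AOI and the recorded gaze has been segmented into fixations; comparing dwell time on erroneous versus correct words is an illustrative assumption, not the project's published procedure.

```python
import numpy as np

def dwell_time(fixations, aoi):
    """Total fixation duration (ms) inside one word's AOI.

    fixations: iterable of (x, y, duration_ms); aoi: (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = aoi
    return sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)

def error_vs_correct_dwell(fixations, word_aois):
    """word_aois: list of (aoi, has_error) pairs for one sentence."""
    err = [dwell_time(fixations, aoi) for aoi, has_error in word_aois if has_error]
    ok = [dwell_time(fixations, aoi) for aoi, has_error in word_aois if not has_error]
    return (float(np.mean(err)) if err else 0.0,
            float(np.mean(ok)) if ok else 0.0)
```

Comparable dwell times on erroneous and correct words would be consistent with the observed difficulty in noticing errors, whereas healthy readers typically fixate longer on anomalous words.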
Part of the research work carried out in the project examined ways of parameterizing and classifying EEG signals so as to maximize the effectiveness of recognizing movement intentions. It built on observations known from the literature that relate electrical activity in different parts of the brain to the performance of real movements (e.g., raising a hand); similar neural activity also appears when the subject merely imagines the movement. Accordingly, one of the research objectives of the project was to propose and test new methods of parameterizing and classifying EEG signals, which ultimately resulted in a classifier with high accuracy in recognizing real and imagined movements from the recorded neural activity. Recognizing movement intention is particularly important in human-computer interfaces for paralyzed people, who thereby gain the ability to control the cursor position on the screen and to confirm the indicated options, similarly to operating an application with a touch screen. To demonstrate the high effectiveness of the proposed approach, several experiments were carried out in which various machine learning methods for classifying imagined movement were applied and compared.
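The sketch below shows a common baseline pipeline for this kind of experiment rather than the project's specific parameterization: band-power features in the mu (8-12 Hz) and beta (13-30 Hz) bands are extracted per channel and fed to a linear discriminant classifier. The sampling rate, channel count, and synthetic trials are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # assumed EEG sampling rate in Hz

def band_power(channel, low, high, fs=FS):
    """Mean power of one EEG channel in a frequency band (Welch's method)."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs)
    return psd[(freqs >= low) & (freqs <= high)].mean()

def extract_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> mu/beta band power per channel."""
    feats = []
    for trial in epochs:
        row = []
        for ch in trial:
            row += [band_power(ch, 8, 12), band_power(ch, 13, 30)]
        feats.append(row)
    return np.array(feats)

# Synthetic placeholder data: 60 trials, 3 channels (e.g. C3, Cz, C4), 2 s epochs.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((60, 3, 2 * FS))
labels = rng.integers(0, 2, size=60)  # 0 = rest, 1 = imagined movement

X = extract_features(epochs)
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, labels, cv=5).mean())
```

Swapping the final estimator (e.g., for an SVM or a random forest) is how different machine learning methods can be compared on the same feature set.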
As a result of the research, it was demonstrated that the subjective assessments of patients' condition used in neurology (the Glasgow Coma Scale, GCS) correlate with the results obtained using the developed human-computer interfaces (eye-gaze tracking, EGT, and EEG), which makes it possible to objectify the assessment of patients with cerebral palsy using modern information technologies. In addition, it was shown that neural networks (autoencoders) can be used to effectively analyze EEG and EGT signals in order to determine the level of consciousness of people after severe brain injuries.
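As an illustration only, the sketch below shows how a small autoencoder can compress EEG/EGT feature vectors into a low-dimensional code and how an index derived from that code could be checked for rank correlation with GCS scores. The architecture, the one-dimensional index, and the synthetic data are assumptions and do not reproduce the project's actual model.

```python
import numpy as np
import torch
from torch import nn
from scipy.stats import spearmanr

# Hypothetical per-session feature vectors (e.g. band powers, gaze statistics).
rng = np.random.default_rng(2)
X = torch.tensor(rng.standard_normal((200, 32)), dtype=torch.float32)

# A small fully connected autoencoder; the bottleneck activations serve as a
# compact representation of the EEG/EGT features.
autoencoder = nn.Sequential(
    nn.Linear(32, 8), nn.ReLU(),   # encoder -> 8-dimensional bottleneck
    nn.Linear(8, 32),              # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):               # reconstruction training loop
    opt.zero_grad()
    loss = loss_fn(autoencoder(X), X)
    loss.backward()
    opt.step()

# Illustrative "consciousness index": the first bottleneck dimension,
# rank-correlated with placeholder GCS scores for the same sessions.
with torch.no_grad():
    code = autoencoder[:2](X)      # encoder part of the Sequential
index = code[:, 0].numpy()
gcs = rng.integers(3, 16, size=200)  # placeholder GCS values (range 3-15)
print(spearmanr(index, gcs))
```

In practice such an index would be validated against clinical assessments of real patients; the Spearman correlation is used here simply because GCS is an ordinal scale.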
The research results are presented in the monograph entitled "Computer Eye of Consciousness", in 12 publications in scientific journals, in 4 chapters in foreign book publications, in 11 conference papers, and in 4 informational and promotional presentations.