
    016—Brain-Machine Interface

    Saturday, November 09, 2013, 1:00 pm - 3:15 pm

    16.04: A practical, intuitive brain-computer interface for communicating a "yes" or "no" by listening

    Location: 5B

    *J. HILL, E. RICCI, S. HAIDER, L. MCCANE, S. HECKMAN, J. R. WOLPAW, T. VAUGHAN;
    Wadsworth Center, New York State Dept. of Hlth., Albany, NY

    Abstract Body: We previously showed that it is possible to build EEG brain-computer interface systems based on voluntary shifts of covert attention between simultaneous streams of auditory stimuli (Hill et al., 2012, Frontiers in Neuroscience 6:181). We aim to translate this system into an easy-to-use, practical assistive technology through which a user can express a simple "yes" or "no" in response to a cue given manually by a conversation partner.
    Our first goal was to move from abrupt artificial stimuli (short discrete beeps or pulses) to more natural, intuitive stimuli (spoken words "yes" and "no"). This solves two problems: first, many subjects previously found the beeps annoying, intrusive or otherwise unpleasant; second, the abstract nature of the beeps made the system unintuitive to many users. When the stimuli are semantically indicative of the purpose of the corresponding interface selection, user instructions are greatly simplified: to say "yes", listen to the voice repeating the word "yes", and to say "no", focus on the voice that says "no".
    We assessed, in a within-subject design with 14 healthy subjects, whether the new voice stimuli or the beep stimuli of the previous study allowed better performance in an 8-channel EEG-based BCI. Stimuli were presented dichotically, with the "no" stimuli in the left ear alternating with the "yes" stimuli in the right. The stimuli in each single trial lasted 3.5 seconds (each stream consisted of 7 stimuli repeated at 2 Hz), and the BCI system classified the response at the end of each trial. Despite increased between-subject variability in the ERPs, we found no significant penalty (in fact, a non-significant advantage) for the voice stimuli (mean ± s.d. online performance = 76% ± 11%) compared with the beep stimuli (73% ± 11%).
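    To make the trial structure concrete, the following minimal Python sketch generates the stimulus schedule described above: two interleaved 2 Hz streams of seven spoken words each, filling a 3.5-second trial. The half-period (250 ms) offset between the ears, the deviant words ("nope"/"yep", described in the lay summary below), and the deviant probability are illustrative assumptions rather than parameters reported in the abstract.

# Sketch of the dichotic stimulus schedule (assumptions noted above).
import numpy as np

RATE_HZ = 2.0            # stimuli per second within each stream
N_PER_STREAM = 7         # 7 stimuli per stream -> 3.5 s trial
PERIOD = 1.0 / RATE_HZ   # 0.5 s between stimuli within one stream
OFFSET = PERIOD / 2      # assumed left/right interleaving offset (250 ms)

def trial_schedule(rng, p_deviant=0.25):
    """Return (onset_s, ear, word) tuples for one 3.5 s trial."""
    events = []
    for i in range(N_PER_STREAM):
        # Left ear: "no" stream (male voice), with occasional "nope" deviants.
        events.append((i * PERIOD, "left",
                       "nope" if rng.random() < p_deviant else "no"))
        # Right ear: "yes" stream (female voice), with occasional "yep" deviants.
        events.append((i * PERIOD + OFFSET, "right",
                       "yep" if rng.random() < p_deviant else "yes"))
    return sorted(events)

rng = np.random.default_rng(0)
for onset, ear, word in trial_schedule(rng):
    print(f"{onset:4.2f} s  {ear:<5s}  {word}")
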
    We conducted further tests in which people with ALS (ALS-FRS = 0) used the spoken-word version of the paradigm to answer natural-language questions to which we knew (or subsequently found out) the answer. After calibration measurements lasting 10-15 minutes, Subject 1 answered 8 out of 9 questions correctly using the BCI (88.9% ± 30% s.e.). Subject 2 answered 32 out of 40 questions correctly (80% ± 13%), rising to 86.7% (± 22%) with the use of response verification.
    These preliminary results suggest our first two locked-in users could use the system to make yes-no choices at roughly the same level of accuracy as our healthy volunteers. Future developments will aim to raise the absolute level of performance, for example by increasing the length of single trials. We conclude that this is a promising approach that provides an intuitive interface for simple yes/no communication, without relying on vision, for severely paralyzed users.

    Lay Language Summary: Listening to Your Brain Listening
    We have developed a system that allows people to communicate yes or no just by choosing whether to pay attention to the spoken word yes or the spoken word no. This is accomplished by interpreting the brain’s electrical activity.
    Systems that harness brain signals for practical tasks are often called brain-computer interfaces, or BCIs. Research and development of BCIs has been expanding in recent years. They offer the potential to replace, restore, enhance, supplement or improve the brain's natural functions. For example, they can provide a communication system for people who are unable to speak or to use their limbs, a condition called “locked-in syndrome”.
    Our study included two people with locked-in syndrome resulting from amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease). Both users could answer yes-or-no questions using our system, with about the same accuracy as the healthy volunteers in our laboratory study. Both had previously used a BCI system that relies on letters flashing on a screen, but this was the first ever field test of a BCI driven by spoken words. One user, who normally communicates via eyebrow movements, welcomed this novel approach, telling us, “My eyes get tired, but never my ears.”
    To use our system, users wear earphones and listen to a male voice repeating the word no in the left ear and a female voice repeating the word yes in the right. The voices occasionally say nope instead of no, or yep instead of yes. To choose no, users simply concentrate on the voice saying no, counting the nopes to help them focus. To choose yes, they shift their attention to the voice saying yes and count the yeps.
    This setup encapsulates a task that our brains perform every day: sounds reach our ears from multiple sources simultaneously, and we must filter out irrelevant sounds to concentrate on what interests us. The human brain is very good at solving this so-called “cocktail-party problem”. In doing so, it produces tiny electrical signals that correspond to the chosen sounds. We measure these signals using EEG electrodes (flat metal discs placed against the scalp) and then apply a series of modern signal-processing and pattern-recognition algorithms to estimate where the person is focusing their attention. In this way, we translate the covert mental act of listening into an action that can directly affect the external world.
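    As a concrete illustration of this pattern-recognition step, the sketch below (Python, using NumPy and scikit-learn) scores a short EEG epoch after each stimulus with a linear classifier and picks the stream that accumulates more evidence over a trial. The sampling rate, epoch window, classifier choice, and synthetic data are assumptions made for illustration; they stand in for, and do not describe, the actual pipeline used in this study.

# Generic epoch-scoring sketch; not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 256                  # assumed EEG sampling rate (Hz)
N_CH = 8                  # 8 EEG channels, as in the abstract
WIN = int(FS * 0.8)       # assumed 800 ms post-stimulus window

def epoch_features(eeg, onset_samples):
    """Cut a fixed window after each stimulus onset and flatten it.
    eeg: array of shape (n_channels, n_samples)."""
    return np.asarray([eeg[:, s:s + WIN].ravel() for s in onset_samples])

def decide_trial(clf, eeg, yes_onsets, no_onsets):
    """Compare summed classifier evidence for the two streams."""
    yes_score = clf.decision_function(epoch_features(eeg, yes_onsets)).sum()
    no_score = clf.decision_function(epoch_features(eeg, no_onsets)).sum()
    return "yes" if yes_score > no_score else "no"

# Tiny synthetic demo so the sketch runs end to end (random data only).
rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, N_CH * WIN))   # calibration epochs
y_train = rng.integers(0, 2, size=200)             # 1 = attended stimulus
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

eeg = rng.standard_normal((N_CH, FS * 4))          # one 4 s recording
yes_onsets = [int(FS * t) for t in (0.25, 0.75, 1.25)]   # right-ear onsets
no_onsets = [int(FS * t) for t in (0.0, 0.5, 1.0)]       # left-ear onsets
print(decide_trial(clf, eeg, yes_onsets, no_onsets))
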
    Our previous studies demonstrated success with a similar system that employed the harsh, abrupt beeps typically used in earlier EEG research. In the current study, we explored the use of the natural words yes and no as a more pleasant, more intuitive alternative. Our laboratory results indicated that healthy volunteer subjects could use the words at least as well as they could use beeps. This gave us the confidence to adopt the word-based design with locked-in users.
    We are continuing to develop this system into a reliable communication tool that users and their caregivers can operate without expert help. This will enable people with locked-in syndrome to increase their independence and gain control over their lives through effective communication.
    People with locked-in syndrome may be able to communicate using small muscle movements, such as eye movements or facial twitches. Some have even written books this way: famous examples include Jean-Dominique Bauby, author of The Diving Bell and the Butterfly, and the influential physicist Stephen Hawking. However, these muscle movements may be very tiring, may become progressively weaker, and are often difficult for friends and family to recognize. By eliminating the reliance on muscles, our research aims to enable more people with locked-in syndrome to communicate, and for longer.