
    373—Brain-Machine Interface III

    Monday, November 11, 2013, 8:00 am - 12:00 noon

    373.09: Adaptive decoding of eye movements with a simple recurrent artificial neural network

    Location: Halls B-H

    *S. TORENE1,2,3, S. L. BRINCAT8, A. F. SALAZAR-GÓMEZ1,2,3, N. JIA1,3,4, M. PANKO1,2,3, V. SALIGRAMA5, E. K. MILLER8, F. H. GUENTHER3,6,7;
    2Grad. Program for Neurosci., 3Ctr. for Computat. Neurosci. and Neural Technol., 4Program in Cognitive and Neural Systems, 5Electrical & Computer Engin., 6Biomed. Engin., 7Speech, Language & Hearing Sci., 1Boston Univ., Boston, MA; 8The Picower Inst. for Learning and Memory & Dept. of Brain and Cognitive Sci., MIT, Cambridge, MA

    Abstract Body: As the brain learns a task, neural plasticity alters activity patterns. A primary concern for brain-machine interfaces (BMI) is the development of decoding algorithms that can adapt to such changing activity and reduce, or eliminate, the need for calibration. We trained a four-layer simple recurrent artificial neural network (RANN) with backpropagation to test the feasibility of adaptive online decoding of eye movements in a macaque during a 6-choice delayed saccade task. Recordings were taken from 96 intracortical electrodes in dorsolateral prefrontal cortex (PFC), frontal eye field (FEF), and the supplementary eye field (SEF). Three consecutive days of 80-500 Hz local field potential (LFP) data were used for this offline analysis. Whole-day training was performed separately on the 1st and 2nd days to create models of the RANN, which were then used on the 2nd and 3rd days, respectively, as bases for online learning. Two days (i.e., the 2nd and 3rd days) of adaptive online decoding were thereby simulated by updating the RANN models after each sequential trial of the test data. Initial simulated online performance of the RANN models was above chance levels (~65-80%), and late performance of the RANN models was qualitatively similar to the results obtained from linear discriminant analysis run online in a closed-loop setting (~75-80% correct). Asymptotic performance during simulated online learning was achieved within ~100 trials, significantly fewer than the 600 training trials required for equivalent performance with the linear discriminant model. Our results indicate that a RANN can be used for online adaptive decoding and achieve a performance level comparable to non-adaptive decoders that are trained daily, thereby reducing or eliminating the need for calibration in brain-machine interfaces. Additionally, because the model parameters were kept identical between the two whole-day training sessions and identical between the two simulated online training sessions, this method of online adaptive decoding reduces the need for an onsite expert for BMI calibration.
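    The abstract gives no implementation detail beyond the description above, but the scheme it outlines (a small recurrent network trained with backpropagation on one whole day of trials, whose weights then seed adaptation on the next day) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' code: the layer sizes, learning rate, epoch count, and the assumption that each trial arrives as a sequence of binned 96-channel LFP features are all choices made for the example.

```python
# Minimal sketch (PyTorch), assuming each trial is a sequence of binned,
# 80-500 Hz band-passed LFP features from 96 channels, labeled with one of
# 6 saccade targets. Layer sizes, learning rate, and epoch count are
# illustrative assumptions, not parameters reported in the abstract.
import torch
import torch.nn as nn

N_CHANNELS, N_TARGETS = 96, 6

class SimpleRecurrentDecoder(nn.Module):
    """Elman-style recurrent classifier: input -> recurrent hidden -> hidden -> output."""
    def __init__(self, n_in=N_CHANNELS, n_rec=64, n_hid=32, n_out=N_TARGETS):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_rec, batch_first=True)  # simple (Elman) recurrence
        self.hidden = nn.Linear(n_rec, n_hid)
        self.readout = nn.Linear(n_hid, n_out)

    def forward(self, x):
        # x: (n_trials, time_bins, channels); classify from the final recurrent state.
        _, h_last = self.rnn(x)
        return self.readout(torch.relu(self.hidden(h_last.squeeze(0))))

def pretrain_whole_day(model, day1_trials, day1_targets, epochs=20, lr=1e-3):
    """Offline whole-day training on day 1, used to initialize day-2 adaptation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(day1_trials), day1_targets)
        loss.backward()
        opt.step()
    return model
```

    Training on the whole day as a single batch keeps the sketch short; in practice, mini-batching over trials would be the more typical choice, and the study's actual four-layer architecture and backpropagation settings are not specified in the abstract.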

    Lay Language Summary: Our collaboration has found that brain-computer interfaces could be made easier to use by avoiding the time spent calibrating decoding algorithms. Importantly, we can also enable these decoding algorithms to adapt to the changes in the user’s neural activity that occur as a result of normal, everyday life.
    Using thoughts to control a computer cursor, voice synthesizer, or robotic limbs could one day be a common method of helping paralyzed people and locked-in patients become more independent. However, such brain-computer interfaces traditionally require frequent calibration trials that take time and effort on the part of the user and necessitate the involvement of one or more trained experts.
    Calibration trials can take anywhere from several minutes to almost an hour, and the associated effort is non-trivial because every moment spent unsuccessfully using the device potentially reduces user motivation or increases user weariness. Furthermore, such calibration is generally static and does not adjust to the changes in neural activity that are associated with temporary mental states such as arousal or attention, or more permanent states like learning.
    To adjust to such states, either the calibration trials can be repeated periodically throughout a session, or the brain-computer interface can adapt to the user as the user attempts to control it, without the need for calibration. In the current research, we examined the latter possibility: calibration-free adaptation.
    We analyzed previously recorded neural data from a monkey as it moved its eyes from the center of a computer screen to one of six possible targets for a juice reward. Using neural data from two consecutive days, we pre-trained an artificial neural network decoding algorithm on data from the first day to determine which of the six targets the monkey wanted to look at, and then we used that pre-trained decoder as a starting point for decoding the monkey’s intended eye movements on the second day.
    Using the previous day’s neural data as a starting point for interpreting the following day’s neural data, we were able to show that the decoding algorithm gradually adapted on the second day without the need for calibration. From the very first trial, the pre-trained adaptive decoder successfully determined which of the six targets the monkey intended to look at about 70% of the time. Then, as the trials continued, the performance of the adaptive decoding algorithm quickly increased to about 80%. In this way, we simulated a brain-computer interface adapting itself to ongoing user control, allowing the user to potentially skip calibration trials and simply begin using it.
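    The adaptation schedule described here, in which each new trial is decoded with the current model and scored before the model is updated on that same trial, can be illustrated with a short simulated-online loop. For brevity this sketch uses a linear SGD classifier from scikit-learn as a stand-in for the recurrent network; the day-1/day-2 feature arrays, the per-trial feature vectors, and the accuracy windows are assumptions made for the example.

```python
# Simulated online adaptation: predict first, then learn from each trial in order.
# A linear SGD classifier stands in for the recurrent decoder purely to keep the
# example short; day-2 feature vectors (one per trial) and labels are assumed inputs.
import numpy as np
from sklearn.linear_model import SGDClassifier

def simulate_online_day(model, day2_features, day2_targets):
    """Return per-trial correctness for one simulated 'online' day."""
    correct = []
    for x, y in zip(day2_features, day2_targets):
        x = x.reshape(1, -1)
        pred = model.predict(x)[0]          # decode with the current model
        correct.append(pred == y)           # score before adapting
        model.partial_fit(x, [y])           # then update on this single trial
    return np.array(correct)

# Example usage: pretrain on day 1, then adapt trial by trial through day 2.
# day1_X, day1_y, day2_X, day2_y would come from the binned LFP features.
# model = SGDClassifier(loss="log_loss")
# model.partial_fit(day1_X, day1_y, classes=np.arange(6))  # whole-day pretraining pass
# hits = simulate_online_day(model, day2_X, day2_y)
# print("early accuracy:", hits[:100].mean(), "late accuracy:", hits[-100:].mean())
```

    Scoring the prediction before the update is what makes the loop a fair simulation of online use: the model never sees a trial’s label until after it has had to decode that trial.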
    Altogether, we repeated this two-day sequence with four different consecutive pairs of days and achieved similar results all four times. Promisingly, decoder parameters did not need to be modified from day to day, which means that the decoder adaptation occurred without the need for expert intervention. In other words, the brain-computer interface setup was “set it and forget it”.
    We hope to test our brain-computer interface in a live session with a monkey to see how the adaptive decoding algorithm performs in a real-world scenario. If successful, the approach could move to human trials and might eventually be applied to chronic therapeutic use. If researchers can reduce the time spent calibrating brain-computer interfaces, it will directly benefit the people who use them. With enough advancement in the field, we may see brain-computer interfaces become common therapeutic solutions to a host of human mobility problems.