  • Information from Lay-Language Summaries is Embargoed Until the Conclusion of the Scientific Presentation

    354—Auditory System: Cortical Processing in Animals and Humans

    Monday, November 11, 2013, 8:00 am - 12:00 noon

    354.22: The song in your head: identifying tonal frequency patterns in auditory cortex

    Location: Halls B-H

    ">*J. M. THOMAS, I. FINE, G. M. BOYNTON;
    Psychology, Univ. of Washington, Seattle, WA

    Abstract Body: Purpose: Recent fMRI (Formisano et al. 2008) and EEG studies (Schaefer et al. 2011) have shown that it is possible to identify an auditory stimulus based on the spatial pattern of activity within human auditory cortex (AC). Here we describe a decoding method using the quantitative population receptive-field model (Dumoulin and Wandell 2008) to estimate what auditory stimulus was presented based on BOLD responses in AC (Kay et al. 2004). Using this model we were able to not only successfully classify novel stimuli, but also to accurately estimate the frequencies presented in the stimulus over time. For simple pure tone stimuli, it is possible to reconstruct a reasonable representation of a person's auditory experience from measurements of brain activity.
    Methods: Data were collected from 4 subjects (age: 26-45 years) on a 3T Philips Achieva using an 8-channel head coil. Pure-tone stimuli were presented at an equal perceived volume (65-85 dB) as a random sequence of 240 unique frequency blocks (88-8000 Hz). Our pRF analysis assumes that the sensitivity of a voxel is a 1D Gaussian function of log frequency and finds the centers and widths that best predict the fMRI time courses evoked by the random stimulus. These pRFs were then used to estimate the frequencies presented over time from the time series generated by novel pure-tone sequences. This was done with a fitting algorithm that identified the series of frequencies for which the (previously estimated) pRFs generated a predicted time series that best matched the measured BOLD responses to the novel stimulus over time.
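The pRF estimation step described above can be sketched in a few lines. The example below is a simplified illustration on synthetic data: it omits HRF convolution and uses a plain grid search over correlation rather than the authors' full fitting procedure, and all variable names and parameter values are hypothetical.

```python
import numpy as np

def prf_response(log_f, center, width):
    """Gaussian sensitivity of a voxel as a function of log frequency."""
    return np.exp(-0.5 * ((log_f - center) / width) ** 2)

# Synthetic random pure-tone sequence: log2 frequency of each block (88-8000 Hz)
rng = np.random.default_rng(0)
log_freqs = rng.uniform(np.log2(88), np.log2(8000), size=240)

# Simulate one voxel's noisy response to the sequence
true_center, true_width = np.log2(1000), 0.8
bold = prf_response(log_freqs, true_center, true_width)
bold += 0.05 * rng.standard_normal(240)

# Grid search for the (center, width) whose predicted time course
# best correlates with the measured one
centers = np.linspace(np.log2(88), np.log2(8000), 60)
widths = np.linspace(0.2, 2.0, 30)
best = max(
    ((c, w) for c in centers for w in widths),
    key=lambda cw: np.corrcoef(prf_response(log_freqs, *cw), bold)[0, 1],
)
print(best)  # recovered (center, width), close to the true values
```

In practice the predicted time course would also be convolved with a hemodynamic response function before comparison with the BOLD signal; the grid search here stands in for that full model fit.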
    Results: pRF centers were tonotopically arranged in mirror-symmetric gradients running perpendicular to Heschl’s gyrus, likely corresponding to the hA1 and hR subdivisions of primary auditory cortex (PAC) (Da Costa et al. 2011, Humphries et al. 2010, Striem-Amit et al. 2011). Performance classifying different tone sequences was near perfect. When identifying individual tones over time, the correlation between the actual and predicted frequency for individual TRs was 0.7632; averaging across all trials (subjects, scans, and TRs), the correlation was 0.9631. Tone identification errors were limited to tones similar in frequency, likely because the relatively broad pRF bandwidths of most voxels evoke similar levels of activation for nearby tones.
    Conclusions: The pRF model can be used in auditory cortex to identify individual pure tones presented over time. A natural extension will be to apply these methods to more natural and behaviorally relevant stimuli, such as music (Schaefer et al. 2011) and speech (Formisano et al. 2008).
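As a rough illustration of the decoding step, the sketch below matches each measured population pattern against the patterns predicted for a grid of candidate frequencies and picks the best-correlated one. This is a simplified stand-in for the sequence-fitting algorithm described in the Methods (it decodes each TR independently and ignores hemodynamic lag); the data are synthetic and all names are hypothetical.

```python
import numpy as np

def prf_response(log_f, centers, widths):
    """Predicted response of each voxel (Gaussian in log frequency) to one tone."""
    return np.exp(-0.5 * ((log_f - centers) / widths) ** 2)

rng = np.random.default_rng(1)
n_vox = 200
centers = rng.uniform(np.log2(88), np.log2(8000), n_vox)  # previously estimated pRFs
widths = rng.uniform(0.5, 1.5, n_vox)

# A novel tone sequence and the noisy population responses it evokes
sequence = rng.uniform(np.log2(88), np.log2(8000), 40)
responses = np.array([prf_response(f, centers, widths) for f in sequence])
responses += 0.1 * rng.standard_normal(responses.shape)

# Decode: for each TR, pick the candidate frequency whose predicted
# pattern correlates best with the measured pattern
candidates = np.linspace(np.log2(88), np.log2(8000), 200)
templates = np.array([prf_response(f, centers, widths) for f in candidates])
decoded = np.array([
    candidates[np.argmax([np.corrcoef(t, r)[0, 1] for t in templates])]
    for r in responses
])
r = np.corrcoef(decoded, sequence)[0, 1]
print(round(r, 3))  # high correlation between decoded and actual frequencies
```

Decoding errors in this toy setup, as in the reported results, cluster around the true frequency: broadly tuned voxels respond similarly to nearby tones, so confusions between distant frequencies are rare.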

    Lay Language Summary: Our research shows that it is possible to guess, on the basis of brain responses in the auditory cortex, not only what song a person was listening to but also which individual notes were played over time.
    A primary goal of neuroscience is to develop models to predict how the brain will respond to a given input. A strong and useful test of these predictive models is to see if they also work the other way around: Can the model be used to successfully guess what stimulus a subject was experiencing from their brain responses? Such models are not just important for understanding the relationship between neural responses and our mental experience; they can also help explain the neural basis of individual differences in experience and performance, and it is hoped that they will also eventually provide the basis of “brain-computer interface” assistive technologies.
    We began by asking four subjects to listen to a set of random tones while we measured their brain responses over time using functional magnetic resonance imaging. We then developed a simple model of auditory cortex, where we estimated which tones excited each small region (voxel) of auditory cortex.
    We then tested this model by examining its ability to work in reverse. We re-measured brain activity in the same four subjects, but this time they listened to simple tunes or rising/descending tone sequences. By comparing the predictions of our simple model to the actual brain responses, we were able to determine which tune or tone sequence each subject was listening to. Furthermore, within a given tone sequence we were able to predict which tones were being heard at any given time.
    The way that we understand the auditory world varies hugely across individuals and their environments. Our hope is that the ability to reconstruct an individual's auditory experience from measurements of their brain activity will provide a powerful method for studying the neurological basis of both enhancements and deficits in auditory experience.
    Examples of enhanced auditory abilities include our exquisite sensitivity to the acoustic variations that are important for our native language, the ability of professional musicians to hear the richness and detail of music, and the extraordinary ability of blind individuals to construct a complex environment from the sounds that surround them.
    An example of difficulty understanding the auditory world is "central auditory processing disorder," a complex problem affecting 5% of school-age children that consists of deficits in recognizing and interpreting sounds as well as difficulty filtering out distracting sounds.
