  The STEPP LAB for Sensorimotor Rehabilitation Engineering combines neural, electrical, and mechanical engineering to rehabilitate disordered sensorimotor function. We study normal and disordered speech and voice, and use engineering approaches to investigate sensorimotor disorders, with the goal of rehabilitating disordered movement. Our long-term research goal is to extend therapeutic advances to the speech system, improving current treatment alternatives. We exploit multimodal sensory feedback and virtual reality to develop novel neuroprostheses and engineering solutions for sensorimotor rehabilitation.  
Our work has been and continues to be supported by:

-The Boston University Undergraduate Research Opportunity Program

-A grant on "Videogame-Based Speech Rehabilitation for Children with Hearing Loss" from the Deborah Munroe Noonan Memorial Research Fund

-A New Investigators Research Grant and a New Century Scholars Research Grant from the American Speech-Language-Hearing Association

-The Dudley A. Sargent Research Fund

-The Boston University Clinical and Translational Science Institute, funded by the National Center for Advancing Translational Sciences (NCATS) through grant UL1TR000157

-A Dysphagia Research Grant from The American Laryngological Association and the Nestlé Nutrition Institute

-A Boston University Clinical and Translational Science Institute K-L2 Fellowship through grant KL2TR000158 from the National Center for Advancing Translational Sciences (NCATS)

-Boston University's Peter Paul Professorship

-Grant R03DC012651 (“Automation of Relative Fundamental Frequency Estimation”) from the National Institute on Deafness and Other Communication Disorders

-A sub-contract from Grant R42DC011212 (“Development of an Electromyographically Controlled Electrolarynx Voice Prosthesis”) from the National Institute on Deafness and Other Communication Disorders

-A grant on “Undergraduate Research on the Effects of Modality on Sensory-Motor Learning” from Boston University Grants for Undergraduate Teaching and Scholarship Program

Acoustic Correlates of Normal and Disordered Speech Function:
We have a long-term interest in understanding the relationship between acoustic parameters and the physiology of speech and voice. We measure kinematics (via electromagnetic articulography and optical tracking) and neural signals (surface EMG, hooked-wire EMG, EEG) in a variety of disorders (e.g., Parkinson's disease, spasmodic dysphonia, vocal hyperfunction) in concert with acoustics, developing objective measures of voice and speech that aid clinical diagnosis and repeated assessment.
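As a concrete illustration of one such acoustic measure, relative fundamental frequency compares per-cycle fundamental frequency estimates near a voiceless consonant against a steady-state reference, expressed in semitones. The sketch below assumes per-cycle F0 estimates (in Hz) have already been extracted; the function names are illustrative, not part of any published toolkit:

```python
import math

def hz_to_semitones(f0_hz: float, ref_hz: float) -> float:
    """Convert a fundamental frequency (Hz) to semitones relative to a reference."""
    return 12.0 * math.log2(f0_hz / ref_hz)

def relative_f0(cycle_f0s, ref_f0):
    """Normalize each per-cycle F0 estimate (Hz) to a steady-state
    reference F0 (Hz), returning values in semitones."""
    return [hz_to_semitones(f, ref_f0) for f in cycle_f0s]

# Example: vocal cycles approaching a voiceless consonant,
# referenced to 200 Hz steady-state voicing
print(relative_f0([210.0, 205.0, 200.0], 200.0))
```

The semitone normalization makes the measure comparable across speakers with different habitual pitch, which is what allows a single metric to be used for repeated clinical assessment.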
Videogaming for Rehabilitation:
Striatal dopamine release during video game play may facilitate brain plasticity following perceptual learning. By combining visual distortion with video game environments and multimodal sensory feedback, we may be able to effect faster and more widespread learning during motor rehabilitation. Although videogaming techniques for rehabilitation have been applied to the upper limb with success, many disorders of the voice, speech, and swallowing systems may also be amenable to this technique. Our work in this area develops and tests novel videogame-based interventions for these disorders to improve quality of life in individuals with sensorimotor disorders.
Role of Cortical and Neuromuscular Oscillations in Voice and Speech Production:
Low-frequency neural oscillations in the cortex and muscle have been associated with visuo-motor learning, attention, and precision during upper limb motor tasks, and may present a unique biofeedback modality to facilitate rehabilitation. Our work in this area suggests that these measures may offer a window into motor learning and provide functional biomarkers for disorders of fine motor control (e.g., speech, swallowing, upper limb control). We are working to design rehabilitation protocols that specifically target motor learning in these systems.
Novel Neurotechnology for Speech Assistance and Rehabilitation:
Rehabilitation of communication through novel human-machine interfaces is the “next frontier” in neural technology. A multidisciplinary understanding of neural dynamics during speech production, combined with real-time signal processing techniques, is essential for advancing these technologies. The long-term research agenda of our lab is to bridge speech science with engineering, designing new approaches that bring speech human-machine interfaces to a reliable and intuitive state for populations that are currently severely restricted in their ability to communicate.