Multi-Modal Neural Interfacing for Prosthetic Devices

Reference#: P02647


Conventional neural interface systems rely on a single recording modality to gather neurophysiological signals (e.g., signals from the brain) that are processed to derive a control signal for a prosthetic. A variety of recording modalities have been applied to this task, with varying levels of success depending on the modality selected. Prior applications of single-modality neural interface technology for prosthetic control have yet to demonstrate the reliability, performance, and safety necessary to justify the inherent surgical and medical risks of practical adoption, and they degrade over long-term use.

The invention is a neural interface system that uses multiple simultaneous neurophysiological recording modalities from spatially distributed sites in a subject's brain to more safely and reliably gather signals and derive from them improved control signals for closed-loop, dexterous control of a many-degree-of-freedom prosthetic limb (e.g., an arm and hand). The system uses recordings of single-neuron action potentials (spike recordings), multiple-neuron local field potentials (LFPs), and electrocorticograms (ECoGs) to minimize the deficiencies inherent in each individual recording modality and maximize their combined value to the subject/user, thereby enhancing reliability, performance, and long-term safety over conventional techniques.

From a processing point of view, the system includes multiple parallel decoders corresponding to the multiple recording modalities (i.e., decoders for the spike recordings, LFPs, and ECoGs) and a hybrid synergistic controller (HSC) that applies combining algorithms across the decoder outputs. On the basis of a decoder-output combining strategy, the HSC (i) determines a best derived control signal, indicative of the subject's intended motion, for controlling a device (e.g., a prosthetic) and (ii) assigns a confidence variable to that control signal, which can be used to tailor performance of the prosthetic under control (i.e., to cause the limb to execute the intended motion indicated by the control signal). In summary, these elements combine signals from multiple recording modalities across spatially distributed areas of the brain to provide complementary and redundant information about user intention. The HSC applies optimized algorithms across each of the sensed/decoded signals (one per modality) in parallel in order to gain the highest correlation with a desired movement or system behavior.

At a block-diagram level (see the figures), the system includes the following, located on the subject: (i) multiple wireless three-dimensional spike and LFP sense electrode arrays implanted at various depths in the subject's brain, (ii) an ECoG sense electrode array implanted beneath the skull, (iii) a local power module, (iv) local processing and transceiving modules for collecting, processing, and wirelessly transceiving signals to/from modules external to the subject, and (v) an optional external processing module. The decoders and the HSC module that derive a control signal indicative of the subject's intended motion may be located in the local processing module or in the external controller. In addition, the system may include a stimulating electrode array implanted in the brain, which receives control signals derived by the processing modules to elicit sensory feedback in the subject and thereby aid control of the prosthetic.
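
The parallel-decoder/HSC arrangement described above can be illustrated with a brief sketch. The Python example below is a minimal, hypothetical illustration only: the class names, the placeholder per-modality confidence scores, and the confidence-weighted averaging rule are assumptions made for clarity, not the system's actual decoding or combining algorithms.

# Minimal sketch of parallel per-modality decoders feeding a hybrid
# synergistic controller (HSC). All names and the combining rule are
# illustrative assumptions, not the patented algorithms.
import numpy as np


class ModalityDecoder:
    """Linear decoder mapping one modality's features to an intended-motion
    estimate (e.g., end-effector velocities)."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # shape: (n_dof, n_features)

    def decode(self, features: np.ndarray) -> tuple[np.ndarray, float]:
        estimate = self.weights @ features
        # Placeholder confidence: stand-in for a modality-specific
        # reliability score (e.g., signal quality or decoder likelihood).
        confidence = float(1.0 / (1.0 + np.linalg.norm(estimate)))
        return estimate, confidence


class HybridSynergisticController:
    """Combines per-modality decoder outputs into one control signal plus an
    overall confidence variable (confidence-weighted fusion shown here as one
    possible combining strategy)."""

    def __init__(self, decoders: dict[str, ModalityDecoder]):
        self.decoders = decoders

    def derive_control(self, features_by_modality: dict[str, np.ndarray]):
        estimates, confidences = [], []
        for name, decoder in self.decoders.items():
            est, conf = decoder.decode(features_by_modality[name])
            estimates.append(est)
            confidences.append(conf)
        weights = np.array(confidences)
        weights = weights / weights.sum()
        # Confidence-weighted average across modalities (assumed strategy).
        control_signal = np.average(np.stack(estimates), axis=0, weights=weights)
        overall_confidence = float(weights.max())
        return control_signal, overall_confidence


# Example: three parallel decoders (spikes, LFP, ECoG) driving a 3-DOF command.
rng = np.random.default_rng(0)
decoders = {
    "spikes": ModalityDecoder(rng.standard_normal((3, 96))),
    "lfp": ModalityDecoder(rng.standard_normal((3, 32))),
    "ecog": ModalityDecoder(rng.standard_normal((3, 64))),
}
hsc = HybridSynergisticController(decoders)
features = {
    "spikes": rng.standard_normal(96),
    "lfp": rng.standard_normal(32),
    "ecog": rng.standard_normal(64),
}
command, confidence = hsc.derive_control(features)

In this sketch the confidence variable simply reflects the dominant modality's weight; in practice it could just as well be derived from decoder agreement or signal quality, which is the kind of tailoring role the description above assigns to it.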

CONTACT:
Mr. K. Chao
Phone: (443) 778-7927
ott-techmanager6@jhuapl.edu
