by Ryan Whitwam, ExtremeTech
Stephen Hawking was perhaps the most famous user of “vocoder” speech synthesis hardware, but he was not alone. People all over the world are unable to speak on their own, but science may be approaching a point where they can turn their inner thoughts into speech without tedious typing or clicking. A team from the Neural Acoustic Processing Lab at Columbia University has devised an AI model that can turn brain scans into intelligible speech.
The research combines several advances in machine learning to interpret patterns of brain activity and determine what someone wants to say, even if they’re physically unable to make a sound. This isn’t a mind-reading machine — the signals come from the auditory cortex, the region where your brain processes speech. So, it can decode real speech, but not so-called “imagined speech” that could hold your deepest, darkest secrets.
The technology is still very much a work in progress — more a proof of concept than something you can hook up to your head. The study used neural signals recorded from the surface of the brain, a process called invasive electrocorticography (ECoG). The researchers, led by Nima Mesgarani, worked with epilepsy patients because they often have to undergo brain surgery that involves this kind of neurological testing.
© 2019 Ziff Davis, LLC