Vocoders just got a serious upgrade. A new speech synthesiser can translate mouth movements directly into intelligible speech, completely bypassing a person's voice box. Although the synthesiser might not be immediately useful, it's a first step towards building a brain-computer interface that could allow paralysed people to talk by monitoring their thought patterns. To create the speech synthesiser, scientists at INSERM and CNRS in Grenoble, France, used nine sensors to capture the movements of the lips, tongue, jaw and soft palate. A neural network learned to translate the sensor data into vowels and consonants, which a vocoder then renders as audible speech.
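To make the idea concrete, here is a minimal sketch of the kind of mapping such a system learns: one frame of readings from the nine articulator sensors goes into a small neural network that outputs probabilities over speech sounds. The phoneme inventory, network size, and weights below are all hypothetical placeholders, not the Grenoble team's actual model, which was trained on recorded sensor/speech pairs.

```python
import numpy as np

# Hypothetical phoneme inventory for illustration only.
PHONEMES = ["a", "e", "i", "o", "u", "p", "t", "k"]

rng = np.random.default_rng(0)
# Random stand-in weights; a real system learns these from data.
W1 = rng.normal(size=(9, 32))              # 9 sensors -> 32 hidden units
b1 = np.zeros(32)
W2 = rng.normal(size=(32, len(PHONEMES)))  # hidden -> phoneme scores
b2 = np.zeros(len(PHONEMES))

def classify_frame(sensors: np.ndarray) -> dict:
    """Map one 9-dim frame of sensor readings to phoneme probabilities."""
    hidden = np.tanh(sensors @ W1 + b1)
    logits = hidden @ W2 + b2
    p = np.exp(logits - logits.max())      # softmax, numerically stable
    p /= p.sum()
    return dict(zip(PHONEMES, p))

frame = rng.normal(size=9)                 # one simulated sensor frame
probs = classify_frame(frame)
```

In a full system, a sequence of such per-frame predictions would drive the vocoder that actually produces the sound.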
If trying to order dinner or find your hotel abroad fills you with fear due to your abysmal grasp of foreign languages, don't panic. A new in-ear gadget claims to be able to translate speech like the Babel Fish in The Hitchhiker's Guide to the Galaxy, or the Universal Translator in Star Trek. The system, dubbed the Pilot, will cost $249 and go on sale later this year. It uses two earpieces, one worn by each person in the conversation, and otherwise works as a normal set of wireless earphones.
PARIS - People unable to communicate due to injury or brain damage may one day speak again, after scientists on Thursday unveiled a revolutionary implant that decodes words directly from a person's thoughts. Several neurological conditions can destroy a patient's ability to articulate, and many patients currently rely on communication devices that use head or eye movements to spell out words one letter at a time. Researchers at the University of California, San Francisco said they had successfully reconstructed "synthetic" speech using an implant to scan the brain signals of volunteers as they read several hundred sentences aloud. While they stress that the technology is in its early stages, it nonetheless has the potential to translate the thoughts of mute patients into speech in real time. Instead of trying to translate the electrical activity directly into speech, the team behind the study, published in the journal Nature, adopted a three-stage approach.
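The three-stage approach can be sketched as a pipeline: decode brain recordings into articulator movements, turn those movements into acoustic features, then synthesise audio from the features. The sketch below uses simple stand-in linear maps with made-up dimensions purely to show the data flow; the actual study trained recurrent neural networks for each stage and used a proper vocoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in decoders with hypothetical dimensions (not the study's models):
# Stage 1: 256 recording channels -> 33 articulator-movement features.
to_kinematics = rng.normal(size=(256, 33), scale=0.1)
# Stage 2: articulator movements -> 32 acoustic features per frame.
to_acoustics = rng.normal(size=(33, 32), scale=0.1)

def synthesize(acoustic_frames: np.ndarray) -> np.ndarray:
    """Stage 3 stand-in: reduce each frame of acoustic features to one
    waveform sample (a real system would use a trained vocoder)."""
    return np.tanh(acoustic_frames).mean(axis=1)

def brain_to_speech(neural_frames: np.ndarray) -> np.ndarray:
    kinematics = neural_frames @ to_kinematics            # stage 1
    acoustics = np.tanh(kinematics) @ to_acoustics        # stage 2
    return synthesize(acoustics)                          # stage 3

neural = rng.normal(size=(100, 256))  # 100 frames of simulated activity
waveform = brain_to_speech(neural)    # one audio sample per frame
```

The point of the intermediate stages is that articulator movements are a much more constrained, learnable target than raw audio, which is why the team decoded movement first rather than mapping brain activity straight to sound.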
For AlHaj, who plays a stringed instrument with ancient Iraqi roots called the oud, the tears still fall when he recounts the stories behind the music. One is about a teenage boy who was returning home when a car bomb exploded and leveled his house, scattering his homing pigeons skyward with nowhere to land and destroying the alibi that had let him see his girlfriend: he tended his birds while she hung laundry. Another is of a man who returns to Baghdad after living in exile -- and finds a place that no longer feels like home.