Researchers improve robots' speech recognition by modeling human auditory processing
We rarely give much thought to the sounds around us as we listen, but isolating audio in places like crowded city squares and busy department stores involves an enormous amount of processing. In the lower levels of our auditory pathways, we segregate individual sources from the background, localize them in space, and detect their motion patterns -- all before we work out their context.

Inspired by this neurophysiology, a team of researchers describes, in a preprint paper on Arxiv.org, an approach to robot speech recognition modeled on human auditory processing. As the researchers note, the torso, head, and pinnae (the external parts of the ears) absorb and reflect sound waves as they approach the body, filtering their frequency content depending on the source's location. The filtered waves then travel to the cochlea (the spiral cavity of the inner ear) and the organ of Corti within it, which produces nerve impulses in response to the vibrations.
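To make the pipeline concrete, here is a minimal sketch of that front end: direction-dependent filtering standing in for the torso/head/pinna effect, followed by a gammatone filterbank, a standard approximation of cochlear frequency selectivity. This is illustrative only, not the researchers' actual model; the head-related impulse responses (HRIRs) are synthetic placeholders rather than measured ones, and all names and parameters are assumptions.

```python
# Illustrative sketch (assumed, not from the paper): simulate the ear's
# direction-dependent filtering, then a crude cochlea model.
import numpy as np
from scipy.signal import fftconvolve

FS = 16_000  # sample rate in Hz

def synthetic_hrir(delay_s, gain, fs=FS, length=256):
    """Toy head-related impulse response: a delayed, attenuated spike.
    Real HRIRs are measured per direction; this only mimics the
    interaural time and level differences they create."""
    h = np.zeros(length)
    h[int(delay_s * fs)] = gain
    return h

def gammatone_ir(center_hz, fs=FS, duration_s=0.05, order=4):
    """Impulse response of a gammatone filter, a standard approximation
    of cochlear frequency selectivity."""
    t = np.arange(int(duration_s * fs)) / fs
    erb = 24.7 + 0.108 * center_hz  # equivalent rectangular bandwidth
    b = 1.019 * erb
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * center_hz * t)

# A source slightly to the listener's right: the right ear hears it
# earlier and louder than the left -- the cues the lower auditory
# pathway uses to localize it.
rng = np.random.default_rng(0)
source = rng.standard_normal(FS)  # 1 s of noise as a stand-in source
left = fftconvolve(source, synthetic_hrir(delay_s=6e-4, gain=0.7))
right = fftconvolve(source, synthetic_hrir(delay_s=0.0, gain=1.0))

# Pass each ear signal through a small gammatone filterbank; the per-band
# outputs are a rough analogue of the frequency-resolved activity the
# organ of Corti produces.
centers_hz = [250, 500, 1000, 2000, 4000]
bands_left = [fftconvolve(left, gammatone_ir(f))[: len(left)] for f in centers_hz]
bands_right = [fftconvolve(right, gammatone_ir(f))[: len(right)] for f in centers_hz]
print(f"{len(bands_left)} cochlear channels per ear, {len(left)} samples each")
```

Comparing the per-band timing and level differences between `bands_left` and `bands_right` is, in spirit, what the localization stage of such a model works from.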