There's an old saying that 'it takes a village to raise a child', and in the world of tech I believe that child is currently the voice assistant. Most new technologies are incorporating voice features, and there's a big reason for that: beyond making interaction with systems easier, voice assistants are still immature, and their development relies on analysing vast amounts of voice data. This is why open-source projects like Mozilla's Common Voice exist, where users can donate their voice to research, and it is also why tech giants like Google and Amazon are pushing out products like Alexa and Google Home. So what exactly do tech companies want to do with our voices?
Amazon is reportedly working on a new feature for its Alexa voice assistant that would allow for individual voice recognition, according to a report from Time. In other words, your Echo would theoretically be able to tell voices apart and figure out who is actually talking to it. According to Time, the feature is internally known as "Voice ID" and has been in development since summer 2015. The report claims that Voice ID would allow certain commands to be locked to a specific voice -- for example, only allowing the account holder to purchase things off Amazon (something that's certainly been an issue in the past). Alexa actually already supports multiple user profiles and PIN verification for purchases, but automating the process through voice recognition would certainly make it easier to take advantage of those features.
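At its core, locking a command to a specific voice is a speaker-verification problem: compare a voiceprint extracted from the incoming utterance against the enrolled account holder's voiceprint, and only allow the sensitive action if they match closely enough. Amazon has not published how Voice ID works, so the following is only an illustrative sketch, assuming voiceprints are represented as fixed-length embedding vectors (the names `is_authorized`, `owner`, and the 0.8 threshold are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_authorized(utterance_embedding, enrolled_embedding, threshold=0.8):
    """Gate a sensitive command (e.g. a purchase) on a speaker match.

    threshold is a hypothetical tuning parameter: higher means fewer
    false accepts but more false rejects.
    """
    return cosine_similarity(utterance_embedding, enrolled_embedding) >= threshold

# Toy 3-dimensional voiceprints (real systems use hundreds of dimensions).
owner = [0.9, 0.1, 0.4]          # enrolled account holder
same_speaker = [0.85, 0.15, 0.38]  # owner speaking again
child = [0.1, 0.9, 0.2]          # a different household member

print(is_authorized(same_speaker, owner))  # True  -> purchase allowed
print(is_authorized(child, owner))         # False -> purchase blocked
```

A real pipeline would derive the embeddings from audio with a trained model; the point here is only the final comparison step that decides whether the purchase command is honoured.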
With the success of Amazon's Echo and its voice-controlled assistant Alexa, the smart speaker war is heating up as companies battle to become the hub of home automation. Traditionally, these devices needed to be operated with buttons, a remote, or other physical controls, limiting their capabilities. As AI becomes more mainstream and customers demand more from their devices, the pressure to make them more user-friendly grows. Consumers want instantaneous responses without having to hunt down a remote or get up to approach their device - the demand for far-field voice activation is just around the corner. Far-field voice technology powers the Amazon Echo, Google Home, and Apple HomePod, amongst others, but how? Far-field speech recognition is far more complicated than we might have initially thought, and Tao Ma, Principal Architect, AI Platform & Research at JD.com, has shared with us some of his work in the area, including the background, system design, and architecture of these systems.
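One building block that makes far-field pickup possible is beamforming across a microphone array: because a speaker's voice reaches each microphone at a slightly different time, the channels can be time-aligned and summed so the voice reinforces itself while room noise partially cancels. This is a minimal delay-and-sum sketch, assuming integer sample delays that have already been estimated for the target direction (the function name and toy signals are illustrative, not any vendor's actual implementation):

```python
def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer over a microphone array.

    channels: list of equal-length sample lists, one per microphone.
    delays:   per-channel arrival delay in samples for the target
              direction; each channel is shifted back by its delay so
              the target's wavefront lines up across all microphones.
    Returns the averaged (beamformed) signal.
    """
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays):
        for i in range(n):
            j = i + d  # read ahead by this mic's delay to align it
            if 0 <= j < n:
                out[i] += ch[j]
    return [x / len(channels) for x in out]

# Toy example: the same impulse arrives one sample later at mic 1.
mic0 = [0, 0, 1, 0, 0, 0]
mic1 = [0, 0, 0, 1, 0, 0]
beam = delay_and_sum([mic0, mic1], delays=[0, 1])
print(beam)  # impulse reconstructed at index 2: [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Production systems like a six-microphone array add delay estimation, fractional-sample interpolation, and adaptive noise suppression on top, but the alignment-then-average idea above is the starting point.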
We are on the brink of entering a new technological era: a time when we will move beyond text interfaces and instead talk to our devices directly. With voice apps and voice assistants, devices will be able to understand and interact with us through voice technology. That's why voice discovery is going to challenge the established way of using smartphones to do most things online. In the future, most search queries will be made through voice commands, because using your voice to interact with a computer is simply the fastest way to get things done.
This is the first smart speaker for music lovers, the company claims. It looks like a Play:1 on the outside (not a bad thing), but it has a six-microphone array to pick up your spoken commands and lighting to indicate when voice control is active. It'll support Alexa out of the box, but Sonos says it's open to using other voice assistants -- in fact, Google Assistant will be coming in 2018. You'll have access to Alexa's skills from the get-go, of course, but the big deal is that you can control playback entirely through voice if you like. You can tell the One to play music on specific Sonos speakers or throughout your entire home, for instance.