Doctors work long hours, and a disturbingly large part of that is documenting patient visits -- one study indicates that they spend 6 hours of an 11-hour day making sure their records are up to snuff. But how do you streamline that work without hiring an army of note takers? Google Brain and Stanford think voice recognition is the answer. They recently partnered on a study that used automatic speech recognition (similar to what you'd find in Google Assistant or Google Translate) to transcribe both doctors and patients during a session.
HSBC is launching voice recognition and touch security services in the UK in a big leap towards the introduction of biometric banking. The bank says its phone and mobile banking customers will no longer have to remember a password or memorable places and dates to access accounts. Barclays has already introduced voice recognition software, but it is only available to certain clients. RBS and NatWest have offered fingerprint technology for the last year. The move comes weeks ahead of the launch of Atom Bank, which will allow its customers to log on via a face recognition system.
Twilio is making it easier for developers to build applications that react to what people say during phone calls with a new feature announced Wednesday. The company's Automated Speech Recognition beta will take a caller's speech and turn it into text. Twilio's technology hands the text off to developers so their systems can respond to what people say, rather than requiring customers to navigate menus using phone keypads. It's a move by the company to expand the value of its voice tools for developers by adding a layer of machine intelligence over existing support for placing phone calls and sending texts using code. Automated Speech Recognition uses Google's Cloud Speech API to handle 89 different languages and dialects, including Spanish, French, and Mandarin.
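To make the hand-off concrete, here is a minimal sketch of how a developer might consume speech during a call. Twilio's `<Gather>` TwiML verb with `input="speech"` transcribes the caller and POSTs the text back as a `SpeechResult` form field; this sketch builds the TwiML with the standard library only (a real app would typically use Twilio's Python helper library), and the route name and prompt text are hypothetical.

```python
# Sketch of a Twilio speech-gathering webhook exchange, stdlib only.
# The "/handle-speech" route and the prompt wording are made up for
# illustration; the <Gather input="speech"> verb and SpeechResult
# field are part of Twilio's documented TwiML interface.
from xml.etree.ElementTree import Element, SubElement, tostring

def gather_speech_twiml(action_url: str, prompt: str) -> str:
    """Build TwiML that asks the caller a question and has Twilio
    transcribe the spoken answer and POST it to `action_url`."""
    response = Element("Response")
    gather = SubElement(response, "Gather", {
        "input": "speech",      # transcribe speech instead of keypad digits
        "action": action_url,   # Twilio POSTs the SpeechResult here
        "language": "en-US",
    })
    SubElement(gather, "Say").text = prompt
    return tostring(response, encoding="unicode")

def handle_gather_result(form: dict) -> str:
    """Webhook handler for the follow-up request: Twilio includes the
    recognized text in the `SpeechResult` form field."""
    said = form.get("SpeechResult", "")
    response = Element("Response")
    if "balance" in said.lower():
        SubElement(response, "Say").text = "Your balance is on its way."
    else:
        SubElement(response, "Say").text = "Sorry, I didn't catch that."
    return tostring(response, encoding="unicode")

print(gather_speech_twiml("/handle-speech", "How can I help you today?"))
print(handle_gather_result({"SpeechResult": "Check my balance"}))
```

The point of the design is visible here: the developer's code branches on plain text rather than on DTMF digits, so the menu logic becomes ordinary string handling.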
LG is gearing up for the debut of its V30 phablet a couple of weeks from now, and to drum up attention for the new handset it has revealed some of the phone's features this week. The Galaxy Note 8 rival is said to come with new security features and a new user interface that takes advantage of its FullVision display. On Monday, LG published a news update about the V30 on its online Newsroom, discussing mainly the personalization options of the device. Mentioned in the post are the customization options for the Voice Recognition feature. Apparently, LG is introducing a new technology that would allow users to unlock the handset without pressing a button or touching the device.
There's a good old saying that 'it takes a village to raise a child', and in the world of tech I believe that child is currently voice assistants. Most new technologies are incorporating voice features, and there's a big reason for that. Aside from the fact that it makes interaction with systems easier, voice assistants are not yet mature, and their development relies on analysing vast amounts of voice data. This is why open source projects like the Mozilla Common Voice project exist, where users can donate their voice to research, and it is also why tech giants like Google and Amazon are pushing out products like Alexa and Google Home. So what exactly do tech companies want to do with our voices?