Goto

Collaborating Authors

Speech Recognition


5 ways voice assistants will get better over the next 5 years - Digital Street

#artificialintelligence

More than 500 million people use the Google voice assistant--found on Android phones and other Google devices like smart speakers--each month. This is just one sign of how quickly voice-powered artificial intelligence (AI) systems are becoming a part of our everyday lives. You can already ask the Google Assistant to help you with many tasks, from getting a quick update on the news, weather or the rand/dollar exchange rate to reading out your texts, composing a text or playing your favourite playlist of the moment. And as this technology improves and matures, you can expect voice assistants to be everywhere--your car, home, personal devices--and for them to be able to do even more amazing things. Over time, you can expect the voice assistants that surround you to be better able to understand and respond to your context, needs and preferences.


Let AI write your Blog -- AutoBlog

#artificialintelligence

This is a full transcript of the AutoBlog video & matching slides. We hope you enjoy this as much as the video. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. Also, if you spot mistakes, please let us know! I want to talk to you today about research videos and research presentations. I know that many of you are producing videos like the one I'm producing right now in order to highlight your research.


Chatting with Your Voice Assistant - Connected World

#artificialintelligence

I have a vision that voice assistants are evolving so quickly that they are going to connect us to more than just hailing a cab or ordering some food. Do you have the same vision? In February I wrote a blog discussing how many of us hate, literally hate, the concept of Big Brother hovering over our lives and listening to our every word. Oh, how times have changed in half a year. Now we have become a society that talks less about who's listening and more about how fast we can order something with our voices using voice assistants.


Why Flat Navigation is Leading the Shift to Voice Interfaces

#artificialintelligence

Flat navigation allows users to access information without going through a series of navigational commands. It is one of the greatest differentiators between designing a UX and a VUX for a product or device. While a user interface that requires physical interaction, such as a keyboard or touch screen, may require several interactions to arrive at the result you're seeking, a voice interface allows you to simply ask a question and get a result. The advantages of voice interfaces include speed, efficiency, accessibility, and convenience. According to Jess Williams, CEO of Opearlo, "If you have detailed, well-structured data, there will be value in making it voice accessible -- because voice assistants will happily sub their selected answer for yours if they think it will provide a better customer experience."


Deep multi-metric learning for text-independent speaker verification

#artificialintelligence

Text-independent speaker verification is an important artificial intelligence problem that has a wide spectrum of applications, such as criminal investigation, payment certification, and interest-based customer services. The purpose of text-independent speaker verification is to determine whether two given uncontrolled utterances originate from the same speaker or not. Extracting speech features for each speaker using deep neural networks is a promising direction to explore, and a straightforward solution is to train the discriminative feature extraction network with a metric learning loss function. However, a single loss function often has certain limitations. Thus, we use deep multi-metric learning to address the problem and introduce three different losses: triplet loss, n-pair loss and angular loss.
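As a rough illustration of how several metric-learning losses can be combined on speaker embeddings, here is a minimal PyTorch sketch. It is not the paper's implementation: the speaker encoder is assumed, the angular term is a simplified variant, and the loss weights are arbitrary.

```python
# Minimal sketch of multi-metric learning on speaker embeddings (not the
# paper's code). Assumes anchor/positive/negative embeddings come from some
# speaker-encoder network; the angular term and weights are simplifications.
import math
import torch
import torch.nn.functional as F

def triplet_loss(a, p, n, margin=0.3):
    # Pull same-speaker pairs together, push different speakers apart.
    d_ap = (a - p).pow(2).sum(dim=1)
    d_an = (a - n).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()

def angular_style_loss(a, p, n, alpha_deg=45.0):
    # Simplified angular-style penalty: constrains the angle at the negative
    # vertex of the (anchor, positive, negative) triangle.
    tan_sq = math.tan(math.radians(alpha_deg)) ** 2
    term = 4 * tan_sq * ((a + p) * n).sum(dim=1) \
         - 2 * (1 + tan_sq) * (a * p).sum(dim=1)
    return F.softplus(term).mean()

def multi_metric_loss(anchor, positive, negative, weights=(1.0, 0.5)):
    # Weighted sum of individual metric losses on L2-normalized embeddings.
    a, p, n = (F.normalize(x, dim=1) for x in (anchor, positive, negative))
    return weights[0] * triplet_loss(a, p, n) + weights[1] * angular_style_loss(a, p, n)

# Usage with random stand-in embeddings (batch of 8, 256-dim):
a, p, n = torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256)
print(multi_metric_loss(a, p, n).item())
```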


Domain-Specific Speech Recognition and Audio Transcription

#artificialintelligence

AI transcription software supports various file formats and transcribes speech to text in any language. Select an industry domain and audio type from predefined categories to improve recognition accuracy for domain-specific words. Our speech transcription engine uses state-of-the-art deep neural network models to convert audio to text with close to human accuracy. Search, modify and verify audio transcriptions using interactive editing tools.
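The domain selection described above is vendor functionality; as a generic illustration of the underlying idea, the sketch below rescores an ASR engine's n-best hypotheses with a bonus for words from an in-domain lexicon. The lexicon, scores and bonus weight are made up for the example.

```python
# Illustrative sketch only (not this product's API): favour domain-specific
# words by rescoring n-best ASR hypotheses against an in-domain lexicon.
def rescore_with_domain_lexicon(hypotheses, domain_terms, bonus=2.0):
    """hypotheses: list of (text, log_score) pairs from any ASR engine."""
    def boosted(text, score):
        hits = sum(1 for w in text.lower().split() if w in domain_terms)
        return score + bonus * hits
    return max(hypotheses, key=lambda h: boosted(*h))

# Hypothetical example: a medical lexicon pushes the correct term to the top.
best = rescore_with_domain_lexicon(
    hypotheses=[("angie plastic procedure scheduled", -12.4),
                ("angioplasty procedure scheduled", -13.1)],
    domain_terms={"angioplasty", "stent", "catheter"},
)
print(best)  # ('angioplasty procedure scheduled', -13.1)
```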


Amazon's new tools for Alexa developers hint at what's next for the voice assistant

#artificialintelligence

Amazon on Wednesday is rolling out a slew of new features and tools to help developers build skills for Alexa, its AI-powered voice assistant. The improvements to the Alexa Skills Kit (ASK) range from sophisticated improvements to Alexa's foundational voice technology to features that hint at the future of Alexa -- such as features that facilitate voice-based experiences outside of the smart home. What is AI? Everything you need to know about Artificial Intelligence When improving the Alexa Skills Kit, "we try to think in terms of the experiences we enable," Nedim Fresko, Amazon's VP of Alexa Devices and Developer Technologies, said to ZDNet, "but also where we're established and what's next -- where we would like to be established and how we could get that started." All told, Amazon is rolling out 31 new features. They fit into a few different themes, according to Fresko.
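For readers unfamiliar with what "building a skill" with the Alexa Skills Kit looks like in code, here is a hello-world sketch using the ask-sdk-core Python package; it is generic boilerplate and does not use any of the 31 new features mentioned above.

```python
# Hello-world Alexa skill handler using the Python ASK SDK (ask-sdk-core).
# Generic boilerplate only; the spoken text and skill logic are placeholders.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Fires when the user opens the skill without a specific intent.
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return handler_input.response_builder.speak(
            "Hello, this skill is up and running."
        ).response

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```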


Mozilla Common Voice - The Largest Dataset

#artificialintelligence

Mozilla Common Voice is the largest dataset of its kind, consisting of thousands of hours of voice clips in fifty different languages. Mozilla is planning to transform the voice technology ecosystem by releasing its own voice assistant. "The Common Voice dataset is set to contribute to the birth of 'Firefox Voice', and with the data gathered we cannot help but think of the huge surprise we're in for soon." Mozilla released the largest public dataset of human voices available for use last year. Mozilla Firefox is a popular, open-source web browser, used by millions today.
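As an aside on how the dataset is commonly accessed, here is a minimal sketch using the Hugging Face datasets library; the exact dataset id/version and column names are assumptions, and recent Common Voice releases require accepting the terms and authenticating on the Hub.

```python
# Minimal sketch: stream a few English Common Voice clips from the Hugging
# Face Hub (dataset id/version and fields are assumptions; access may require
# accepting the dataset terms and logging in with a Hub token).
from datasets import load_dataset

cv = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # assumed release id
    "en",                                     # language configuration
    split="train",
    streaming=True,                           # avoid downloading the full set
)

for example in cv.take(3):
    # Each record pairs an audio clip with its validated transcript.
    print(example["sentence"], "->", example["audio"]["path"])
```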


Salesforce quietly kills off Einstein Voice Assistant and Voice Skills

#artificialintelligence

Salesforce has quietly shuttered Einstein Voice Assistant and Einstein Voice Skills as it shifts focus towards its newly released Salesforce Anywhere app. The Einstein Voice Assistant first launched in beta last year. It was an extension of the company's Einstein Voice platform and allowed users to interact with the Salesforce platform via a mobile app or smart speaker device. Salesforce claimed the AI helper was more advanced than other digital assistants on the market, such as Alexa and Cortana, as it could be taught to recognise a company's specific jargon and acronyms. Einstein Voice Skills, which debuted in beta last November, enabled developers and admins to build custom voice-powered apps for employees to replace any type of manual data entry or manual Salesforce navigation.