speech recognition


Automatic Speech Recognition – Are All Tests Comparable? - Watson

#artificialintelligence

Key Points: Access to appropriate domain data is the dominant factor in determining speech recognition performance. For accurate comparison of systems, training and testing must be consistent, especially on highly customized data sets. For this reason, benchmarks such as the SWITCHBOARD corpus, on which IBM regularly reports, have been considered the standard controlled data set for automatic speech recognition testing for twenty years and counting. Another example is Mizuho Bank in Japan, which uses the Watson Speech Recognition API to provide relevant information to call center agents in real time so they can better respond to customers.


Microsoft hits new record for AI speech recognition

#artificialintelligence

Microsoft recently reached a new milestone in its ability to recognize conversational speech, achieving a 5.1% word error rate (WER). In the Switchboard task, speech recognition systems transcribe conversations about topics such as politics or sports. Microsoft's speech recognition capabilities are based on neural networks and other artificial intelligence (AI) technologies. More information on Microsoft's speech recognition technology can be found in this technical report.
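WER is the metric behind headline numbers like Microsoft's 5.1%: the number of word substitutions, deletions, and insertions needed to turn the system's hypothesis into the reference transcript, divided by the number of reference words. Here is a minimal sketch in Python; it is illustrative only, as production scoring tools such as NIST's sclite apply text normalization rules not shown here:

```python
# Minimal word error rate (WER) computation via word-level edit distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)  # (S + D + I) / N

# One deleted word out of six reference words -> WER of 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```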


Microsoft's Cortana boosts speech recognition accuracy

Daily Mail

Some problems still to be addressed by the software include achieving human levels of recognition in noisy environments with distant microphones, as well as recognising accented speech or speaking styles and languages for which only limited training data is available. Previous research has shown that humans achieve higher levels of agreement on the precise words spoken as they expend more care and effort, as in the case of professional transcribers.


Deep learning revolutionizes conversational AI

#artificialintelligence

In conversational AI, machine perception includes all speech analysis technologies, such as recognition and profiling, and machine cognition includes all the language-understanding technologies, which are part of Natural Language Processing (NLP). Together with more data, widely available cloud computing, and adoption by big companies like Apple (Siri), Amazon (Alexa), and Google, this led to significant improvements in performance and in the products released to the market. The acoustic and language levels also encapsulate a variety of deep learning techniques, from acoustic state classification using different types of neural-based architectures to neural-based language models at the language level (see Figure 5). With the success of Amazon Echo and Google Home, many companies are releasing smart speakers and home devices that understand speech.
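To make the "neural-based language model" half of that pipeline concrete, here is a minimal sketch assuming PyTorch; the vocabulary size, dimensions, and class name are illustrative assumptions, not taken from the article:

```python
# A toy next-word language model: embed tokens, run an LSTM, and
# project each hidden state to a distribution over the vocabulary.
import torch
import torch.nn as nn

class TinyLSTMLanguageModel(nn.Module):  # illustrative name
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> next-token logits at each step
        x = self.embed(token_ids)
        out, _ = self.lstm(x)
        return self.head(out)

model = TinyLSTMLanguageModel()
tokens = torch.randint(0, 1000, (2, 10))  # two dummy token sequences
logits = model(tokens)
print(logits.shape)  # torch.Size([2, 10, 1000])
```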


Difficult Conversations 1: Voice Recognition Error Handling

#artificialintelligence

Although Automatic Speech Recognition (ASR) has been improving rapidly, it is still very far from perfect.
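One common way voice interfaces handle this imperfection is to branch on the recognizer's confidence score: act on high-confidence results, confirm mid-confidence ones, and re-prompt on low confidence. A minimal sketch of that pattern in Python; the thresholds and function name are illustrative assumptions, not from the article:

```python
# Confidence-based error handling for ASR results (illustrative values).
HIGH_CONFIDENCE = 0.85
LOW_CONFIDENCE = 0.50

def handle_recognition(transcript: str, confidence: float) -> str:
    if confidence >= HIGH_CONFIDENCE:
        return f"Acting on: {transcript}"          # proceed directly
    if confidence >= LOW_CONFIDENCE:
        return f"Did you say '{transcript}'?"       # explicit confirmation
    return "Sorry, I didn't catch that. Could you repeat it?"  # re-prompt

print(handle_recognition("play some jazz", 0.92))
print(handle_recognition("play some jazz", 0.60))
print(handle_recognition("play some jazz", 0.30))
```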


McKinsey argues how the current wave of AI is 'poised to finally break through'

#artificialintelligence

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence: The Next Digital Frontier? (80 pp., PDF, free, no opt-in), published last month. The report cites many examples of internal development, including Amazon's investments in robotics and speech recognition, and Salesforce's in virtual agents and machine learning. The following is a heat map showing the relative level of AI adoption by industry and key area of asset, usage, and labor category. McKinsey found that companies that benefit from senior management support for AI initiatives, have invested in infrastructure to support its scale, and have clear business goals achieve profit margins 3 to 15 percentage points higher.


Monster's headphones get their own voice assistant

Engadget

Monster's latest team-up isn't with another star athlete or fellow accessory-maker: it's with music platform Speak Music Inc., which is lending the company its voice assistant named Melody. Monster says the partnership will add voice control to some of its headphones, making them the "world's first voice-powered premium" cans. World's first or not, Melody adds a new level of hands-free convenience to Monster's devices. You'll have to download the Melody app from Google Play or iTunes and pair it with your device.


2 Alibaba Employees Named to MIT Young Innovators List

#artificialintelligence

The Alibaba employee who fended off one of the world's largest DDoS attacks and another who spearheaded development of a new voice-activated and -controlled artificial intelligence product were named Wednesday to MIT Technology Review's 2017 list of top Innovators Under 35. "Wang's contributions to developing cutting-edge technologies to empower next-generation artificial intelligence applications, and Wu's efforts in designing world-class internet security systems and helping small businesses defend against cyberattacks, make them true pioneers in their fields." Wang leads the artificial intelligence lab's research efforts in computer vision, natural language processing, speech recognition and machine learning. Alibaba Group is a leader in innovation and technology, with research and development in areas including cloud computing, quantum communications, biometric recognition, machine learning, image processing and speech recognition.


A.I. Business Applications (and How It May Impact You)

#artificialintelligence

Technology companies of all sizes and in locations all around the world are developing AI-driven products aimed at reducing operating costs, improving decision-making and enhancing consumer services across a range of client industries. The sum of these drivers -- new programming techniques, more data and faster chips -- has seen AI approach human-level performance in the key areas of image classification and speech recognition in recent years (see EXHIBIT 2). Chipmakers stand to benefit from increased demand for processing power, particularly makers of graphics processing units used for training AI programs. And internet companies with AI at the core of their consumer services (such as digital assistants and new software features) stand to benefit directly from improvements in speech recognition and image classification.


6 Problems Artificial Intelligence faces in speech recognition

#artificialintelligence

All large companies are investing in voice recognition, and the world is slowly yet steadily adjusting to the new technology of artificial intelligence. So why is it taking so long, and why isn't it part of our day-to-day lives yet? Here are the six reasons why. You go to a store to look for a particular colour and brand of a product. You ask an employee if the product you want is available. The employee goes to the warehouse, checks the inventory for the product, and comes back a while later, only to tell you that your product isn't available anymore. Now imagine this: you enter the same store and tell a tiny device the product you want to buy. Within a second, a voice tells you the exact availability of your product and, if it is unavailable, gives you details on the outlets where it is available. The AI device does this by internally scanning all the digital inventory systems. With numerous benefits in terms of cost, logistics and, more importantly, convenience, why hasn't the art of speech recognition and personal assistants been perfected yet? With science making huge strides in sound-wave recognition, we take a look at some of the main problems researchers face when decoding speech to text. Noise: Voice recording machines detect sound waves that are generated through speech.
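To put the noise problem in numerical terms, here is a minimal sketch that mixes white noise into a clean waveform at a chosen signal-to-noise ratio (SNR), assuming NumPy; the sine-wave "speech" signal is a purely illustrative stand-in for real recorded audio:

```python
# Mix Gaussian white noise into a signal at a target SNR (in dB).
import numpy as np

def add_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    signal_power = np.mean(signal ** 2)
    # SNR(dB) = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

sr = 16000                                  # 16 kHz sample rate
t = np.linspace(0, 1, sr, endpoint=False)
clean = np.sin(2 * np.pi * 220 * t)         # stand-in for a speech signal
noisy = add_noise(clean, snr_db=5)          # 5 dB SNR: heavily degraded
print(noisy.shape)
```

The lower the SNR, the more the noise masks the speech content, which is exactly the regime in which recognizers fall furthest behind human listeners.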