How enterprises will benefit from AI and voice data in the post pandemic world

#artificialintelligence

The promise of artificial intelligence finally came good in 2018 and 2019, with wider adoption of AI - from its use in detecting and combating fraud in financial institutions through to sophisticated analytics tools in contact centers. There are a host of use cases showing the value of a future-facing AI strategy, leveraging accurate and collectable data to save time, improve efficiency, and reduce operational costs. In fact, a recent KPMG report states that five of the most AI-mature companies are spending $75m annually on AI talent, a sign of the growing importance business leaders place on AI. The same report also finds that analysis of voice data is a high-priority AI initiative, but some critical foundational elements may not be getting the consideration they deserve. Organizations interested in adopting this new technology - and those that already have - must remember that AI and analytics tools are fueled by data, and the output is directly correlated with the quality of the input.


Latest Achievements of Artificial Intelligence - Tech Research Online

#artificialintelligence

Artificial Intelligence (AI) technology is evolving rapidly and holds great potential for the future. According to the latest reports, the AI market is projected to reach $266.92 billion by 2027, growing at a Compound Annual Growth Rate (CAGR) of 33.2%. Many well-known brands and tech companies are already using AI-powered solutions to improve service, engage customers, enhance customer experience, and increase efficiency and productivity. Text generation, face and speech recognition, automated translation, and drug discovery are a few AI achievements worth your attention. AI-powered solutions are used by dozens of companies across many fields, changing entire industries and reshaping the landscape of health, learning, daily living, and more.


Amazon Alexa: How developers use AI to help Alexa understand what you mean and not what you say

#artificialintelligence

How does Amazon help Alexa understand what people mean and not just what they say? And we couldn't be talking about Alexa, smart home tech, and AI at a better time. During this week's Amazon Devices event, the company made a host of smart home announcements, including a new batch of Echo smart speakers, which will include Amazon's new custom AZ1 Neural Edge processor. In August this year, I had a chance to speak with Evan Welbourne, senior manager of applied science for Alexa Smart Home at Amazon, about everything from how the company is using AI and ML to improve Alexa's understanding of what people say, to Amazon's approach to data privacy, the unique ways people are interacting with Alexa around COVID-19, and where he sees voice and smart tech going in the future. The following is a transcript of our conversation, edited for readability. Bill Detwiler: So before we talk about maybe IoT, we talk about Alexa, and kind of what's happening with the COVID pandemic, as people are working more from home, and as they may have questions that they're asking about Alexa, about the pandemic, let's talk about kind of just your role there at Amazon, and what you're doing with Alexa, especially with AI and ML. Evan Welbourne: So I lead machine learning for Alexa Smart Home. And what that sort of means generally is that we try to find ways to use machine learning to make Smart Home more useful and easier to use for everybody that uses smart home. It's always a challenge because we've got the early adopters who are tech savvy, they've been using smart home for years, and that's kind of one customer segment. But we've also got the people who are brand new to smart home these days, people who have no background in smart home, they're just unboxing their first light, they may not be that tech savvy.


Facebook AI Wav2Vec 2.0: Automatic Speech Recognition From 10 Minute Sample

#artificialintelligence

Speech-to-text applications have never been so plentiful, popular or powerful, with researchers' pursuit of ever-better automatic speech recognition (ASR) system performance bearing fruit thanks to huge advances in machine learning technologies and the increasing availability of large speech datasets. Current speech recognition systems require thousands of hours of transcribed speech to reach acceptable performance. However, a lack of transcribed audio data for the less widely spoken of the world's 7,000 languages and dialects makes it difficult to train robust speech recognition systems in this area. To help ASR development for such low-resource languages and dialects, Facebook AI researchers have open-sourced the new wav2vec 2.0 algorithm for self-supervised language learning. The paper Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations claims to "show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler." A Facebook AI tweet says the new algorithm can enable automatic speech recognition models with just 10 minutes of transcribed speech data.
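As a rough illustration of how a fine-tuned wav2vec 2.0 model can be used for transcription, here is a minimal inference sketch based on the Hugging Face transformers port of the model; the checkpoint name and the 16 kHz mono WAV file are assumptions for the example, not details from the article.

```python
# Minimal ASR inference sketch with a pretrained wav2vec 2.0 checkpoint
# (assumes the Hugging Face `transformers` port and a 16 kHz mono WAV file).
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# "facebook/wav2vec2-base-960h" is one publicly available fine-tuned checkpoint;
# a low-resource workflow would instead fine-tune the pretrained encoder on a
# small amount of transcribed speech.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sample_rate = sf.read("sample.wav")  # hypothetical input file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # frame-level character logits

predicted_ids = torch.argmax(logits, dim=-1)    # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])
```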


Voice AI Design - No Older Adult Left Behind

#artificialintelligence

"It was the best of times, it was the worst of times." This quote from A Tale of Two Cities, by Charles Dickens, describes some of what we're experiencing during the COVID-19 pandemic. Recent months have revealed many of the best aspects of local communities and highlighted small acts of kindness such as checking on neighbors to finding new avenues for social interaction. However, recent events have also exposed a pre-existing bias towards older adults. The perspective that people 50 years' old and older are all just one monolithic group is a mistake for business, for product design, and for successful voice experiences.


Nuance Expands Cloud-Based Dragon Professional Anywhere Solution

#artificialintelligence

Dragon Professional customers in the U.S. can now access an AI-powered documentation solution for law enforcement, social services, financial services, and legal work. Nuance Communications, Inc. announced the availability of its cloud-based Dragon Professional Anywhere speech recognition solution in the United States across multiple markets, including law enforcement, social services, financial services, and legal. This next-generation AI-powered solution enables police officers, social workers, customer service agents, and lawyers to create high-quality documentation more efficiently, securely, and at scale, while reducing cost and boosting productivity. With the solution already available in the U.K., France, the Netherlands, Sweden, and Germany, Dragon Professional Anywhere customers in the U.S. can now use Nuance's speech recognition anywhere, anytime. The continued expansion of Dragon Professional Anywhere is driven by increasing demand for cloud technology to meet the needs of modern business environments. Forrester Research reports that companies are ramping up investments in cloud-based tools to facilitate extended periods of working from home, a trend that is poised to continue post-pandemic and is expected to better prepare organizations for a dynamic post-COVID economic recovery.


Trint Keeps Expanding, Even in Global Pandemic

#artificialintelligence

Speech-to-text platform company Trint is pleased to announce that two new members have joined its leadership team. Odhrán McConnell, the third person to join Trint's C-suite, is the new Chief Technology Officer. Odhrán spent nine years at The Guardian, bringing a detailed understanding of the intersection between global media companies and state-of-the-art technology. At Trint he will focus on growing the team and developing key processes to ensure Trint's continued excellence in engineering. The new VP of Product is Graham Paterson.


Voice assistants grow in importance as businesses reopen

#artificialintelligence

The use of voice assistants, and voice technology more generally, has been on the rise over the past few years thanks, in large part, to consumer adoption of smart speakers and devices. However, the coronavirus pandemic has made clear that, in order to return to a semblance of normalcy, voice technology is now imperative where it was once simply a nice-to-have. For businesses and workplaces, implementing voice technology will no longer be a novelty or a simple means of asserting a commitment to innovation. As businesses reopen, and later in a post-pandemic world, it will come to signal a commitment to employee, customer, and community health. As states reopen and many of us begin to contemplate a return to shared workspaces, we're taking this opportunity to look at the role of voice technology.


Google signs up Verizon for its AI-powered contact center services – TechCrunch

#artificialintelligence

Google today announced that it has signed up Verizon as the newest customer of its Google Cloud Contact Center AI service, which aims to bring natural language recognition to the often inscrutable phone menus that many companies still use today (disclaimer: TechCrunch is part of the Verizon Media Group). For Google, that's a major win, but it's also a chance for the Google Cloud team to highlight some of the work it has done in this area. It's also worth noting that the Contact Center AI product is a good example of Google Cloud's strategy of packaging up many of its disparate technologies into products that solve specific problems. "A big part of our approach is that machine learning has enormous power but it's hard for people," Google Cloud CEO Thomas Kurian told me in an interview ahead of today's announcement. "Instead of telling people, 'well, here's our natural language processing tools, here is speech recognition, here is text-to-speech and speech-to-text -- and why don't you just write a big neural network of your own to process all that?' Very few companies can do that well. We thought that we can take the collection of these things and bring that as a solution to people to solve a business problem. And it's much easier for them when we do that and […] that it's a big part of our strategy to take our expertise in machine intelligence and artificial intelligence and build domain-specific solutions for a number of customers."
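To give a sense of what "packaging up" these technologies into a solution looks like from a developer's side, here is a minimal sketch that sends a caller utterance to a Dialogflow agent (Dialogflow virtual agents are one of the building blocks behind Contact Center AI) using the google-cloud-dialogflow Python client; the project ID, session ID, and sample utterance are placeholders, not details from the article.

```python
# Minimal sketch: send a caller utterance to a Dialogflow agent and read back
# the detected intent. Project and session IDs below are placeholders.
from google.cloud import dialogflow  # pip install google-cloud-dialogflow


def detect_intent(project_id: str, session_id: str, text: str) -> None:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )

    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    print("Agent reply:", result.fulfillment_text)


detect_intent("my-gcp-project", "caller-session-001", "I'd like to check my bill")
```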


Azure AI: Build mission-critical AI apps with new Cognitive Services capabilities

#artificialintelligence

As the world adjusts to new ways of working and staying connected, we remain committed to providing Azure AI solutions to help organizations invent with purpose. Building on our vision to empower all developers to use AI to achieve more, today we're excited to announce expanded capabilities within Azure Cognitive Services. Companies in healthcare, insurance, sustainable farming, and other fields continue to choose Azure AI to build and deploy AI applications to transform their businesses. According to IDC, by 2022, 75 percent of enterprises will deploy AI-based solutions to improve operational efficiencies and deliver enhanced customer experiences. To meet this growing demand, today's product updates expand on existing language, vision, and speech capabilities in Azure Cognitive Services to help developers build mission-critical AI apps that enable richer insights, save time, reduce costs, and improve customer engagement.
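As a concrete example of the kind of speech capability referenced here, the sketch below transcribes a short audio file with the Azure Speech SDK for Python; the subscription key, region, and filename are placeholders rather than details from the announcement.

```python
# Minimal speech-to-text sketch with the Azure Speech SDK
# (pip install azure-cognitiveservices-speech); key, region, and file are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="meeting_snippet.wav")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)
result = recognizer.recognize_once()  # recognizes a single utterance

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcript:", result.text)
else:
    print("Recognition did not succeed:", result.reason)
```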