speech recognition

IBM's Watson-based voice assistant is coming to cars and smart homes


One of IBM's first partners, Harman, will demonstrate Watson Assistant at the event in a digital cockpit aboard a Maserati GranCabrio, though the companies didn't elaborate on what it can do. In fact, IBM already released a Watson-powered voice assistant for cybersecurity early last year. You'll be able to access Watson Assistant via text or voice, depending on the device and how IBM's partner decides to incorporate it. So you'll definitely be using voice if it's a smart speaker, but you might be able to text commands to a home device. Speaking of commands, the assistant wasn't designed just to follow them -- it was designed to learn from your actions and remember your preferences.
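For readers curious what "access via text" might look like on the developer side, here is a minimal, hypothetical sketch of sending a typed command to Watson Assistant using IBM's ibm-watson Python SDK and its AssistantV2 interface; the API key, service URL, assistant ID, and the car-themed command are placeholders, and nothing here reflects Harman's actual integration.

```python
# Hypothetical sketch: sending a text command to Watson Assistant via the
# ibm-watson Python SDK (AssistantV2). All credentials and IDs are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Conversations happen inside sessions, which is how the assistant keeps
# context (and, over time, learned preferences) between turns.
session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Set the cabin temperature to 21 degrees"},
).get_result()

# Print any plain-text replies the assistant returns.
for reply in response["output"]["generic"]:
    if reply["response_type"] == "text":
        print(reply["text"])
```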

Call Centers Tap Voice-Analysis Software to Monitor Moods


We all know how it feels to be low on energy at the end of a long work day. Some call-center agents at insurer MetLife are watched over by software that knows how it sounds. A program called Cogito presents a cheery notification when the toll of hours discussing maternity or bereavement benefits shows in a worker's voice. "It's represented by a cute little coffee cup," says Emily Baker, who supervises a group fielding calls about disability claims at MetLife. Her team reports that the cartoon cup is a helpful nudge to sit up straight and speak like the engaged helper MetLife wants them to be.

How AI will impact the future of customer experience


Automated intelligent technologies like AI and machine learning are enabling businesses to deliver more relevant, personalized customer experiences through responsive technologies such as chatbots, and through recommendations and services that streamline purchasing. Natural language recognition is also helping businesses better understand their clients' needs and automating processes for customers who call in for service or support. These technologies support and improve vital business functions that put customer experience at the forefront of business concerns, and they enhance the ability to provide customized recommendations through insights into customer behaviors, preferences, and activities. Businesses can become customer-centric by exploiting intelligent apps and utilizing Big Data and Analytics tools to refine products and services and improve customer interactions and personalization. For example, machine learning can perform tasks at a scale people can't, such as quickly searching millions of database records or analyzing huge volumes of text, audio, and images.
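As a concrete illustration of the sort of call-routing automation described above (a generic technique, not any particular vendor's product), here is a minimal sketch that trains a tiny intent classifier with scikit-learn; the example utterances and intent labels are invented for demonstration.

```python
# Hypothetical sketch: classifying customer messages into support intents with
# TF-IDF features and logistic regression. The training data is invented; a
# real deployment would use many thousands of labeled utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_messages = [
    "I want to return my order",
    "my package arrived damaged",
    "how do I reset my password",
    "I can't log in to my account",
    "what is the status of my refund",
    "when will my refund be processed",
]
training_intents = [
    "returns", "returns",
    "account", "account",
    "billing", "billing",
]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(training_messages, training_intents)

# Route a new inbound message to the team that owns the predicted intent.
print(classifier.predict(["I still haven't received my refund"])[0])
```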

Microsoft's Azure gets all emotional with machine learning


Imagine if the things around your house could respond to your voice even when you were shouting over a smoke alarm, keep track of each individual wandering through the house, unlock your front door just by identifying your voice, and even identify your emotions. Those are all capabilities that Microsoft is preparing to add to its Project Oxford, a set of cloud-based machine learning services introduced last May at Microsoft's Build conference. Ars took a deep dive into Project Oxford's first wave of machine learning-based services last year. Those services performed a number of image processing and recognition tasks, offered text-to-speech and speech recognition services, and even converted natural language into intent-based commands for applications. The services are the same technology used in Microsoft's Cortana personal assistant and the Skype Translator service, which translates voice calls in six languages (and text messages in 50 languages) in real time.
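Project Oxford's speech capabilities later became part of Azure Cognitive Services, so as a rough, hypothetical sketch of how a developer would call that kind of speech-to-text service today, the snippet below uses the Azure Speech SDK for Python; the subscription key and region are placeholders, and the details may differ from the original Project Oxford APIs described in the article.

```python
# Hypothetical sketch: one-shot speech recognition with the Azure Speech SDK,
# the service that Project Oxford's speech APIs evolved into. Key and region
# are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SUBSCRIPTION_KEY", region="westus"
)

# Listen on the default microphone and transcribe a single utterance.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
elif result.reason == speechsdk.ResultReason.Canceled:
    print("Recognition canceled:", result.cancellation_details.reason)
```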

Spotify tests voice assistant sparking rumours of a smart speaker

Daily Mail

Spotify may be about to take on the smart speaker market. The music streaming service is testing an in-app assistant, dubbed 'Spotify Voice', that allows users to control their music with their voice. The trial has sparked rumours that the firm is about to release a smart speaker to take on the likes of Apple's HomePod and Amazon's Echo. If the rumours are true, it would allow Spotify to put a microphone, and potentially a camera, in every user's home.

Spotify is testing its own voice assistant to control your music

The Guardian

Spotify is experimenting with a voice-control interface, looking to free itself from reliance on Siri and Alexa and pave the way for the company's forthcoming smart speaker. Users of the service have spotted the new feature hiding in the search bar of Spotify's iOS app. After tapping the magnifying glass to search for a track or playlist, testers see a microphone icon inside a white bubble, according to the Verge. After users tap on the icon, Spotify suggests a number of typical requests for a voice-controlled music system: "Show Calvin Harris", "Play my Discover Weekly" and "Play some upbeat pop", for instance. The move comes as Spotify ramps up its efforts to build a smart speaker to challenge Apple, Amazon and Google in the hardware field, all of which have their own music services.

Voice Assistants: This is How They Will Revolutionize Commerce


Capgemini's Digital Transformation Institute has published a report titled "Conversational Commerce: Why Consumers Are Embracing Voice Assistants in Their Lives", which highlights how consumers use voice assistants (Google Assistant, Siri, and Alexa above all) and what opportunities these systems offer companies to connect with their customers. The report, which surveyed over 5,000 consumers in the United States, United Kingdom, France, and Germany, finds that within the next three years voice assistants will become a predominant mode of interaction between consumers and companies, and that those who use them to make purchases will be willing to spend 500% more through this new form of interaction than they do today. In fact, consumers are developing a strong preference for interacting with companies through voice assistants. The research shows that today about a quarter of respondents (24%) would prefer to use a voice assistant instead of a website; within the next three years this share is expected to grow to 40%, and almost a third (31%) will interact with a voice assistant instead of going to a physical store or a bank branch, compared to 20% today.

AI wave rolls through Microsoft's language translation technologies


A fresh wave of artificial intelligence rolling through Microsoft's language translation technologies is bringing more accurate speech recognition to more of the world's languages and higher quality machine-powered translations to all 60 languages supported by Microsoft's translation technologies. The advances were announced at Microsoft Tech Summit Sydney in Australia on November 16. "We've got a complex machine, and we're innovating on all fronts," said Olivier Fontana, the director of product strategy for Microsoft Translator, a platform for text and speech translation services. As the wave spreads, he added, these machine translation tools are allowing more people to grow businesses, build relationships and experience different cultures. Microsoft's research labs around the world are also building on top of these technologies to help people learn how to speak new languages, including a language learning application for non-native speakers of Chinese that also was announced at this week's tech summit. The new Microsoft Translator advances build on last year's switch to deep neural network-powered machine translations, which offer more fluent, human-sounding translations than the predecessor technology known as statistical machine translation.
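To ground what "machine-powered translations to all 60 languages" means in practice for developers, here is a minimal sketch that calls the Microsoft Translator Text REST API (v3.0) with Python's requests library; the subscription key and resource region are placeholders, and the endpoint and headers are assumptions that should be checked against current documentation.

```python
# Hypothetical sketch: requesting translations from the Microsoft Translator
# Text API v3.0. The subscription key and resource region are placeholders.
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["de", "zh-Hans"]}
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    "Ocp-Apim-Subscription-Region": "YOUR_RESOURCE_REGION",
    "Content-Type": "application/json",
}
body = [{"Text": "We've got a complex machine, and we're innovating on all fronts."}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()

# The API returns one result object per input text, each with a list of
# translations keyed by target language.
for translation in response.json()[0]["translations"]:
    print(translation["to"], ":", translation["text"])
```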

Microsoft starts testing voice dictation in latest Office apps


Microsoft officials touted a new voice dictation capability for Microsoft Office back in January, saying it would be available in February 2018. On March 12, Microsoft began testing the feature with its Office Insider testers, a rollout the @OfficeInsider account announced in a tweet yesterday. To try dictation, customers must be running the latest version of Office for Windows (Office 2016), be an Office 365 subscriber, and be enrolled in the Windows desktop Office Insider program. The feature, which uses speech recognition technology to convert speech to text, is available in Word 2016, PowerPoint 2016, Outlook 2016, and OneNote 2016, and in U.S. English only for now.

What happens when AI experts from Silicon Valley and China meet


Check out the AI Conference in Beijing, April 10-13, 2018; early registration pricing ends March 9. Having traveled to China several times over the last few years, I can attest to the strong interest in applications of AI among technologists, business leaders, and policy makers. China is adopting AI tools and technologies at a rapid pace, and since current AI systems rely on large data sets, startups there can begin applying AI tools much earlier (a startup in China can quickly reach many millions of users). People in the West are curious about the progress of AI research and business models in China. On the flip side, having organized a couple of large conferences in Beijing, I also know that people in China want to hear from AI experts outside their country.