One of the latest was last week's AWS re:Invent, where Amazon announced a new suite of AI-enabled technology designed with businesses in mind. And while these new products, features, and tools come with a host of opportunities for developers, marketers, and others who plan to use them, they also raise a few questions. What are they designed to do? How can they help you? Let's take a look at some of these AI announcements from AWS re:Invent and dig deeper into just what they mean.
In 2017, we launched Amazon Transcribe, an automatic speech recognition (ASR) service that makes it easy to add speech-to-text capabilities to any application. Today, I'm very happy to announce the availability of Amazon Transcribe Call Analytics, a new feature that lets you easily extract valuable insights from customer conversations with a single API call. Each discussion with potential or existing customers is an opportunity to learn about their needs and expectations. For example, it's important for customer service teams to figure out the main reasons why customers are calling them, and measure customer satisfaction during these calls. Likewise, salespeople try to gauge customer interest, and their reaction to a particular sales pitch.
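As a rough sketch of what that single API call could look like from Python, here is a minimal example built around boto3's `start_call_analytics_job` operation. The job name, S3 URI, and IAM role ARN are placeholders, and the channel layout (agent on channel 0, customer on channel 1) is an assumption about how the call was recorded, so treat this as a starting point rather than a drop-in implementation.

```python
# Hypothetical sketch: submit a stereo call recording to Transcribe Call
# Analytics. All names, paths, and ARNs below are placeholders.

def build_call_analytics_request(job_name, media_uri, role_arn):
    """Assemble the parameters for a single start_call_analytics_job call."""
    return {
        "CallAnalyticsJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        # Role that Amazon Transcribe assumes to read the recording from S3.
        "DataAccessRoleArn": role_arn,
        # Assumes one audio channel per speaker: agent first, then customer.
        "ChannelDefinitions": [
            {"ChannelId": 0, "ParticipantRole": "AGENT"},
            {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
        ],
    }

def submit_call_analytics_job(params):
    """Send the request; requires boto3 and valid AWS credentials."""
    import boto3
    client = boto3.client("transcribe")
    return client.start_call_analytics_job(**params)
```

Once the job completes, the transcript and the per-call insights (sentiment, talk time, issues detected) are written to the output location, where you can fetch and post-process them.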
If you use the Amazon Transcribe service for automatic speech recognition (ASR) in your project (especially for English), you have had to decide whether to build a custom language model or use the general model provided by the service. You may even have tried both options in your application. Having experimented with both in my own project, here are my two cents. Suppose you used the general model to transcribe your audio or video files, and you noticed that Amazon Transcribe failed to recognize certain less-frequent English words or phrases spoken in them.
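In that situation, one option is to train a custom language model on text from your own domain so that those rarer terms are better recognized. Below is a hedged sketch using boto3's `create_language_model` operation; the model name, S3 training-data URI, and role ARN are placeholders, and `WideBand` is an assumption that the audio is sampled at 16 kHz or higher.

```python
# Hypothetical sketch: request a custom language model trained on your own
# domain text. All names, ARNs, and S3 paths below are placeholders.

def build_language_model_request(model_name, training_s3_uri, role_arn):
    """Assemble the parameters for a single create_language_model call."""
    return {
        "LanguageCode": "en-US",
        # "WideBand" assumes audio sampled at 16 kHz or higher;
        # use "NarrowBand" for 8 kHz telephony recordings.
        "BaseModelName": "WideBand",
        "ModelName": model_name,
        "InputDataConfig": {
            "S3Uri": training_s3_uri,       # plain-text training corpus
            "DataAccessRoleArn": role_arn,  # role Transcribe assumes to read it
        },
    }

def create_custom_language_model(params):
    """Send the request; requires boto3 and valid AWS credentials."""
    import boto3
    client = boto3.client("transcribe")
    return client.create_language_model(**params)
```

The trade-off is effort versus accuracy: the general model works out of the box, while a custom model needs a representative text corpus from your domain but can noticeably improve recognition of the vocabulary the general model keeps missing.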