Speech


Application of new information technologies in the legal profession

#artificialintelligence

The magnitude of how technology has changed the landscape of the legal profession is quite astounding, considering how insistent legal professionals once were on preserving the status quo. The digital revolution made practicing law significantly easier by giving lawyers tools that streamlined previously outdated parts of their workflow, from researching case files to managing client relations. As law firms work remotely and switch to a client-centric approach, now is the right time to consider how the latest IT developments will further shape the legal industry. With clients demanding that law firms be faster, more cost-effective, and more flexible, legal professionals have embraced automation and law firm software as answers to growing client expectations. Automation in particular has helped law firms save time and work more productively.


Using Google's Speech-to-Text API with Python

#artificialintelligence

This post provides the steps and Python syntax for using the Google Cloud Platform speech transcription service. Speech transcription refers to the conversion of speech audio to text. It can be applied to many use cases, such as voice assistants, dictation, customer service call center documentation, or the creation of meeting notes in an office setting. It is not difficult to see the value this can bring to individuals and businesses. AWS has long been a leader in this space; Google, IBM, and Microsoft have of course developed their own services as well.
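
For a sense of the syntax involved, a minimal sketch of transcribing a short local audio file with the google-cloud-speech client library might look like the following; the file name, encoding, and language code are illustrative assumptions:

    # Minimal sketch: transcribe a local WAV file with Google Cloud
    # Speech-to-Text. File name and audio settings are assumptions.
    from google.cloud import speech

    client = speech.SpeechClient()  # reads credentials from the environment

    with open("meeting_notes.wav", "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)

Synchronous recognize calls like this suit short clips; longer recordings go through the service's asynchronous long-running operation instead.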


What does NLP mean for augmented analytics?

#artificialintelligence

According to Gartner, NLP turns "text or audio speech into encoded, structured information, based on an appropriate ontology." Augmented analytics uses two subtypes of NLP, which are natural language understanding (NLU) and natural language generation (NLG). NLU enables the platform to understand a user's query while NLG "narrates" data visuals. NLU applies to text and audio. However, typed queries are more common than voice queries today for several reasons, most notably because the former is an easier problem to solve.
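
To make the split concrete, here is a toy Python sketch of the two subtypes; the parse rules and narration template are invented for illustration and stand in for the trained models and ontologies real platforms use:

    # Toy illustration of the NLU/NLG split in augmented analytics.
    # The intent schema and wording below are invented for this example.

    def understand(query: str) -> dict:
        """NLU: map a typed query onto a structured intent."""
        intent = {"metric": None, "dimension": None}
        if "sales" in query.lower():
            intent["metric"] = "sales"
        if "by region" in query.lower():
            intent["dimension"] = "region"
        return intent

    def narrate(intent: dict, data: dict) -> str:
        """NLG: turn structured results back into a sentence."""
        top = max(data, key=data.get)
        return (f"Total {intent['metric']} by {intent['dimension']}: "
                f"{top} leads with {data[top]:,}.")

    intent = understand("Show me sales by region")
    print(narrate(intent, {"East": 120000, "West": 95000}))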


Musicians ask Spotify to publicly abandon controversial speech recognition patent

Engadget

At the start of the year, Spotify secured a patent for a voice recognition system that could detect the "emotional state," age and gender of a person and use that information to make personalized listening recommendations. As you might imagine, the possibility that the company was working on a technology like that made a lot of people uncomfortable, including digital rights non-profit Access Now. At the start of April, the organization sent Spotify a letter calling on it to abandon the tech. After Spotify privately responded to those concerns, Access Now, along with several other groups and a collection of more than 180 musicians, is asking the company to publicly commit to never using, licensing, selling, or monetizing the system it patented. Some of the individuals and bands to sign the letter include Rage Against the Machine guitarist Tom Morello, rapper Talib Kweli and indie group DIIV.


How AWS AI Services Can Help You Improve Your Foreign Language

#artificialintelligence

AWS provides several Artificial Intelligence (AI) services. With them, you can implement useful AI features: image and video analysis, document analysis, text-to-speech and speech-to-text conversion, and so on. These AWS services can be used not only in enterprise applications but also for personal development. By applying them, we can build an application that helps improve our foreign language skills. Let's map AWS AI services to language skills: the mapping doesn't cover every skill, but we can develop several of them this way.
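
As a sketch of the idea, Amazon Translate and Amazon Polly could be paired via boto3 for listening practice; the phrase, language pair, and voice below are assumptions, and credentials come from the usual AWS configuration:

    # Minimal sketch: translate an English phrase to German with
    # Amazon Translate, then speak it aloud with Amazon Polly.
    # Phrase, language pair, and voice are illustrative choices.
    import boto3

    translate = boto3.client("translate")
    polly = boto3.client("polly")

    phrase = "Where is the train station?"
    result = translate.translate_text(
        Text=phrase,
        SourceLanguageCode="en",
        TargetLanguageCode="de",
    )
    german = result["TranslatedText"]
    print(german)

    # Synthesize the translation with a German voice for listening practice.
    speech = polly.synthesize_speech(
        Text=german,
        OutputFormat="mp3",
        VoiceId="Marlene",
    )
    with open("practice.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())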


Build your own Voice Assistant in Python

#artificialintelligence

Even before we begin to code, we need an "intents.json" file; the voice assistant looks up this JSON file and responds accordingly. Let's start coding by importing all the required libraries. After importing all the required modules, we need to create instances of the speaker and the recognizer so that the assistant can capture what we humans say and convert it into text; the remaining code is explained by comments within the program. Now, let's write functions for each of the required tasks. The code snippet below shows how the system reacts to a greeting.
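
A minimal sketch of that greeting flow, assuming the speech_recognition and pyttsx3 packages (the intents.json structure shown is an illustrative assumption, not the article's exact file):

    # Minimal sketch of a voice assistant greeting handler.
    # The intents.json keys here are assumed for illustration.
    import json
    import pyttsx3
    import speech_recognition as sr

    engine = pyttsx3.init()          # the "speaker"
    recognizer = sr.Recognizer()     # the "recognizer"

    with open("intents.json") as f:
        intents = json.load(f)       # e.g. {"greeting": "Hello! How can I help?"}

    def speak(text: str) -> None:
        engine.say(text)
        engine.runAndWait()

    def listen() -> str:
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        try:
            return recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            return ""

    command = listen()
    if any(word in command for word in ("hello", "hi", "hey")):
        speak(intents["greeting"])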


Google Fixes Two Annoying Quirks in Its Voice Assistant

WIRED

I'm not much of a cook, but the few times I've asked Google Assistant on my Nest Mini to start a timer in the kitchen have been hit or miss. All too often, the timer disappears into a void and Google can't tell me how many minutes are left. Other times, it takes multiple attempts to set it properly because Assistant struggles to understand context. Those problems (and a few others) are about to be resolved. Google's latest update to its voice assistant, which begins rolling out today, greatly improves its contextual understanding when you're asking it to perform a task like setting an alarm or a timer.


Podcast: AI finds its voice

MIT Technology Review

Today's voice assistants are still a far cry from the hyper-intelligent thinking machines we've been musing about for decades. And it's because that technology is actually the combination of three different skills: speech recognition, natural language processing and voice generation. Each of these skills already presents huge challenges. In order to master just the natural language processing part? You pretty much have to recreate human-level intelligence. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models train their skillset on thousands or millions of existing examples, they end up replicating patterns within historical data--including the many bad decisions people have made, like marginalizing people of color and women. Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence--machines that can multitask, think, and reason for themselves. In this episode, we explore how machines learn to communicate--and what it means for the humans on the other end of the conversation. This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao and Charlotte Jee.


'Hey Spotify, play Up First:' Two weeks with Car Thing

Engadget

After years of rumors, confirmation and vague descriptions, Spotify has finally made its first piece of hardware available to select users. Even though the company revealed the full details on Car Thing earlier this month, it's only a "limited release" right now. I've spent two weeks with Car Thing in my car (obviously), and can tell you one thing -- this dedicated Spotify player is really more of a controller for the app on your phone. Spotify first tipped its hand on an in-car music player in 2018. It offered a few Reddit users the opportunity to try a compact device that reportedly featured voice control and 4G connectivity.


Can AI write your technical content?

#artificialintelligence

AI, or more correctly machine learning, can be found in many different applications in your home, car, and workplace. Our relationship with machine learning applications started when we began to use voice-operated smartphone assistants like Apple's Siri or Google's voice assistant, but voice recognition is just one example of a machine learning application; others include vision processing and object recognition. Given the wide range of industrial and electronics-sector clients Publitek has, machine learning is a frequently covered topic. Over the past ten or so years, we've seen the use of machine learning (ML) neural networks, once restricted to compute-intensive data centers, emerge to bring intelligent control to the edge of industrial applications. It isn't always apparent, but many of the voice recognition applications we interact with rely on constant cloud connectivity to process our speech.