Speech Synthesis Research Engineer, ObEN, Inc.

#artificialintelligence

STAGE 1: Phone interview.
STAGE 2: In-person interview at Idealab (we cover travel expenses for the day).
STAGE 3: A sample project submission and a candidate proposal submission (to learn more about what an ObEN candidate proposal is, click here).
STAGE 4: Spend a day at our office and participate in all team activities.


Google debuts new Android apps for the hard of hearing

Engadget

Over the past few years, Google has created multiple applications with accessibility in mind. Yet, most of these apps have focused on users who have physical or visual impairments. Examples include last year's Voice Access and Lookout apps. Now, the company wants to provide better support for users who are deaf or hard of hearing as well. As part of this effort, Google is rolling out two new apps -- Live Transcribe and Sound Amplifier -- starting today.
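For a rough sense of what a live-transcription loop involves, here is a minimal sketch built on the open-source SpeechRecognition Python package (with PyAudio for microphone access). It shows only the generic capture-chunk-transcribe pattern; Google has not published Live Transcribe's internals, so the library choice and parameter values here are illustrative assumptions.

import speech_recognition as sr  # pip install SpeechRecognition pyaudio

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Calibrate the energy threshold so the recognizer copes with room noise.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Listening... press Ctrl+C to stop.")
    while True:
        # Capture up to ~5 seconds of speech per chunk.
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Hand the chunk to Google's web speech API for transcription.
            print(recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass  # no intelligible speech in this chunk; keep listening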


Man VS Machine: The Secrets Behind Alibaba Cloud's Speech Recognition Technology

#artificialintelligence

In a previous article, we described how our system performed in the Artificial Intelligence vs. Gold Medal Stenographers competition and told the story behind the Man VS Machine contest at Alibaba Cloud's annual meeting. For the curious technology geeks out there: how did the on-site real-time transcription system work? What lies at the core of a speech recognition system? And why is the Alibaba Cloud iDST speech recognition system so accurate?
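As background for these questions, here is a minimal sketch of the classic ASR front end (framing, windowing, FFT, log-mel filterbank) in plain NumPy. It illustrates the kind of features a neural acoustic model typically consumes before decoding; the function names and parameter values are illustrative, and nothing here reflects Alibaba's unpublished iDST internals.

import numpy as np


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def log_mel_features(signal, sample_rate=16000, frame_ms=25, hop_ms=10,
                     n_mels=40, n_fft=512):
    """Frame a waveform and return an (n_frames, n_mels) log-mel feature matrix."""
    signal = np.asarray(signal, dtype=np.float64)
    frame_len = int(sample_rate * frame_ms / 1000)   # 400 samples at 16 kHz
    hop_len = int(sample_rate * hop_ms / 1000)       # 160 samples at 16 kHz
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop_len)
    frames = np.stack([signal[i * hop_len:i * hop_len + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)                  # taper frame edges

    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    # Log compression roughly matches perceived loudness.
    return np.log(power @ fbank.T + 1e-10)


# Example: one second of a 440 Hz tone yields a (98, 40) feature matrix.
t = np.arange(16000) / 16000.0
print(log_mel_features(np.sin(2 * np.pi * 440.0 * t)).shape)

In a full recognizer, features like these feed an acoustic model whose outputs a decoder combines with a language model to produce the final transcript.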


Google Lookout uses AI to describe surroundings for the visually impaired

ZDNet

[Screenshots: Lookout's modes, including "Explore," "Shopping," and "Quick read," and Lookout detecting a dog in the camera frame.] Google has launched its Lookout app, which uses artificial intelligence (AI) to help visually impaired users by letting them point their phone at objects and receive spoken feedback. Lookout is built on underlying technology similar to Google Lens, Google said in a blog post, and provides feedback through speech, earcons, and other continuous signals. Like Lens, it works by interpreting what the device's rear camera captures and reporting back what it finds. The app assists users in situations such as learning about a new space for the first time, reading text and documents, and completing daily routines such as cooking, cleaning, and shopping, Google said.
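To make the "point the camera, hear a description" loop concrete, here is a toy sketch pairing an off-the-shelf torchvision image classifier with the pyttsx3 text-to-speech engine. Lookout's actual models and pipeline are not public; the file name frame.jpg and the choice of ResNet-50 are assumptions for illustration only.

import pyttsx3                     # pip install pyttsx3 (offline text-to-speech)
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize this model expects


def describe(image_path):
    # Classify one camera frame and phrase the top label as a sentence.
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    label = weights.meta["categories"][logits.argmax().item()]
    return "I see a " + label


engine = pyttsx3.init()
engine.say(describe("frame.jpg"))  # "frame.jpg" stands in for a live camera frame
engine.runAndWait()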


Unstructured Audio Classification for Environment Recognition

AAAI Conferences

My thesis aims to contribute towards building autonomous agents that are able to understand their surrounding environment through the use of both audio and visual information. To capture a more complete description of a scene, fusing audio and visual information can enhance the system's context awareness. The goal of this work is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. Unlike speech and music, which have formantic and harmonic structures, environmental sounds are considered unstructured because they are variably composed from many different sound sources. My research will investigate challenging issues in characterizing environmental sounds, such as the development of appropriate feature extraction algorithms and learning techniques for modeling the dynamics of the environment. A final aspect of my research will consider the decision making of an autonomous agent based on the fusion of visual and audio information.
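As a concrete illustration of the feature-extraction-plus-learning pipeline the abstract describes, here is a minimal sketch that summarizes each environmental audio clip with MFCC statistics (via librosa) and trains an SVM classifier (via scikit-learn). The file names, labels, and the choice of MFCCs and an SVM are illustrative assumptions, not the author's actual method.

import librosa                     # pip install librosa scikit-learn
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def clip_features(path):
    # Summarize a variable-length clip as the mean and std of its MFCCs,
    # a common fixed-length representation for environmental audio.
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical labeled corpus of (wav path, scene label) pairs.
dataset = [("street_001.wav", "street"), ("forest_001.wav", "forest")]  # etc.

X = np.stack([clip_features(path) for path, _ in dataset])
labels = [label for _, label in dataset]

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))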