"[T]he current capabilities of many AI systems closely match some of the specialized needs of disabled people.... Fortunately, there is a growing interest in applying the scientific knowledge and engineering experience developed by AI researchers to the domain of assistive technology and in investigating new methods and techniques that are required within the assistive technology domain."
– Bruce G. Buchanan; from his Foreword to Assistive Technology and Artificial Intelligence: Applications in Robotics, User Interfaces and Natural Language Processing
Microsoft is opening up limited access to Custom Neural Voice, a text-to-speech AI that lets developers create custom synthetic voices. The tech is part of Speech, an Azure AI service. Companies can use it for voice-powered smart assistants and devices, chatbots, online learning, and reading audiobooks or news aloud. They'll have to apply for access and gain Microsoft's approval before they can harness Custom Neural Voice. The tech can deliver more natural-sounding voices than many other text-to-speech services, according to Microsoft.
TL;DR: The Become a Speed Reading Machine course is on sale for £19.14 as of August 5, saving you 87% on the list price. If you're being honest, you've probably always been secretly -- and irrationally -- jealous of speedy readers. Back in school, there were always a few classmates who zoomed through a dense chapter and got to start lunch early. The rest of us were stuck decoding a confusing run-on sentence while our milk got warm. Now those kids are colleagues who answer emails more quickly, read more news, and are arguably more productive throughout the day.
Energy-based models (EBMs) are appealing due to their generality and simplicity in likelihood modeling, but have traditionally been difficult to train. We present techniques to scale MCMC-based EBM training on continuous neural networks, and we show its success on the high-dimensional data domains of ImageNet 32x32, ImageNet 128x128, CIFAR-10, and robotic hand trajectories, achieving better samples than other likelihood models and nearing the performance of contemporary GAN approaches, while covering all modes of the data. We highlight some unique capabilities of implicit generation, such as compositionality and corrupt-image reconstruction and inpainting. Finally, we show that EBMs are useful models across a wide variety of tasks, achieving state-of-the-art out-of-distribution classification, adversarially robust classification, state-of-the-art continual online class learning, and coherent long-term predicted trajectory rollouts. Papers published at the Neural Information Processing Systems Conference.
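The MCMC-based training and implicit generation the abstract refers to rest on Langevin dynamics: samples are drawn by repeatedly stepping down the energy gradient with injected Gaussian noise. A minimal sketch follows, using a toy quadratic energy in place of a neural network; the `energy` and `grad_energy` functions, step sizes, and step counts here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def energy(x):
    # Toy energy: a quadratic bowl, so exp(-E) is a standard Gaussian.
    return 0.5 * np.sum(x ** 2)

def grad_energy(x):
    # Analytic gradient of the toy energy; a real EBM would use autodiff.
    return x

def langevin_sample(x0, steps=200, step_size=0.1, rng=None):
    """Unadjusted Langevin dynamics:
    x <- x - (s/2) * dE/dx + sqrt(s) * noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        x = x - 0.5 * step_size * grad_energy(x) + np.sqrt(step_size) * noise
    return x

# Start far from the mode; the updates pull samples toward low energy
# while the noise keeps them spread over the distribution.
samples = np.stack([langevin_sample(np.full(2, 5.0),
                                    rng=np.random.default_rng(i))
                    for i in range(100)])
```

Because generation is just minimizing energy under noise, energies of different models can be summed before sampling, which is the source of the compositionality the abstract highlights.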
The most advanced wearable assistive technology device for the blind and visually impaired: it reads text, recognizes faces, identifies products, and more. It responds intuitively to simple hand gestures and seamlessly announces faces it identifies in real time. Small, lightweight, and wireless, it mounts magnetically onto virtually any eyeglass frame and does not require an internet connection.
A movie montage for modern artificial intelligence might show a computer playing millions of games of chess or Go against itself to learn how to win. Now, researchers are exploring how the reinforcement learning technique that helped DeepMind's AlphaZero conquer chess and Go could tackle an even more complex task--training a robotic knee to help amputees walk smoothly. This new application of AI based on reinforcement learning--an automated version of classic trial and error--has shown promise in small clinical experiments involving one able-bodied person and one amputee whose leg had been amputated above the knee. Normally, human technicians spend hours working with amputees to manually adjust a robotic limb to each person's style of walking. By comparison, the reinforcement learning technique automatically tuned a robotic knee, enabling the prosthetic wearers to walk smoothly on level ground within 10 minutes.
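The automatic tuning described above can be sketched as a trial-and-error loop: perturb a control parameter, measure the resulting reward, and keep the change if it helps. Everything below is a hypothetical toy--the `gait_smoothness` reward, the single stiffness parameter, and the hill-climbing update stand in for the researchers' actual multi-parameter reinforcement learning method:

```python
import random

def gait_smoothness(stiffness, target=42.0):
    # Hypothetical stand-in for sensor feedback: reward peaks when the
    # knee stiffness matches the wearer's (unknown to the tuner) ideal.
    return -(stiffness - target) ** 2

def tune_knee(initial=10.0, episodes=200, step=2.0, seed=0):
    """Trial-and-error tuning: randomly perturb the parameter and keep
    any change that improves the measured reward."""
    rng = random.Random(seed)
    stiffness = initial
    best_reward = gait_smoothness(stiffness)
    for _ in range(episodes):
        candidate = stiffness + rng.choice([-step, step]) * rng.random()
        reward = gait_smoothness(candidate)
        if reward > best_reward:
            stiffness, best_reward = candidate, reward
    return stiffness

tuned = tune_knee()
```

In a real system the reward would come from live gait sensors rather than a known target, which is precisely why automated trial-and-error is attractive compared with hours of manual adjustment.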
Microsoft has reached a milestone in text-to-speech synthesis with a production system that uses deep neural networks to make the voices of computers nearly indistinguishable from recordings of people. With the human-like natural prosody and clear articulation of words, Neural TTS has significantly reduced listening fatigue when you interact with AI systems. Our team demonstrated our neural-network powered text-to-speech capability at the Microsoft Ignite conference in Orlando, Florida, this week. The capability is currently available in preview through Azure Cognitive Services Speech Services. Neural text-to-speech can be used to make interactions with chatbots and virtual assistants more natural and engaging, convert digital texts such as e-books into audiobooks and enhance in-car navigation systems.
A look at how Yemen's brutal civil war is creating a market for prosthetic limbs. Each patient is missing a vital part of their body – a hand, a leg, an arm. Inside the prosthetics center is new hope for each: prosthetic limbs are being cut, carved, melted and molded. [Photo caption: A young patient recently outfitted with a new leg waits for his training session outside the Ma'rib prosthetics center in Yemen (Fox News/Hollie McKay).] "Sometimes I go to my office to cry over each of these miserable stories," Dr. Haitham Ahmed Ali Ahmed, a Sudanese volunteer with Physicians Across Continents, told Fox News. "It isn't fair, but we do whatever we can to give them another chance."
Andre van Rüschen has no memory of the day he lost all feeling in his legs. After a car accident in Germany, he had a spinal cord injury that left him paralyzed from the waist down. When he woke up from a coma in a hospital in Hamburg, the doctors told him he would never walk again. But now, thirteen years later, van Rüschen is back on his feet, and he is training to compete as a pilot in the Powered Exoskeleton race at the Cybathlon in Zurich this month. In a high-rise office building on Leipziger Platz in Berlin, he slides out of his wheelchair onto a black leather pouf where a ReWalk exoskeleton sits folded.