"[T]he current capabilities of many AI systems closely match some of the specialized needs of disabled people.... Fortunately, there is a growing interest in applying the scientific knowledge and engineering experience developed by AI researchers to the domain of assistive technology and in investigating new methods and techniques that are required within the assistive technology domain."
– Bruce G. Buchanan; from his Foreword to Assistive Technology and Artificial Intelligence: Applications in Robotics, User Interfaces and Natural Language Processing
Imagine you've contemplated the great scientific theories of the past and arrived at new insights based on your own observations. Imagine you've organized these thoughts into compelling arguments. Imagine that what you have to say will likely advance humanity's understanding of its existence. Now imagine your frustration if you were unable to use your physical voice or hands to speak or write the thoughts coalescing in your mind. Such was the situation for Stephen Hawking, the great explainer of the universe, who died on March 14.
On Saturday, March 3, the Beaver Works facility was alive with hardworking university students collaborating with Boston-area citizens with disabilities. Wood and metal parts, PVC piping, laptops, pizza, and a host of gadgets were spread around the rooms. The atmosphere was equal parts boisterous and quietly contemplative. The participants had gathered for the Assistive Technologies Hackathon (ATHack), a one-day event hosted annually by MIT that brings people living with disabilities -- called co-designers -- together with undergraduate, graduate, and PhD students from multiple disciplines to build prototypes of assistive devices.
Google has pushed further toward its 'AI-first' dream. The tech giant has developed a text-to-speech system with near human-like articulation. The system, called "Tacotron 2," generates computer speech that sounds like a human voice. Google researchers noted in their blog post that the new approach does not use complex linguistic and acoustic features as input. Instead, it produces human-like speech directly from text using neural networks trained only on speech examples and their corresponding text transcripts.
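The key idea from the blog post is that no hand-engineered linguistic or acoustic features are needed: a neural network learns the mapping from text to audio purely from paired examples. The sketch below is a deliberately tiny illustration of that text-to-spectrogram idea in PyTorch. It is not Google's Tacotron 2; the class name, dimensions, and training loop here are hypothetical simplifications, and the real system is a much larger attention-based sequence-to-sequence model paired with a WaveNet-style vocoder that converts predicted spectrograms into waveforms.

```python
# Illustrative sketch only: learn text -> mel-spectrogram from paired data,
# with no hand-crafted linguistic or acoustic features. All names and sizes
# below are hypothetical; this is not the Tacotron 2 architecture.
import torch
import torch.nn as nn

class TinyTextToSpectrogram(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # characters -> vectors
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.project = nn.Linear(hidden_dim, n_mels)      # hidden state -> mel frame

    def forward(self, char_ids):
        x = self.embed(char_ids)   # (batch, time, embed_dim)
        h, _ = self.encoder(x)     # (batch, time, hidden_dim)
        return self.project(h)     # (batch, time, n_mels) predicted spectrogram

# Training uses nothing but paired examples: encoded text in, target spectrogram out.
model = TinyTextToSpectrogram()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

text = torch.randint(0, 128, (1, 20))  # stand-in for an encoded transcript
target_mel = torch.randn(1, 20, 80)    # stand-in for the matching spectrogram

pred = model(text)
loss = loss_fn(pred, target_mel)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

In the real pipeline a separate vocoder network would then synthesize the waveform from the predicted spectrogram; the point of the sketch is simply that the entire mapping is learned from (text, audio) pairs rather than engineered features.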
In this episode, Audrow Nash interviews Elliott Rouse, Assistant Professor at the University of Michigan, about an open-source prosthetic leg -- that is, a robotic knee and ankle. Rouse's goal is to provide an inexpensive and capable platform that researchers can use to work on prostheses without developing their own hardware, which is both time-consuming and expensive.