AAAI AI-Alert for Aug 6, 2019
Robotic tails for humans are here
A group of researchers from Keio University in Japan has created a robotic tail for humans. Called Arque, the robotic tail prototype was designed to do what a real tail does: balance out the rest of the body. The researchers, who are part of Keio's graduate school of media design, presented the work last week at the 2019 SIGGRAPH conference in Los Angeles, an event focused on graphics, gaming, and emerging technology. The appendage was inspired by a seahorse's tail, which is strong enough to withstand predators' bites but still flexible enough to grip objects in its environment, like coral. The researchers' prototype was also designed to fit whoever ends up wearing it: the tail can be adjusted to the wearer's body by adding or removing modular "vertebrae."
The little bicycle that could, thanks to artificial intelligence
Machine learning technology has advanced quickly in recent years, but most devices share a common pitfall: the amount of time, energy, and human input required to get the skills of these systems up to snuff. When artificial intelligence learns, it often does so through brute force, cycling through countless rounds of trial and error until it converges on the best set of tactics. People, on the other hand, are much better at thinking on their feet, and require much less brainpower to do so. To bridge this processing gap, many independent groups of computer scientists are trying to build computer chips with an internal architecture that mimics that of the human brain. So-called neuromorphic chips are hybrids, pairing conventional computing hardware with brain-inspired circuitry.
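To make the sample-inefficiency concrete, here is a minimal sketch (our illustration, not the article's) of the brute-force loop described above: an epsilon-greedy agent on a toy three-armed bandit that needs thousands of noisy trials before it settles on the best action. The payoff probabilities and constants are invented for the example.

```python
import random

# Toy trial-and-error learner: a three-armed bandit with unknown payoffs.
# Hypothetical reward probabilities; the agent never sees these directly.
TRUE_PAYOFF = [0.2, 0.5, 0.8]

counts = [0, 0, 0]        # how many times each action has been tried
values = [0.0, 0.0, 0.0]  # running estimate of each action's reward
EPSILON = 0.1             # fraction of purely random exploration

for trial in range(10_000):  # brute force: many rounds of trial and error
    if random.random() < EPSILON:
        action = random.randrange(3)        # explore
    else:
        action = values.index(max(values))  # exploit the current best guess
    reward = 1.0 if random.random() < TRUE_PAYOFF[action] else 0.0
    counts[action] += 1
    # Incremental average keeps each estimate consistent with its samples.
    values[action] += (reward - values[action]) / counts[action]

print("estimated payoffs:", [round(v, 2) for v in values])
print("best action found:", values.index(max(values)))  # usually arm 2
```

The thousands of iterations this loop burns to learn three numbers are exactly the cost, in time and energy, that neuromorphic designs aim to cut.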
How Facebook's brain-machine interface measures up
Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs -- Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D -- described a prototype system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: The researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, and the researchers used a technique called electrocorticography (ECoG) -- the direct recording of electrical potentials associated with activity from the cerebral cortex -- to derive rich insights. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
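The study's real decoder is considerably more sophisticated, but its overall shape can be illustrated with a toy pipeline: per-electrode features go into a classifier that labels each utterance as a question or a response. Everything below -- the synthetic high-gamma band-power features, the data, and the logistic-regression model -- is a hypothetical stand-in, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for real recordings: one row per utterance, one
# column per ECoG electrode, each value a high-gamma band-power average.
n_utterances, n_channels = 200, 64
X = rng.normal(size=(n_utterances, n_channels))
y = rng.integers(0, 2, size=n_utterances)  # 0 = question, 1 = response
X[y == 1, :8] += 0.8  # inject a weak signal so the toy task is learnable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```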
Mphasis launches deep learning algorithms on AWS
Indian software solutions provider Mphasis, which specializes in cloud and cognitive services, has launched a set of new deep learning algorithms. The algorithms, which will be made available on the Amazon Web Services (AWS) Marketplace for Machine Learning, are on-demand solutions targeting practical enterprise use cases such as influence analytics, insurance claims analysis, payment card fraud, and image analytics for supply chain and logistics. The solutions, available for a free trial and download on the AWS Marketplace for Machine Learning website, are intended to help users simplify data experimentation, draw deeper insights from disparate sources across their data estate, and foster new levels of productivity and efficiency for a wide variety of use cases. According to the company statement, the offerings include DeepInsights Card Fraud Analysis, a deep-learning-powered classification solution that extracts valuable insights from highly skewed data, and HyperGraf Auto Claims Prediction, which predicts claim occurrence and claim amounts for policyholders, among others. Dr. Jai Ganesh, Senior Vice President and Head of Mphasis NEXT Labs, said: "Our solutions target practical, high-value use cases that can deliver immediate impact and ROI in critical enterprise business processes and operations. And users can deploy them with the speed and security provided by AWS." Mphasis is an advanced consulting partner in the AWS Partner Network (APN) and leverages AWS with customers across its business.
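Mphasis has not published the internals of these offerings; purely to illustrate the problem the statement names -- classification on highly skewed data -- here is a generic sketch that uses class weighting so the rare fraud class is not swamped by legitimate transactions. The data, features, and model choice are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic, highly skewed card-transaction data: roughly 1% fraud.
n = 20_000
X = rng.normal(size=(n, 10))            # stand-in transaction features
y = (rng.random(n) < 0.01).astype(int)  # 1 = fraud (the rare class)
X[y == 1] += 1.5                        # give fraud rows a detectable shift

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=1)

# class_weight="balanced" re-weights errors so the 1% class still matters.
clf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=1)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), digits=3))
```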
Japan parts makers literally reinventing the wheel to keep up with shift to autonomous cars
The car industry is reinventing the wheel to prepare for autonomous vehicles. Sumitomo Rubber Industries Ltd., whose roots stretch back to when Henry Ford was building his Model T, is developing a "smart tire" that can monitor its own air pressure and temperature, and eventually respond by itself to changes in road conditions. Yet it's more than just tires that are being changed. Koito Manufacturing Co., AGC Inc. and Lear Corp. are putting semiconductors and sensors inside headlights, glass and seats to make them as intelligent as the self-driving cars themselves. Alphabet Inc.'s Waymo LLC, Intel Corp.'s Mobileye NV and Baidu Inc. dominate the core technology for autonomous driving, yet suppliers still count on finding their own space in the business.
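Sumitomo has not detailed how its tire firmware works, but the monitoring the article describes can be sketched in the abstract: sample pressure and temperature, compare against limits, raise warnings. The thresholds and field names below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class TireReading:
    """One sample from a hypothetical in-tire sensor."""
    pressure_kpa: float
    temperature_c: float

# Illustrative limits only; real thresholds depend on the tire and vehicle.
MIN_PRESSURE_KPA = 210.0
MAX_TEMPERATURE_C = 90.0

def check(reading: TireReading) -> list[str]:
    """Return the warnings a smart tire might raise for one sample."""
    warnings = []
    if reading.pressure_kpa < MIN_PRESSURE_KPA:
        warnings.append(f"low pressure: {reading.pressure_kpa:.0f} kPa")
    if reading.temperature_c > MAX_TEMPERATURE_C:
        warnings.append(f"overheating: {reading.temperature_c:.0f} C")
    return warnings

print(check(TireReading(pressure_kpa=195.0, temperature_c=94.0)))
```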
Cockroach robot won't break after being repeatedly stamped on
Ever tried to stamp on a pesky insect only to see it scuttle off gleefully once you raise your shoe? You may soon have the same difficulty eradicating tiny robots. A simple machine seems to have the robustness of a common cockroach. "It looks really like a cockroach moving on the ground," says Liwei Lin at the University of California, Berkeley. He and his colleagues describe their prototype robot, which comprises a curved rectangular body and an angled front leg.
Three pitfalls to avoid in machine learning
[Image: Researchers at TAE Technologies in California and at Google are using machine learning to optimize equipment that produces a high-energy plasma. Credit: Liz Kuball]

Machine learning is driving discovery across the sciences. Its powerful pattern finding and prediction tools are helping researchers in all fields -- from finding new ways to make molecules and spotting subtle signals in assays, to improving medical diagnoses and revealing fundamental particles. Yet, machine-learning tools can also turn up fool's gold -- false positives, blind alleys and mistakes. Many of the algorithms are so complicated that it is impossible to inspect all the parameters or to reason about exactly how the inputs have been manipulated. As these algorithms begin to be applied ever more widely, risks of misinterpretations, erroneous conclusions and wasted scientific effort will spiral.
How Robots Are Changing the Way You See a Doctor
The following feature is excerpted from TIME Artificial Intelligence: The Future of Humankind, available at retailers and at the Time Shop and Amazon. Medicine is both art and science. While any doctor will quickly credit her rigorous medical training in the nuts and bolts of how the human body works, she will just as adamantly school you on how virtually all of the decisions she makes--about how to diagnose disease and how best to treat it--are equally the product of some less tangible measures: her experience from previous patients; her cumulative years of watching and learning from patients, colleagues and the human body. Which is why the idea of introducing machines into medicine seems misguided at the very least, if not downright foolhardy. How can a robot, no matter how well-trained, take the place of a doctor?
Deep Learning Places New Demands on Data Center Architectures
Machine and deep learning applications bring new workflows and challenges to enterprise data center architectures. One of the key challenges revolves around data and the storage solutions needed to store, manage, and deliver it at the scale and speed AI demands. Today's intelligent applications require infrastructure that is very different from traditional analytics workloads, and an organization's data architecture decisions will have a big impact on the success of its AI projects. These are among the key takeaways from a new white paper by the research firm Moor Insights & Strategy. "While discussions of machine learning and deep learning naturally gravitate towards compute, it's clear that these solutions force new ways of thinking about data," the firm notes in its "Enterprise Machine & Deep Learning with Intelligent Storage" paper.
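One concrete place this shows up is the input pipeline: keeping accelerators busy means overlapping storage reads with compute. As a generic sketch (not drawn from the white paper), a PyTorch data loader with parallel, prefetching workers illustrates the kind of sustained read throughput such storage must deliver; the dataset below is a synthetic stand-in for reads from shared storage.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ImageShardDataset(Dataset):
    """Hypothetical dataset; __getitem__ stands in for a storage-bound read."""

    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        # In a real pipeline this line would be a read from NFS or object
        # storage plus a decode step -- the part that stresses storage.
        return torch.randn(3, 224, 224), idx % 1000

if __name__ == "__main__":
    loader = DataLoader(
        ImageShardDataset(),
        batch_size=64,
        num_workers=8,      # parallel reader processes hide storage latency
        prefetch_factor=4,  # each worker keeps four batches in flight
        pin_memory=True,    # faster host-to-GPU copies
    )
    for images, labels in loader:
        pass  # a training step would consume each prefetched batch here
```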
Doctor Alexa Will See You Now: Is Amazon Primed To Come To Your Rescue?
Now that it's upending the way you play music, cook, shop, hear the news and check the weather, the friendly voice emanating from your Amazon Alexa-enabled smart speaker is poised to wriggle its way into all things health care. Amazon has big ambitions for its devices. It thinks Alexa, the virtual assistant inside them, could help doctors diagnose mental illness, autism, concussions and Parkinson's disease. It even hopes Alexa will detect when you're having a heart attack. At present, Alexa can perform a handful of health care-related tasks: "She" can track blood glucose levels, describe symptoms, access post-surgical care instructions, monitor home prescription deliveries and make same-day appointments at the nearest urgent care center.
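Tasks like these reach Alexa as "skills" backed by a web service. Purely as an illustration -- the intent name and reply are invented, and no real health integration is shown -- the Lambda-style handler below answers one request using Alexa's standard JSON response envelope.

```python
def lambda_handler(event, context):
    """Minimal Alexa skill backend: map one intent to spoken text.

    Hypothetical intent and reply; a real health skill would call out
    to a vetted backing service (e.g., a glucose-tracking API).
    """
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "GetCareInstructionsIntent"):
        speech = "After surgery, keep the incision clean and dry for two days."
    else:
        speech = "Sorry, I can't help with that yet."

    # Standard Alexa response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```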