SAN FRANCISCO (AP) -- Facebook announced several new hires of top academics in the field of artificial intelligence Tuesday, among them a roboticist known for her work at Disney making animated figures move in more human-like ways. The hires raise a big question -- why is Facebook interested in robots, anyway? It's not as though the social media giant is suddenly interested in developing mechanical friends, although it does use robotic arms in some of its data centers. The answer is central to how AI systems work. Today, most successful AI systems must be exposed to millions of data points labeled by humans -- like, say, photos of cats -- before they can learn to recognize patterns that people take for granted.
The safety of self-driving cars has become a source of concern for U.S. transportation regulators this year after one of Uber Technologies Inc's vehicles struck and killed a woman in March in Arizona, prompting the company to shut down its testing efforts for a time. Uber has said it plans to have self-driving cars back on the road by the end of the year.
Artificially intelligent systems map our journeys, unlock our homes, feed us entertainment, and foretell the weather. But could our electronic assistants also start to learn our emotions and use that knowledge to serve us better? In other words, does Alexa know when you get mad? In fact, Amazon teams have been working on analyzing your emotions from your vocal intonations for over a year.
SYDNEY – A robot submarine able to hunt and kill the predatory crown-of-thorns starfish that are devastating the Great Barrier Reef was unveiled by Australian researchers on Friday. Scientists at Queensland University of Technology (QUT) said the robot, named the RangerBot and developed with a grant from Google, would serve as a "robo reef protector" for the vast World Heritage site off Australia's northeastern coast. The RangerBot has an eight-hour battery life and computer vision capabilities allowing it to monitor and map reef areas at scales not previously possible. "RangerBot is the world's first underwater robotic system designed specifically for coral reef environments, using only robot-vision for real-time navigation, obstacle avoidance and complex science missions," said Matthew Dunbabin, the QUT professor who unveiled the submarine. "This multifunction ocean drone can monitor a wide range of issues facing coral reefs including coral bleaching, water quality, pest species, pollution and siltation."
Stories of thrilling, new AI use cases in retail have been popping up in our tech news feeds, whether computer vision, facial recognition, or elimination of human workers. But these sensational accounts miss an important piece of the AI learning curve. A complex infrastructure of end-to-end process automation underpins the flashy technology reflected on the front end. Process gaps fracture customer journeys and obscure supply chain and customer interactions -- the very foundation on which AI depends. If your retail organization suffers from process gaps and manual routing -- and 37% of business and technology decision makers report that theirs does -- you must lay the groundwork before leaping to AI bling.
Google's Assistant is picking up the ability to speak with you in two languages without having to switch accounts. Now Google Home and Android smartphone owners will be able to speak in any two of the following languages: English, Spanish, French, German, Italian and Japanese. The Google Assistant will reply in the language of the query it's answering. The company first mentioned it was working on this feature in February, but there hadn't been an update on it for months. The new feature helps Google Assistant serve bilingual households, which make up an increasing percentage of American families.
Purpose: To identify the different machine learning (ML) techniques that have been applied to automate physician competence assessment and to evaluate how these techniques can be used to assess different competence domains in several medical specialties. Method: In May 2017, MEDLINE, EMBASE, PsycINFO, Web of Science, ACM Digital Library, IEEE Xplore Digital Library, PROSPERO, and the Cochrane Database of Systematic Reviews were searched for articles published from inception to April 30, 2017. Studies were included if they applied at least one ML technique to assess the competence of medical students, residents, fellows, or attending physicians. Information on sample size, participants, study setting and design, medical specialty, ML techniques, competence domains, outcomes, and methodological quality was extracted. The Medical Education Research Study Quality Instrument (MERSQI) was used to evaluate study quality, and a qualitative narrative synthesis of the medical specialties, ML techniques, and competence domains was conducted.
To diagnose depression, clinicians interview patients, asking specific questions -- about, say, past mental illnesses, lifestyle, and mood -- and identify the condition based on the patient's responses. In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict whether a person is depressed based on the person's specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.
Uber didn't necessarily get into self-driving cars to make friends. It launched its program in Pittsburgh by gutting the robotics program at Carnegie Mellon University, after all. But in the three years since--as the company has struggled with wayward leadership, a broken corporate culture, and this spring's fatal crash, which killed an Arizona woman--Uber has learned that the buddy system may not be so bad. As this new technology moves slowly toward commercialization, its creators are grappling with how a robo-car business should work, exactly. It's a murky world in which exploration feels safer, somehow, with a partner by your side.
The genesis of the modern self-driving car across three Darpa challenges in the early 2000s has been well documented, here and elsewhere. Teams of universities, enthusiasts and automakers struggled to get cars to drive themselves through desert and city conditions. In the process, they kick-started the sensor, software and mapping technologies that would power today's self-driving taxis and trucks. A fascinating new book, "Autonomy" by Lawrence Burns, explores both the Darpa races and what happened next -- in particular, how Google's self-driving car effort, now spun out as Waymo, came to dominate the field. Burns is a long-time auto executive, having come up through the ranks at GM and spent time championing that company's own autonomous vehicle effort, the impressive but ill-fated EN-V urban mobility concept.