"AI is being fed directly into the bloodstream of society, and in many cases without sufficient checks and balances," says Kate Crawford, a professor and cofounder of New York University's AI Now Institute, the world's first academic research institute dedicated to the social impact of artificial intelligence. Last year, Crawford partnered with data-viz guru Vladan Joler to create "Anatomy of an AI System," a map and research paper demonstrating the real-world consequences of developing and manufacturing the Amazon Echo. The paper highlights the radical differences in income distribution between Amazon executives and the workers who enable its vast infrastructure, as well as the device's devastating environmental impacts. The project has been exhibited at museums around the world, and Crawford has presented it to leaders in France, Germany, Spain, and Argentina.
This week Amazon CEO Jeff Bezos got the tech sector's attention with reports of his "fascination" with the rapidly developing world of autonomous autos. "If you think about the auto industry right now, there's so many things going on with Uber-ization, electrification, the connected car -- so it's a fascinating industry," Bezos said. "It's going to be something very interesting to watch and participate in, and I'm very excited about that whole industry." Amazon has made some sizable investments to accompany that interest -- most notably in electric-vehicle startup Rivian and self-driving startup Aurora. And fascination aside, Amazon is vying with Walmart in a race for the consumer's whole paycheck -- and there is little doubt that auto-automation plays like Rivian and Aurora could put some octane, so to speak, behind that effort.
Within the next decade, healthcare will see emerging technologies including artificial intelligence, cloud computing, predictive analytics and blockchain spurring billions of dollars in value increases, according to a new McKinsey & Company report on this tech-driven "era of exponential growth." For these innovations to impact areas like clinical productivity, care delivery and waste reduction, though, certain value pools will need to be disrupted across the entire industry. Here are four possible disruptive changes that could transform healthcare in the coming years, according to McKinsey.
In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.
Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song. But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." More immediately, AI experts are concerned that automated systems are likely to absorb bias from their human programmers -- and once bias is coded into the algorithms that power AI, it will be nearly impossible to remove.
Wireless carriers around the world are beginning to deploy 5G, the latest and greatest in mobile broadband technology. Like the evolution from 3G to 4G, the jump to 5G will mean faster speeds, lower latency and many other benefits. It'll be a major boost for businesses, gamers, livestreamers and more. It could be a huge leap in other ways, too: with its higher speeds and lower latency, 5G could become the platform for all sorts of new services. Of course, there are also downsides.
Google has unveiled updates for its artificially intelligent voice assistant, along with new privacy tools to give people more control over how they're being tracked on the go or at home. The company also unveiled a new Pixel phone and a smart home display. Separately, Google just made ordering pizza, pad thai and fried chicken from your favorite restaurants even easier: the search giant announced on Thursday that it has updated apps like Google Maps, Google Search and the Google Assistant to make ordering food online more convenient, so you don't have to download as many third-party apps. "When I was pregnant with my son last year, my cravings were completely overpowering," said Google's senior product manager of food ordering, Anantica Singh, in a blog post.
Creating driverless cars capable of humanlike reasoning is a long-standing pursuit of companies like Waymo, GM's Cruise, Uber, and others. Intel's Mobileye proposes a mathematical model -- Responsibility-Sensitive Safety (RSS) -- that it describes as a "common sense" approach to on-the-road decision-making, one that codifies good habits like giving other cars the right of way. For its part, Nvidia is actively developing Safety Force Field, a decision-making policy in its motion-planning stack that monitors unsafe actions by analyzing real-time sensor data. Now, a team of MIT scientists is investigating an approach that leverages GPS-like maps and visual data to enable autonomous cars to learn human steering patterns, and to apply that learned knowledge to complex planned routes in previously unseen environments. Their work -- which will be presented at the International Conference on Robotics and Automation in Long Beach, California next month -- builds on end-to-end navigation systems architected by Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).
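To see what "codifying good habits" looks like in practice, consider RSS's best-known rule: a minimum safe following distance that guarantees the rear car can stop in time even if the front car brakes as hard as possible. The sketch below implements that published longitudinal-distance formula in Python; the parameter values (reaction time, braking rates) are illustrative assumptions, not Mobileye-calibrated numbers.

```python
def rss_safe_longitudinal_distance(
    v_rear,                # rear car's speed (m/s)
    v_front,               # front car's speed (m/s)
    response_time=1.0,     # rho: rear car's reaction time (s), illustrative
    accel_max=3.0,         # worst-case rear-car acceleration during rho (m/s^2)
    brake_min=4.0,         # minimum braking the rear car will apply (m/s^2)
    brake_max=8.0,         # maximum braking the front car might apply (m/s^2)
):
    """Minimum safe following distance under the RSS longitudinal rule.

    Worst case assumed: during its reaction time the rear car keeps
    accelerating, then brakes only at its minimum rate, while the front
    car brakes at its maximum rate. The gap must absorb the difference
    in stopping distances.
    """
    v_after_response = v_rear + response_time * accel_max
    d = (
        v_rear * response_time                       # distance covered while reacting...
        + 0.5 * accel_max * response_time ** 2       # ...while still accelerating
        + v_after_response ** 2 / (2 * brake_min)    # rear car's stopping distance
        - v_front ** 2 / (2 * brake_max)             # front car's stopping distance
    )
    return max(d, 0.0)  # a negative result means any positive gap is safe
```

For example, two cars both traveling at 20 m/s (~45 mph) need a gap of roughly 63 m under these assumptions; a stopped car behind a moving one needs none. A planner that never lets the actual gap fall below this bound can never cause a rear-end collision under the model's worst-case assumptions, which is the "common sense" guarantee the article describes.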