It's the ultimate unanswerable question we all face: When will I die? If we knew, would we live differently? So far, science has been no more accurate at predicting life span than a $10 fortune teller. But that's starting to change. The measures being developed may never get good enough to forecast an exact date or time of death, but insurance companies are already finding them useful, as are hospitals and palliative care teams.
Hossein Rahnama knows a CEO of a major financial company who wants to live on after he's dead, and Rahnama thinks he can help him do it. Rahnama is creating a digital avatar for the CEO that they both hope could serve as a virtual "consultant" when the actual CEO is gone. Some future company executive deciding whether to accept an acquisition bid might pull out her cell phone, open a chat window, and pose the question to the late CEO. "I'm not a fan of that company's leadership," the avatar might say, and the screen would go red to indicate disapproval. Creepy? Maybe. But Rahnama believes we'll come to embrace the digital afterlife.
"My stomach is killing me!" "I'm sorry to hear that," says a female voice. "Are you happy to answer a few questions?" And so the consultation begins. "Does it come and go?" There's some deliberation before you get an opinion: "This sounds like dyspepsia to me. Dyspepsia is doctor-speak for indigestion."
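The exchange above follows a classic triage pattern: take a complaint, ask follow-up questions, then map the answers to a layperson-friendly verdict. A minimal rule-based sketch of that flow (the symptom rule and wording here are invented for illustration; the real app's reasoning engine is proprietary and far more sophisticated):

```python
# Minimal rule-based sketch of the triage pattern in the exchange above.
# The single rule and its phrasing are invented for illustration.

RULES = [
    # (keyword in complaint, follow-up question, plain-English verdict)
    ("stomach", "does it come and go?",
     "This sounds like dyspepsia to me. "
     "Dyspepsia is doctor-speak for indigestion."),
]

def consult(complaint, answers):
    """Match the complaint against RULES; answers maps questions to 'yes'/'no'."""
    for keyword, question, verdict in RULES:
        if keyword in complaint.lower() and answers.get(question) == "yes":
            return verdict
    return "I can't match that. Please see a doctor."

print(consult("My stomach is killing me!", {"does it come and go?": "yes"}))
```

Real symptom checkers replace the hand-written rule table with a learned model, but the ask-then-classify loop is the same.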
Wherever artificial intelligence is deployed, you will find it has failed in some amusing way. Take the strange errors made by translation algorithms that confuse having someone for dinner with, well, having someone for dinner. But as AI is used in ever more critical situations, such as driving autonomous cars, making medical diagnoses, or drawing life-or-death conclusions from intelligence information, these failures will no longer be a laughing matter. That's why DARPA, the research arm of the US military, is addressing AI's most basic flaw: it has zero common sense. "Common sense is the dark matter of artificial intelligence," says Oren Etzioni, CEO of the Allen Institute for AI, a research nonprofit based in Seattle that is exploring the limits of the technology.
You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It's certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn't enough. So today Waymo announced that its vehicles have clocked more than 10 million miles since 2009.
When Facebook chief executive Mark Zuckerberg promised Congress that AI would help solve the problem of fake news, he revealed little in the way of how. New research brings us one step closer to figuring that out. In an extensive study that will be presented at a conference later this month, researchers from MIT, Qatar Computing Research Institute (QCRI), and Sofia University in Bulgaria tested over 900 possible variables for predicting a media outlet's trustworthiness--probably the largest set ever proposed. The researchers then trained a machine-learning model on different combinations of the variables to see which would produce the most accurate results. The best model accurately labeled news outlets with "low," "medium," or "high" factuality just 65% of the time.
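The study's setup, training a classifier on many outlet-level features to predict a three-way factuality label, can be sketched with a toy model. Everything below is invented for illustration: the features, the outlets, and the nearest-centroid classifier (the researchers tested 900+ variables and more capable models):

```python
# Toy sketch of the factuality-classification setup described above:
# each outlet is a feature vector; the label is "low"/"medium"/"high".
# Features, data, and the classifier choice are invented for illustration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_fit(training_data):
    """training_data: {label: [feature_vector, ...]} -> {label: centroid}."""
    return {label: centroid(vecs) for label, vecs in training_data.items()}

def predict(model, vector):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vector))
    return min(model, key=lambda label: dist(model[label]))

# Invented per-outlet features: [spelling-error rate, share of sourced
# claims, web-presence score] -- loosely echoing the kinds of signals
# such a study might test.
training = {
    "high":   [[0.01, 0.9, 1.0], [0.02, 0.8, 0.9]],
    "medium": [[0.05, 0.5, 0.6], [0.06, 0.6, 0.5]],
    "low":    [[0.20, 0.1, 0.1], [0.15, 0.2, 0.0]],
}
model = nearest_centroid_fit(training)
print(predict(model, [0.03, 0.85, 0.95]))  # a well-sourced outlet -> "high"
```

The 65% headline accuracy suggests why this is hard: outlet-level signals overlap heavily across the three classes, so even the best feature combination leaves a lot of ambiguity.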
Kai-Fu Lee, a prominent investor and entrepreneur based in Beijing, has been talking up China's artificial intelligence potential for a while. Now he's got a message for the United States. The real threat to American preeminence in AI isn't China's rise, he says--it's the US government's complacency. Lee is well placed to understand the issue, even if he isn't altogether unbiased. He worked on machine learning at Carnegie Mellon University during the 1980s, led Microsoft's research lab in China in the 1990s, and then spearheaded Google's venture into China in the 2000s.
This week, the US Defense Advanced Research Projects Agency announced a challenge to push the limits of robotic design and control. DARPA's Subterranean Challenge will require teams to guide robots through three different environments: a series of caves, a bunker-like "urban environment," and a labyrinth of confined tunnels. While the robots will be remote-controlled, they'll need some serious autonomous skills. They will need to rapidly map and explore unfamiliar environments even when communications are spotty and conditions are challenging for sensors. The teams will be allowed to use as many different types of robot as they like, but this will mean dealing with greater complexity in communications and coordination.
During the opening ceremony of Alibaba's 2018 computing conference last week, Simon Hu, president of Alibaba Cloud, invited the MC to taste some tea on the stage--but, first, to distinguish between tea roasted by hand and by machine. While the MC stared helplessly at two saucers filled with nearly identical-looking tea leaves, Hu pulled out his smartphone. He took a photo of each saucer and fed the photos into an app developed by Tmall, one of Alibaba's e-commerce platforms. Using an algorithm specially trained to tell the difference between different kinds of tea leaves, the app solved the problem. It was a small example of the interplay between Alibaba's research on fundamental technologies and the demands of its business.
Once cars can finally drive themselves, we'll have more time to enjoy the journey and do other, much more interesting stuff instead. At least that's the concept behind some of the designs below, developed by retail giant IKEA's "future living lab," SPACE10, based in Copenhagen. SPACE10 was asked to come up with designs for autonomous vehicles that would be extensions of our homes, offices, and local institutions. Some of the agency's seven ideas, shown below, are almost practical: it's easy to imagine autonomously driven cafés or pop-up stores, for instance.