What will be the next thing to revolutionize data science in 2019? Reinforcement learning. While RL has been around for a long time in academia, it has seen hardly any industry adoption. Why? Partly because there has been plenty of low-hanging fruit to pick in predictive analytics, but mostly because of barriers in implementation, knowledge, and available tools. The potential value of using RL in proactive analytics and AI is enormous, but it also demands a greater skill set to master.
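To make concrete what the article means by RL, here is a minimal sketch of tabular Q-learning on a hypothetical toy problem (a five-state corridor where reaching the last state earns a reward). The environment, hyperparameters, and reward structure are illustrative assumptions, not anything from the article; they show only the generic trial-and-error loop that distinguishes RL from supervised predictive analytics.

```python
import random

# Hypothetical toy environment: a 1-D corridor of 5 states; reaching state 4
# (the goal) yields reward 1. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: learn action values Q(s, a) purely from interaction.
random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s = 0
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
        s2, r, done = step(s, a)
        # Bellman update: nudge Q(s,a) toward reward + discounted future value.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# Greedy policy after training: it should move right in every pre-goal state.
policy = [row.index(max(row)) for row in q]
print(policy)
```

Unlike a supervised model trained on labeled examples, nothing here is ever told the correct action; the policy emerges from delayed reward alone, which is also why RL problems demand more careful design of states, actions, and rewards.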
This installment of Research for Practice features a curated selection from Alex Ratner and Chris Ré, who provide an overview of recent developments in Knowledge Base Construction (KBC). While knowledge bases have a long history dating to the expert systems of the 1970s, recent advances in machine learning have led to a knowledge base renaissance, with knowledge bases now powering major product functionality including Google Assistant, Amazon Alexa, Apple Siri, and Wolfram Alpha. Ratner and Ré's selections highlight key considerations in the modern KBC process, from interfaces that extract knowledge from domain experts to algorithms and representations that transfer knowledge across tasks.
Virtual assistants like Amazon Alexa, Microsoft Cortana, Google Assistant, and Apple Siri employ conversational experiences and language-understanding technologies to help users accomplish a range of tasks, from reminder creation to home automation. Voice is the primary means of engagement, and voice-activated assistants are growing in popularity; estimates as of June 2017 put the number of monthly active users of voice-based assistant devices in the U.S. at 36 million [a]. Many are "headless" devices that lack displays. Smart speakers (such as Amazon Echo and Google Home) are among the most popular devices in this category. Speakers are tethered to one location, but there are other settings where voice-activated assistants can be helpful, including automobiles (such as for suggesting convenient locations to help with pending tasks [5]) and personal audio (such as for providing private notifications and suggestions [18]).
Countless dollars and entire scientific careers have been dedicated to predicting where and when the next big earthquake will strike. But unlike weather forecasting, which has significantly improved with the use of better satellites and more powerful mathematical models, earthquake prediction has been marred by repeated failure. Some of the world's most destructive earthquakes, including those in China in 2008, Haiti in 2010, and Japan in 2011, occurred in areas that seismic hazard maps had deemed relatively safe. The last large earthquake to strike Los Angeles, Northridge in 1994, occurred on a fault that did not appear on seismic maps. Now, a growing number of scientists say that artificial intelligence is changing how they can analyze massive amounts of seismic data, helping them better understand earthquakes, anticipate how they will behave, and provide quicker and more accurate early warnings.
Self-driving cars are being developed by several major technology companies and carmakers. When a driver slams on the brakes to avoid hitting a pedestrian crossing the road illegally, she is making a moral decision that shifts risk from the pedestrian to the people in the car. Self-driving cars might soon have to make such ethical judgments on their own, but settling on a universal moral code for the vehicles could be a thorny task, suggests a survey of 2.3 million people from around the world. The largest-ever survey of machine ethics [1], published today in Nature, finds that many of the moral principles that guide a driver's decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally.
A small metal droplet can propel a wheeled robot forward with a simple electric current. The technique paves the way for larger robots that can trundle like tumbleweeds through unfriendly terrain. Shi-Yang Tang at the University of Wollongong in Australia and his colleagues started with a plastic wheel about five centimetres across with walls along its edges, shaped like a car tyre. Inside the wheel they placed a drop of liquid metal made mostly of gallium.
Tiny robots no bigger than a cell could be mass-produced using a new method developed by researchers at MIT. The microscopic devices, which the team calls "syncells" (short for synthetic cells), might eventually be used to monitor conditions inside an oil or gas pipeline, or to search out disease while floating through the bloodstream. The key to making such tiny devices in large quantities lies in a method the team developed for controlling the natural fracturing process of atomically thin, brittle materials, directing the fracture lines so that they produce minuscule pockets of a predictable size and shape. Embedded inside these pockets are electronic circuits and materials that can collect, record, and output data. The novel process, called "autoperforation," is described in a paper published today in the journal Nature Materials, by MIT Professor Michael Strano, postdoc Pingwei Liu, graduate student Albert Liu, and eight others at MIT.
IBM has announced AI OpenScale, a service that aims to bring visibility and explainability to AI models for enterprises. When it comes to adopting AI for business use, there are multiple concerns among enterprise customers. Lack of visibility into models, unwanted bias, interoperability among tools and frameworks, and compliance in building and consuming AI models are some of the critical issues with AI. IBM AI OpenScale provides explanations into how AI models are making decisions, and automatically detects and mitigates bias to produce fair, trusted outcomes. It attempts to bring confidence to enterprises by addressing the challenges involved in adopting artificial intelligence.
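The announcement does not detail how OpenScale measures bias, but one widely used fairness metric that such tools commonly report is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below is a generic illustration with made-up data, not the OpenScale API; the group names, outcomes, and the 0.8 threshold (a common rule of thumb) are all assumptions for the example.

```python
# Hypothetical model predictions for two groups; 1 = favorable outcome
# (e.g. loan approved). These numbers are invented for illustration.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # privileged group: 6/8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # unprivileged group: 3/8 favorable
}

def favorable_rate(labels):
    """Fraction of predictions that are the favorable outcome."""
    return sum(labels) / len(labels)

# Disparate impact: unprivileged rate divided by privileged rate.
# A common rule of thumb flags ratios below 0.8 as potentially biased.
ratio = favorable_rate(outcomes["group_b"]) / favorable_rate(outcomes["group_a"])
flagged = ratio < 0.8

print(round(ratio, 2), flagged)
```

A monitoring service built around a metric like this can raise an alert whenever the ratio for a protected attribute drifts below the threshold in production traffic, which is the kind of ongoing, automatic check the announcement describes.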
A new report says Uber plans to roll out a fleet of food-delivery drones by 2021. Uber's flight ambitions expand beyond just shuttling people; they also include delivering food. According to a job posting spotted by The Wall Street Journal, Uber is looking to hire an executive to help launch its drone food-delivery program, known internally as UberExpress.