When machine learning algorithms are used in life-critical or mission-critical applications (e.g., self-driving cars, cyber security, surgical robotics), it is important to ensure that they provide some high-level correctness guarantees. We introduce a paradigm called Trusted Machine Learning (TML) with the goal of making learning techniques more trustworthy. We outline methods that show how symbolic analysis (specifically parametric model checking) can be used to learn the dynamical model of a system such that the learned model satisfies correctness requirements specified in the form of temporal logic properties (e.g., safety, liveness). When a learned model does not satisfy the desired guarantees, we try two approaches: (1) Model Repair, wherein we modify the learned model directly, and (2) Data Repair, wherein we modify the data so that re-learning from the modified data will result in a trusted model. Model Repair makes the minimal changes to the trained model needed to satisfy the properties, whereas Data Repair makes the minimal changes to the dataset used to train the model that ensure satisfaction of the properties. We show how the Model Repair and Data Repair problems can be solved for probabilistic models, specifically Discrete-Time Markov Chains (DTMCs) and Markov Decision Processes (MDPs), when the desired properties are expressed in Probabilistic Computation Tree Logic (PCTL). Specifically, we outline how the parameter learning problem for probabilistic Markov models under temporal logic constraints can be equivalently expressed as a non-linear optimization with non-linear rational constraints, obtained by performing symbolic transformations using a parametric model checker. We illustrate the approach on two case studies: a controller for automobile lane changing, and a query router for a wireless sensor network.
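To make the optimization formulation concrete, the following is a minimal sketch of Model Repair on a hypothetical two-parameter DTMC. The chain, the learned parameter values, and the 0.1 safety threshold are all invented for illustration; the closed-form reachability expression stands in for the rational function a parametric model checker would produce, and a coarse grid search stands in for a real nonlinear solver.

```python
# Model Repair as constrained optimization on a toy parametric DTMC.
# States and transitions (hypothetical example):
#   s0 --p--> s1,   s0 --(1-p)--> bad
#   s1 --q--> goal, s1 --(1-q)--> s0
# Parametric model checking yields the probability of reaching `bad`
# as a rational function of the parameters (p, q):
#   Pr[reach bad] = (1 - p) / (1 - p * (1 - q))

def unsafe_prob(p, q):
    """Closed-form reachability probability of the `bad` state."""
    return (1.0 - p) / (1.0 - p * (1.0 - q))

# Parameter values "learned from data" (invented for this sketch).
p0, q0 = 0.80, 0.70
THRESHOLD = 0.10  # PCTL-style safety requirement: Pr[reach bad] <= 0.1

def model_repair(p0, q0, threshold, step=0.002):
    """Find the feasible (p, q) closest to the learned values.

    A real implementation would hand the rational constraint to a
    nonlinear optimizer; a coarse grid search keeps this sketch
    dependency-free and easy to follow."""
    best, best_dist = None, float("inf")
    n = int(1.0 / step)
    for i in range(1, n):
        p = i * step
        for j in range(1, n):
            q = j * step
            if unsafe_prob(p, q) <= threshold:
                d = (p - p0) ** 2 + (q - q0) ** 2  # minimal-change objective
                if d < best_dist:
                    best, best_dist = (p, q), d
    return best

p_star, q_star = model_repair(p0, q0, THRESHOLD)
print(f"repaired parameters: p={p_star:.3f}, q={q_star:.3f}")
print(f"unsafe probability:  {unsafe_prob(p_star, q_star):.4f}")
```

The learned parameters here violate the property (the unsafe probability is roughly 0.26), so the repair nudges (p, q) to the nearest point where the rational constraint holds, which mirrors the paper's "minimal change to the trained model" formulation.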
You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check whether these new algorithms are safe.
Today at the Frankfurt motor show, one of the biggest and most prestigious motor shows in the world, Sheryl Sandberg, COO of Facebook, spoke before German Chancellor Angela Merkel. Now what are Facebook and, most importantly, Sheryl Sandberg doing at an automotive industry event? The obvious answer that comes to mind when one relates Facebook to the car industry is the billions of dollars the industry spends on marketing and advertising. However, that does not seem to be Facebook's game plan, as Sheryl's talk and the company's pavilion made clear. Facebook seems to have a strategy of leveraging its capabilities in social marketing, AR & VR and, interestingly, who would have thought of it, its advanced AI and deep learning capabilities to support the development of autonomous vehicles.
Robots are coming for our jobs, and the work left over for humans is getting worse and paying less. Changes in technology and culture over the past decade have created jobs your high school guidance counselor could never have imagined in their wildest dreams. Meanwhile, the safe, traditional jobs like lawyering and doctoring come with ever-increasing price tags and fewer career prospects. Unless the post-work utopia that theorists are raving about comes around soon, picking your career is one of the most important choices of your life. You might as well make it one that's fulfilling and cuts a decent paycheck.