Memory-Based Learning


New Project at Jefferson Lab Aims to Use Machine Learning to Improve Up-Time of Particle Accelerators

#artificialintelligence

NEWPORT NEWS, Va., Jan. 30, 2020 – More than 1,600 nuclear physicists worldwide depend on the Continuous Electron Beam Accelerator Facility for their research. Located at the Department of Energy's Thomas Jefferson National Accelerator Facility in Newport News, Va., CEBAF is a DOE User Facility that is scheduled to conduct research for limited periods each year, so it must perform at its best during each scheduled run. But a glitch in any one of CEBAF's tens of thousands of components can cause the particle accelerator to fault temporarily and interrupt beam delivery, sometimes for mere seconds and other times for many hours. Now, accelerator scientists are turning to machine learning in hopes that they can recover CEBAF from faults more quickly and one day even prevent them. Anna Shabalina is a Jefferson Lab staff member and principal investigator on the project, which has been funded by the Laboratory Directed Research & Development program for fiscal year 2020.
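
The article does not spell out which techniques the project will use. Purely as an illustrative sketch, work like this often begins with a supervised classifier trained on labeled fault events; the feature count, fault categories, and synthetic data below are made-up placeholders, assuming Python with scikit-learn.

# Illustrative only: the article does not describe a specific model.
# A minimal sketch of supervised fault classification, assuming each fault
# event is summarized by a fixed-length feature vector (e.g., statistics of
# component signals) with a label naming the fault type -- all hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 fault events, 20 summary features each,
# 3 made-up fault categories.
X = rng.normal(size=(500, 20))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# On real data, a report like this would show which fault types the model
# identifies reliably enough to help operators speed up recovery decisions.
print(classification_report(y_test, clf.predict(X_test)))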



Apple could use machine learning to improve Apple Maps GPS data

#artificialintelligence

A new Apple patent application suggests that the company is working on technology that would allow machine learning to augment existing GPS location mapping. The patent application, spotted by Apple Insider, is titled "Machine learning-assisted satellite-based positioning" and appears to use machine learning as a comparison source for GPS location data. The gist seems to be that machine learning would generate a model of a device's estimated location. That estimate can then be compared with GPS location data, allowing Apple Maps to account for factors such as a weak GPS signal when placing a user's location on a map in the future.
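
As a hypothetical sketch of the idea in the patent application (blending a learned position estimate with a raw GPS fix, and trusting the fix less when the signal is weak), the weighting scheme and every name below are illustrative assumptions, not Apple's actual method.

# Hypothetical sketch: combine a learned estimate of position with a raw GPS
# fix, trusting the GPS fix less when its reported accuracy is poor.
from dataclasses import dataclass

@dataclass
class Position:
    lat: float
    lon: float

def fuse_position(ml_estimate: Position, gps_fix: Position,
                  gps_accuracy_m: float) -> Position:
    """Blend the model's estimate with the GPS fix.

    gps_accuracy_m: reported GPS error radius in meters; larger values
    (weak signal, urban canyon) shift the blend toward the learned estimate.
    """
    # Map accuracy to a weight in (0, 1]: ~1.0 for a sharp fix (< 5 m),
    # approaching 0 as the error radius grows.
    gps_weight = 5.0 / max(gps_accuracy_m, 5.0)
    return Position(
        lat=gps_weight * gps_fix.lat + (1 - gps_weight) * ml_estimate.lat,
        lon=gps_weight * gps_fix.lon + (1 - gps_weight) * ml_estimate.lon,
    )

# Example: a 50 m error radius leans heavily on the learned estimate.
print(fuse_position(Position(37.7749, -122.4194),
                    Position(37.7760, -122.4180), gps_accuracy_m=50.0))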


The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification

Neural Information Processing Systems

We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the quintessential observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy.
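
The sketch below is not the paper's inference procedure (BCM performs joint Bayesian inference over cluster labels, prototypes, and subspaces). It only illustrates the kind of interpretable output the representation provides: for each cluster, one real observation as a prototype plus a small set of characterizing features, using plain k-means as a stand-in.

# Rough sketch of a prototype-and-subspace summary, not the BCM itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
k, n_subspace = 3, 2

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

for c in range(k):
    members = X[km.labels_ == c]
    # Prototype: the actual observation closest to the cluster center,
    # so it can be shown to a user as a representative example.
    proto = members[np.argmin(
        np.linalg.norm(members - km.cluster_centers_[c], axis=1))]
    # Subspace: features whose within-cluster variance is small relative to
    # their global variance, i.e., features that characterize the cluster.
    importance = X.var(axis=0) / (members.var(axis=0) + 1e-12)
    subspace = np.argsort(importance)[-n_subspace:]
    print(f"cluster {c}: prototype={proto.round(2)}, subspace features={subspace}")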


Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1

#artificialintelligence

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . .



Why Overfitting is a Bad Idea and How to Avoid It (Part 1: Overfitting in general)

#artificialintelligence

We want our AI models to be as accurate as they can be. That's one of the selling points of AI -- that we can encode the best version of our past knowledge and have an automated model infer and apply our judgement. How can we tell when the model is accurate enough to trust? More importantly, how can we tell if our efforts to improve accuracy are actually making the model worse? That can happen because of a training problem called overfitting.
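
Here is a minimal sketch of how overfitting shows up in practice, assuming Python and scikit-learn: as model capacity grows, training error keeps dropping while error on held-out data starts to climb. The data and polynomial degrees below are arbitrary illustrations.

# Compare training vs. validation error as model capacity increases.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=80).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=80)

x_train, x_val, y_train, y_val = train_test_split(
    x, y, test_size=0.4, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    val_err = mean_squared_error(y_val, model.predict(x_val))
    # A training error far below the validation error is the telltale sign.
    print(f"degree {degree:2d}: train MSE={train_err:.3f}, validation MSE={val_err:.3f}")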



IBM's Watson Center pitches AI for everyone, from chefs to engineers

#artificialintelligence

At the IBM Watson Experience Center, digital and physical worlds meet in a futuristic-looking lounge overlooking San Francisco's Financial District. "Regardless of the industry you're in, there's likely an application for AI … even as a chef," said IBM's data and AI engagement lead Euniq Nebo as he stood before a 32-foot digital screen displaying human-size images of various professionals. A chef on the screen stepped forward and came to life. Nebo spoke of the questions facing a restaurant chef, such as which cutting-edge tools to invest in or whether to incorporate local produce into a cuisine. IBM is betting its AI can "extract the insights" from data to help its clients stay ahead of the curve, Nebo said.


How to Build an Assistant Using IBM Watson (Part 1 of 2)

#artificialintelligence

In these two articles, I'll show you how to build an assistant that will control a mock smart home thermostat. These articles are intended to get you started with building assistants by creating something relevant in the real world. If you reach the end of the article and want to take a deeper dive or get help with a different chat framework such as Twilio, Drift, Lex, or something else, please leave me a comment. Below is the list of tools and services that we'll cover. I work in product management, and strictly speaking, a product manager doesn't need to possess tech skills to do their work. After all, that's what engineers are for.
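
As a minimal sketch of the core loop, assuming the ibm-watson Python SDK (AssistantV2) and an assistant already set up with a thermostat intent: the API key, service URL, assistant ID, and intent name below are placeholders, not values from the article.

# Send a user utterance to Watson Assistant and act on the returned intent.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")          # placeholder
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

ASSISTANT_ID = "YOUR_ASSISTANT_ID"                        # placeholder
session_id = assistant.create_session(
    assistant_id=ASSISTANT_ID).get_result()["session_id"]

def set_temperature(degrees: float) -> None:
    # Mock thermostat: a real project would call device hardware or an API.
    print(f"Thermostat set to {degrees} degrees")

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session_id,
    input={"message_type": "text", "text": "Set the temperature to 72"},
).get_result()

for intent in response["output"].get("intents", []):
    if intent["intent"] == "set_temperature":             # hypothetical intent name
        set_temperature(72)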