Memory-Based Learning


CARMA: A Case-Based Rangeland Management Adviser

AI Magazine

CARMA is an advisory system for rangeland grasshopper infestations that demonstrates how AI technology can deliver expert advice to compensate for cutbacks in public services. CARMA uses two knowledge sources for the key task of predicting forage consumption by grasshoppers: (1) cases obtained by asking a group of experts to solve representative hypothetical problems and (2) a numeric model of rangeland ecosystems. These knowledge sources are integrated through the technique of model-based adaptation, in which case-based reasoning is used to find an approximate solution, and the model is used to adapt this approximate solution into a more precise solution. CARMA has been used in Wyoming counties since 1996. The combination of a simple interface, flexible control strategy, and integration of multiple knowledge sources makes CARMA accessible to inexperienced users and capable of producing advice comparable to that produced by human experts.
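The core of CARMA's model-based adaptation can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not CARMA's actual code: the case representation, feature weights, and the stand-in forage model are all assumptions made for the example. It shows the pattern the abstract describes: retrieve the most similar expert-solved case, then use the numeric model to adjust that case's answer toward the query.

```python
# Minimal sketch of model-based adaptation (hypothetical data and model, not CARMA's code).
# A case stores infestation features and the expert's forage-consumption estimate.

from dataclasses import dataclass

@dataclass
class Case:
    features: dict             # e.g. {"density": 12.0, "phenology": 3.0, "forage": 0.7}
    expert_consumption: float  # expert's estimate for this case (fraction of forage lost)

def distance(a: dict, b: dict) -> float:
    """Simple distance between feature vectors (illustrative, unweighted)."""
    return sum(abs(a[k] - b[k]) for k in a)

def model_prediction(features: dict) -> float:
    """Stand-in for the numeric rangeland-ecosystem model (hypothetical formula)."""
    return min(1.0, 0.03 * features["density"] * (1.0 - 0.1 * features["phenology"]))

def predict_consumption(query: dict, case_base: list[Case]) -> float:
    """Case-based reasoning gives an approximate answer; the model adapts it.

    The retrieved case's expert estimate is shifted by the difference the numeric
    model predicts between the query and the retrieved case (model-based adaptation).
    """
    nearest = min(case_base, key=lambda c: distance(query, c.features))
    adjustment = model_prediction(query) - model_prediction(nearest.features)
    return nearest.expert_consumption + adjustment
```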


AI and Music: From Composition to Expressive Performance

AI Magazine

In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. Representative examples of each type are briefly described. Then, we look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and one that has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch," that is, the knowledge applied when performing a score.


Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

AI Magazine

Following a brief overview discussing why we prefer listening to expressive music instead of lifeless synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance with an emphasis on AI-related approaches. In the main part of the paper we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements the audio information with information about the musician's gestures. Music is played through our bodies; therefore, capturing the performer's gestures is a fundamental aspect that must be taken into account in future expressive music renderings. This paper is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.
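To make the case-based approach concrete, here is a minimal sketch of a CBR-style tempo transformation step. It is an illustration under assumed representations (a TempoCase record with per-note timing deviations), not TempoExpress's actual data structures or algorithm: retrieve the case whose source/target tempo pair is closest to the requested transformation and reuse its expressive timing deviations instead of applying a uniform time-stretch.

```python
# Illustrative sketch of a case-based tempo transformation (assumed representation,
# not TempoExpress's). A case pairs a source/target tempo with the per-note timing
# deviations an expressive performance exhibited after the tempo change.

from dataclasses import dataclass

@dataclass
class TempoCase:
    source_tempo: float              # beats per minute of the original performance
    target_tempo: float              # beats per minute after transformation
    timing_deviations: list[float]   # per-note onset deviations (seconds) in the target performance

def retrieve(cases: list[TempoCase], source: float, target: float) -> TempoCase:
    """Pick the case whose tempo pair is closest to the requested transformation."""
    return min(cases, key=lambda c: abs(c.source_tempo - source) + abs(c.target_tempo - target))

def transform(note_onsets: list[float], source: float, target: float,
              cases: list[TempoCase]) -> list[float]:
    """Scale onsets to the new tempo, then reuse the retrieved case's deviations
    so the result is expressive rather than a uniform (lifeless) time-stretch."""
    case = retrieve(cases, source, target)
    scale = source / target
    return [onset * scale + dev for onset, dev in zip(note_onsets, case.timing_deviations)]
```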


How doctors are using machine learning to improve health outcomes

#artificialintelligence

An ounce of prevention is worth a pound of cure, as the old saying goes. Until recently, that simply meant living a healthy lifestyle, getting regular checkups, and hoping that signs of anything serious were caught early. But today, doctors are using artificial intelligence (AI) and machine learning systems to make preventative care, diagnosis, and treatment more accurate and effective than ever. "Machine learning involves adaptive learning and as such, can identify patterns over time as new data is aggregated and analyzed," explains Melissa Manice, co-founder of healthcare startup Cohero Health. "Therefore, machine learning and AI allows doctors to detect abnormal behaviors and predictive insights with the application of clinical thresholds to machine learning algorithms," she continues.


IBM's Watson AIOps automates IT anomaly detection and remediation

#artificialintelligence

Today during its annual IBM Think conference, IBM announced the launch of Watson AIOps, a service that taps AI to automate the real-time detection, diagnosing, and remediation of network anomalies. It also unveiled new offerings targeting the rollout of 5G technologies and the devices on those networks, as well as a coalition of telecommunications partners -- the IBM Telco Network Cloud Ecosystem -- that will work with IBM to deploy edge computing technologies. Watson AIOps marks IBM's foray into the mammoth AIOps market, which is expected to grow from $2.55 billion in 2018 to $11.02 billion by 2023, according to Markets and Markets. That might be a conservative projection in light of the pandemic, which is forcing IT teams to increasingly conduct their work remotely. Without hands-on access to infrastructure, tools like Watson AIOps could help prevent major outages, the cost of which a study from Aberdeen pegged at $260,000 per hour.


Generalization through Memorization: Nearest Neighbor Language Models - Facebook Research

#artificialintelligence

We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong WIKITEXT-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge.
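The interpolation at the heart of kNN-LM is simple enough to sketch directly. The snippet below assumes precomputed datastore keys and values and a generic lm_probs vector (the names and defaults are chosen for the example, not taken from the authors' released code); it computes p(y|x) = λ·p_kNN(y|x) + (1 − λ)·p_LM(y|x), with neighbor weights given by a softmax over negative distances in the LM embedding space.

```python
# Minimal sketch of the kNN-LM interpolation described above (assumed shapes and
# names; not the authors' released code). The datastore maps context embeddings
# from the frozen pre-trained LM to the token that followed each context.

import numpy as np

def knn_lm_probs(context_embedding: np.ndarray,   # (d,) query from the frozen LM
                 datastore_keys: np.ndarray,      # (n, d) stored context embeddings
                 datastore_values: np.ndarray,    # (n,) next-token ids for each key
                 lm_probs: np.ndarray,            # (V,) the LM's next-token distribution
                 k: int = 8, temperature: float = 1.0, lam: float = 0.25) -> np.ndarray:
    """Return p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x)."""
    # Squared L2 distance in the LM embedding space.
    dists = np.sum((datastore_keys - context_embedding) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]
    # Neighbors vote for their stored next token, weighted by a softmax over negative distance.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    knn_probs = np.zeros_like(lm_probs)
    np.add.at(knn_probs, datastore_values[nearest], weights)
    return lam * knn_probs + (1.0 - lam) * lm_probs
```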


IBM Watson can answer all your coronavirus questions

#artificialintelligence

In order to help government agencies, academic institutions and healthcare organizations handle the influx of calls and messages regarding the coronavirus, IBM has announced that it will provide a bundle of Watson services for free. The company will combine Watson Assistant, which uses IBM Research's natural language processing technology, with Watson Discovery to create IBM Watson Assistant for Citizens. The new Watson suite will be available online and on smartphones and will be free for at least 90 days. According to IBM, wait times for coronavirus-related questions are exceeding two hours, so the company believes that using AI via Watson may be able to help speed up response times. "While helping government agencies and healthcare institutions use AI to get critical information out to their citizens remains a high priority right now, the current environment has made it clear that every business in every industry should find ways to digitally engage with their clients and employees. With today's news, IBM is taking years of experience in helping thousands of global businesses and institutions use Natural Language Processing and other advanced AI technologies to better meet the demands of their constituents, and now applying it to the COVID-19 crisis. AI has the power to be your assistant during this uncertain time."


Former IBM Watson Team Leader David Ferrucci on AI and Elemental Cognition

#artificialintelligence

Dr. David Ferrucci is one of the few people who have created a benchmark in the history of AI, because when IBM Watson won Jeopardy! we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012, when we spent an hour on Watson's intricacies and importance. Well, it has been almost 8 years since our original conversation, and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that didn't happen but were supposed to, and our present and future in relation to artificial intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did. During this 90-minute interview with David Ferrucci, we cover a variety of interesting topics such as: his perspective on IBM Watson; AI, hype, and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission, and architecture; Noam Chomsky and Marvin Minsky's skepticism of Watson; deductive, inductive, and abductive learning; leading and managing from the architecture down; black-box vs. open-box AI; CLARA, the Collaborative Learning and Reading Agent, and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; and why the world is transformed by asking questions.


Learn to Forget: User-Level Memorization Elimination in Federated Learning

arXiv.org Machine Learning

Federated learning is a decentralized machine learning technique that has attracted widespread attention in both the research community and the real-world market. However, current privacy-preserving federated learning schemes only provide a secure way for users to contribute their private data; they leave no way to withdraw that contribution from the model update. Such an irreversible setting potentially breaks regulations on data protection and increases the risk of data extraction. To resolve this problem, this paper describes a novel concept for federated learning, called memorization elimination. Based on this concept, we propose \sysname, a federated learning framework that allows a user to eliminate the memorization of its private data in the trained model. Specifically, each user in \sysname is deployed with a trainable dummy gradient generator. After a number of training steps, the generator can produce dummy gradients that stimulate the neurons of a machine learning model to eliminate the memorization of the specific data. We also prove that the additional memorization elimination service of \sysname does not break the common procedure of federated learning or lower its security.
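As a purely conceptual sketch of the dummy-gradient idea, and not the mechanism implemented in \sysname, the snippet below assumes that nudging the model's predictions on a user's private examples toward an uninformative (uniform) distribution approximates eliminating their memorization; the resulting gradients would be sent in place of that user's ordinary federated update.

```python
# Conceptual sketch only: an illustrative "dummy gradient" unlearning step, not the
# \sysname mechanism from the paper. Assumption: pushing the model's predictions on
# the user's private examples toward a uniform distribution approximates eliminating
# what the model memorized about them.

import torch
import torch.nn.functional as F

def dummy_unlearning_gradients(model: torch.nn.Module,
                               private_x: torch.Tensor,
                               num_classes: int) -> list[torch.Tensor]:
    """Compute gradients that move predictions on private data toward uniform outputs."""
    model.zero_grad()
    logits = model(private_x)
    uniform = torch.full_like(logits, 1.0 / num_classes)
    # KL divergence between the current predictions and the uninformative target.
    loss = F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")
    loss.backward()
    # These gradients would be sent in place of the user's normal federated update.
    return [p.grad.detach().clone() for p in model.parameters() if p.grad is not None]
```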


Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity

Neural Information Processing Systems

We study finite sample expressivity, i.e., memorization power of ReLU networks. Recent results require $N$ hidden nodes to memorize/interpolate arbitrary $N$ data points. In contrast, by exploiting depth, we show that 3-layer ReLU networks with $\Omega(\sqrt{N})$ hidden nodes can perfectly memorize most datasets with $N$ points. We also prove that width $\Theta(\sqrt{N})$ is necessary and sufficient for memorizing $N$ data points, proving tight bounds on memorization capacity. The sufficiency result can be extended to deeper networks; we show that an $L$-layer network with $W$ parameters in the hidden layers can memorize $N$ data points if $W = \Omega(N)$.