Novel deep learning framework for symbolic regression

#artificialintelligence

Lawrence Livermore National Laboratory (LLNL) computer scientists have developed a new framework and an accompanying visualization tool that leverages deep reinforcement learning for symbolic regression problems, outperforming baseline methods on benchmark tasks. The paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021), one of the top machine learning conferences in the world, which takes place virtually May 3-7. In the paper, the LLNL team describes applying deep reinforcement learning to discrete optimization--problems that deal with discrete "building blocks" that must be combined in a particular order or configuration to optimize a desired property. The team focused on a type of discrete optimization called symbolic regression--finding short mathematical expressions that fit data gathered from an experiment.
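LLNL's framework generates candidate expressions with a neural network trained by reinforcement learning; the sketch below is not that method, only a minimal illustration of the underlying search problem it solves: scoring short candidate expressions against observed data and keeping the best fit. The hand-listed `candidates` dictionary and the toy ground truth are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy data drawn from an unknown ground-truth process, here y = x**2 + x.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=100)
y = x**2 + x

# Candidate expressions, each a short symbolic formula encoded as a callable.
# A real symbolic regression system would generate these from a grammar of
# operators (+, *, sin, exp, ...) rather than listing them by hand.
candidates = {
    "x": lambda x: x,
    "x + 1": lambda x: x + 1,
    "x**2": lambda x: x**2,
    "x**2 + x": lambda x: x**2 + x,
    "sin(x)": lambda x: np.sin(x),
}

def mse(f):
    """Mean squared error of a candidate expression on the data."""
    return float(np.mean((f(x) - y) ** 2))

# Score every candidate and keep the best-fitting expression.
best = min(candidates, key=lambda name: mse(candidates[name]))
print(best, mse(candidates[best]))  # best fit is "x**2 + x"
```

The hard part, which the LLNL paper addresses, is searching the combinatorially large space of possible expressions rather than a fixed shortlist.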


Making Machine Learning Robust Against Adversarial Inputs

Communications of the ACM

Machine learning has advanced radically over the past 10 years, and machine learning algorithms now achieve human-level performance or better on a number of tasks, including face recognition,[31] optical character recognition,[8] object recognition,[29] and playing the game Go.[26] Yet machine learning algorithms that exceed human performance in naturally occurring scenarios can fail dramatically when an adversary is able to modify their input data, even subtly. Machine learning is already used for many highly important applications and will be used in even more applications of even greater importance in the near future. Search algorithms, automated financial trading algorithms, data analytics, autonomous vehicles, and malware detection all depend critically on the underlying machine learning algorithms that interpret their respective domain inputs to provide intelligent outputs that facilitate the decision making of users or automated systems. As machine learning is deployed in more contexts where malicious adversaries have an incentive to interfere with its operation, it is increasingly important to provide protections, or "robustness guarantees," against adversarial manipulation.

The modern generation of machine learning services is the result of nearly 50 years of research and development in artificial intelligence--the study of computational algorithms and systems that reason about their environment to make predictions.[25] Machine learning is a subfield of artificial intelligence, and most modern machine learning, as used in production, can essentially be understood as applied function approximation: when there is some mapping from an input x to an output y that is difficult for a programmer to describe through explicit code, a machine learning algorithm can learn an approximation of the mapping by analyzing a dataset containing many examples of inputs and their corresponding outputs. Google's image-classification system, Inception, has been trained with millions of labeled images.[28] It can classify images as cats, dogs, airplanes, boats, or more complex concepts with accuracy that matches or exceeds that of humans. Increases in the size and accuracy of machine learning models are the result of recent advances in machine learning algorithms,[17] particularly in deep learning.[7]

One focus of the machine learning research community has been on developing models that make accurate predictions, as progress has in part been measured by results on benchmark datasets. In this context, accuracy denotes the fraction of test inputs that a model processes correctly--the proportion of images that an object-recognition algorithm recognizes as belonging to the correct class, or the proportion of executables that a malware detector correctly designates as benign or malicious. The estimate of a model's accuracy varies greatly with the choice of dataset used to compute it.
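The article's three threads--learning a mapping from examples, measuring accuracy as the fraction of inputs handled correctly, and the fragility of that accuracy under adversarial perturbation--can be made concrete in one small sketch. This is not the authors' setup: it assumes a toy 2-D dataset and a logistic-regression classifier, and it uses a fast-gradient-sign-style perturbation, one well-known way of constructing adversarial inputs, while the article discusses adversarial manipulation in general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Function approximation: learn a mapping x -> y from labeled examples.
# Toy 2-D data: two Gaussian blobs, labeled 0 and 1.
n = 200
x0 = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2))
x1 = rng.normal(loc=[+1.0, +1.0], scale=0.7, size=(n, 2))
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained by gradient descent on cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Accuracy: the fraction of inputs the model classifies correctly.
pred = (sigmoid(X @ w + b) > 0.5).astype(float)
print("clean accuracy:", np.mean(pred == y))

# Adversarial perturbation (fast gradient sign method): nudge each input
# a small step in the direction that increases the loss,
#   x_adv = x + eps * sign(dL/dx),
# where for this model dL/dx = (p - y) * w. Points near the decision
# boundary are pushed across it, and accuracy drops.
eps = 0.5
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign(np.outer(p - y, w))
pred_adv = (sigmoid(X_adv @ w + b) > 0.5).astype(float)
print("adversarial accuracy:", np.mean(pred_adv == y))
```

On high-dimensional inputs such as images, far smaller per-pixel perturbations suffice, which is why these attacks can be imperceptible to humans while devastating to model accuracy.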