A high-bias, low-variance introduction to Machine Learning for physicists

arXiv.org Machine Learning

Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. The purpose of this review is to provide an introduction to the core concepts and tools of machine learning in a manner easily understood and intuitive to physicists. The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, and generalization before moving on to more advanced topics in both supervised and unsupervised learning. Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines), and variational methods. Throughout, we emphasize the many natural connections between ML and statistical physics. A notable aspect of the review is the use of Python notebooks to introduce modern ML/statistical packages to readers using physics-inspired datasets (the Ising Model and Monte-Carlo simulations of supersymmetric decays of proton-proton collisions). We conclude with an extended outlook discussing possible uses of machine learning for furthering our understanding of the physical world as well as open problems in ML where physicists may be able to contribute. (Notebooks are available at https://physics.bu.edu/~pankajm/MLnotebooks.html)
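
As a concrete illustration of the bias-variance tradeoff and regularization mentioned above, the following minimal Python sketch (not taken from the review's notebooks; the synthetic dataset, polynomial degree, and regularization strengths are illustrative assumptions) fits a ridge-regularized polynomial regression at two penalty settings: a large penalty tends to underfit (high bias), while a near-zero penalty can overfit the training data (high variance).

```python
# Minimal sketch (not from the review's notebooks): bias-variance tradeoff
# via ridge-regularized polynomial regression on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x).ravel() + 0.3 * rng.normal(size=200)  # noisy nonlinear target

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Strong regularization (high bias) vs. nearly no regularization (high variance)
for alpha in (10.0, 1e-6):
    model = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=alpha))
    model.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    print(f"alpha={alpha:g}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

Comparing train and test errors across the two settings is the standard diagnostic for where a model sits on the bias-variance spectrum.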


Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI

arXiv.org Artificial Intelligence

This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literature is vast; thus, this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). The Report expresses the explainability issues and challenges in modern AI and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.


Artificial Intelligence: from Research to Application; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2019)

arXiv.org Artificial Intelligence

The TriRhenaTech alliance universities and their partners presented their competences in the field of artificial intelligence and their cross-border cooperations with industry at the tri-national conference 'Artificial Intelligence: from Research to Application' on March 13th, 2019 in Offenburg. The TriRhenaTech alliance is a network of universities in the Upper Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprised of 14 'grandes écoles' in the fields of engineering, architecture, and management), and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


Computers and Thought

Classics

E.A. Feigenbaum and J. Feldman (Eds.). Computers and Thought. McGraw-Hill, 1963. This collection includes twenty classic papers by such pioneers as A. M. Turing and Marvin Minsky who were behind the pivotal advances in artificially simulating human thought processes with computers. All Parts are available as downloadable pdf files; most individual chapters are also available separately. The contents are:

COMPUTING MACHINERY AND INTELLIGENCE. A. M. Turing.
CHESS-PLAYING PROGRAMS AND THE PROBLEM OF COMPLEXITY. Allen Newell, J.C. Shaw, and H.A. Simon.
SOME STUDIES IN MACHINE LEARNING USING THE GAME OF CHECKERS. A. L. Samuel.
EMPIRICAL EXPLORATIONS WITH THE LOGIC THEORY MACHINE: A CASE STUDY IN HEURISTICS. Allen Newell, J.C. Shaw, and H.A. Simon.
REALIZATION OF A GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter.
EMPIRICAL EXPLORATIONS OF THE GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter, J.R. Hansen, and D. W. Loveland.
SUMMARY OF A HEURISTIC LINE BALANCING PROCEDURE. Fred M. Tonge.
A HEURISTIC PROGRAM THAT SOLVES SYMBOLIC INTEGRATION PROBLEMS IN FRESHMAN CALCULUS. James R. Slagle.
BASEBALL: AN AUTOMATIC QUESTION ANSWERER. Bert F. Green Jr., Alice K. Wolf, Carol Chomsky, and Kenneth Laughery.
INFERENTIAL MEMORY AS THE BASIS OF MACHINES WHICH UNDERSTAND NATURAL LANGUAGE. Robert K. Lindsay.
PATTERN RECOGNITION BY MACHINE. Oliver G. Selfridge and Ulric Neisser.
A PATTERN-RECOGNITION PROGRAM THAT GENERATES, EVALUATES, AND ADJUSTS ITS OWN OPERATORS. Leonard Uhr and Charles Vossler.
GPS, A PROGRAM THAT SIMULATES HUMAN THOUGHT. Allen Newell and H.A. Simon.
THE SIMULATION OF VERBAL LEARNING BEHAVIOR. Edward A. Feigenbaum.
PROGRAMMING A MODEL OF HUMAN CONCEPT FORMULATION. Earl B. Hunt and Carl I. Hovland.
SIMULATION OF BEHAVIOR IN THE BINARY CHOICE EXPERIMENT. Julian Feldman.
A MODEL OF THE TRUST INVESTMENT PROCESS. Geoffrey P. E. Clarkson.
A COMPUTER MODEL OF ELEMENTARY SOCIAL BEHAVIOR. John T. Gullahorn and Jeanne E. Gullahorn.
TOWARD INTELLIGENT MACHINES. Paul Armer.
STEPS TOWARD ARTIFICIAL INTELLIGENCE. Marvin Minsky.
A SELECTED DESCRIPTOR-INDEXED BIBLIOGRAPHY TO THE LITERATURE ON ARTIFICIAL INTELLIGENCE. Marvin Minsky.


Deep Reinforcement Learning

arXiv.org Machine Learning

We discuss deep reinforcement learning in an overview style. We draw a big picture, filled with details. We discuss six core elements, six important mechanisms, and twelve applications, focusing on contemporary work while placing it in historical context. We start with background on artificial intelligence, machine learning, deep learning, and reinforcement learning (RL), with pointers to resources. Next we discuss RL core elements, including value function, policy, reward, model, exploration vs. exploitation, and representation. Then we discuss important mechanisms for RL, including attention and memory, unsupervised learning, hierarchical RL, multi-agent RL, relational RL, and learning to learn. After that, we discuss RL applications, including games, robotics, natural language processing (NLP), computer vision, finance, business management, healthcare, education, energy, transportation, computer systems, and science, engineering, and art. Finally, we summarize briefly, discuss challenges and opportunities, and close with an epilogue.
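
To make the core elements above concrete, here is a minimal Python sketch (illustrative only, not drawn from the survey; the chain environment, reward scheme, and hyperparameters are assumptions) of tabular Q-learning with an epsilon-greedy policy, touching the value function, the policy, the reward signal, and the exploration-vs-exploitation choice.

```python
# Minimal illustrative sketch: tabular Q-learning on a small deterministic
# chain MDP (states 0..4; reaching state 4 yields reward 1 and ends the episode).
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
gamma, alpha, epsilon = 0.95, 0.1, 0.1
Q = np.zeros((n_states, n_actions))   # action-value function
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic chain dynamics; reward only on reaching the goal state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy policy: explore with probability epsilon, else exploit Q
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # one-step temporal-difference (Q-learning) update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next

# Greedy action per non-terminal state; expected to prefer action 1 (move right).
print(np.argmax(Q[:-1], axis=1))
```

Deep RL replaces the Q-table with a neural-network approximator, but the update rule and the exploration-exploitation dilemma shown here carry over.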