A high-bias, low-variance introduction to Machine Learning for physicists

arXiv.org Machine Learning

Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. The purpose of this review is to provide an introduction to the core concepts and tools of machine learning in a manner easily understood and intuitive to physicists. The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, and generalization before moving on to more advanced topics in both supervised and unsupervised learning. Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines), and variational methods. Throughout, we emphasize the many natural connections between ML and statistical physics. A notable aspect of the review is the use of Python notebooks to introduce modern ML/statistical packages to readers using physics-inspired datasets (the Ising Model and Monte-Carlo simulations of supersymmetric decays of proton-proton collisions). We conclude with an extended outlook discussing possible uses of machine learning for furthering our understanding of the physical world as well as open problems in ML where physicists may be able to contribute. (Notebooks are available at https://physics.bu.edu/~pankajm/MLnotebooks.html )
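The bias-variance tradeoff and overfitting that open the review can be illustrated with a minimal polynomial-regression sketch. Everything below (the target function, the noise level, the chosen degrees) is an illustrative toy, not taken from the review's notebooks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: noisy samples of a smooth function.
f = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0.01, 0.99, 50)
y_train = f(x_train) + rng.normal(0, 0.3, x_train.shape)
y_test = f(x_test) + rng.normal(0, 0.3, x_test.shape)

def mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    predict = lambda x: np.polyval(coeffs, x)
    return (np.mean((predict(x_train) - y_train) ** 2),
            np.mean((predict(x_test) - y_test) ** 2))

# Training error always falls as the model gets more flexible (lower bias),
# while test error eventually rises again (higher variance).
for d in (1, 3, 15):
    train_err, test_err = mse(d)
    print(f"degree {d:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

A degree-1 fit underfits the sine (high bias), while a degree-15 fit chases the noise in only 20 points (high variance); a moderate degree balances the two, which is the tradeoff the review formalizes.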


Machine Learning Towards Intelligent Systems: Applications, Challenges, and Opportunities

arXiv.org Artificial Intelligence

The emergence of, and continued reliance on, the Internet and related technologies have resulted in the generation of large amounts of data that can be made available for analysis. However, humans do not possess the cognitive capabilities to understand such large amounts of data. Machine learning (ML) provides a mechanism for humans to process large amounts of data, gain insights about the behavior of the data, and make more informed decisions based on the resulting analysis. ML has applications in various fields. This review focuses on some of these fields and applications, including education, healthcare, network security, banking and finance, and social media. Each of these fields faces multiple unique challenges; ML can provide solutions to these challenges, as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields and presents some of the previous literature that tackled them. Moreover, it suggests several research opportunities that benefit from the use of ML to address these challenges.


What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning

arXiv.org Machine Learning

Recent years have witnessed the rising popularity of Natural Language Processing (NLP) and related fields such as Artificial Intelligence (AI) and Machine Learning (ML). Many online courses and resources are available even for those without a strong background in the field. Often the student is curious about a specific topic but does not quite know where to begin studying. To answer the question of "what should one learn first," we apply an embedding-based method to learn prerequisite relations for course concepts in the domain of NLP. We introduce LectureBank, a publicly available dataset containing 1,352 English lecture files collected from university courses, each classified according to an existing taxonomy, together with 208 manually labeled prerequisite relations among topics. The dataset will be useful for educational purposes such as lecture preparation and organization, as well as for applications such as reading list generation. Additionally, we experiment with neural graph-based networks and non-neural classifiers to learn these prerequisite relations from our dataset.
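A rough sketch of the embedding-based idea: represent each topic as a vector and train a classifier on ordered topic pairs to predict whether the first is a prerequisite of the second. Everything below (the topic names, the random stand-in embeddings, the labeled pairs, the pair featurization) is hypothetical and not the paper's actual model or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical topic embeddings; in practice these would come from the
# lecture text (e.g., averaged word embeddings), not random vectors.
dim = 16
topics = {name: rng.normal(size=dim) for name in
          ["probability", "language-models", "neural-nets", "attention"]}

# Toy labeled pairs: label 1 means "A is a prerequisite of B".
pairs = [("probability", "language-models", 1),
         ("language-models", "attention", 1),
         ("neural-nets", "attention", 1),
         ("attention", "probability", 0),
         ("attention", "neural-nets", 0),
         ("language-models", "probability", 0)]

def featurize(a, b):
    """Pair feature: both embeddings plus their difference, so that the
    direction of the relation (A before B vs. B before A) matters."""
    va, vb = topics[a], topics[b]
    return np.concatenate([va, vb, va - vb])

X = np.stack([featurize(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The asymmetric featurization is the key design point: cosine similarity alone is symmetric and cannot distinguish "A before B" from "B before A", which is exactly what a prerequisite chain requires.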


Educational Content Linking for Enhancing Learning Need Remediation in MOOCs

arXiv.org Artificial Intelligence

Since their introduction in 2011, over 4,000 MOOCs on various subjects have appeared on the Web, serving over 35 million learners. MOOCs have shown the ability to democratize knowledge dissemination and bring the best education in the world to every learner. However, the physical distances between participants, the size of the learner population, and the heterogeneity of the learners' backgrounds make it extremely difficult for instructors to interact with learners in a timely manner, which adversely affects the learning experience. To address these challenges, in this thesis we propose a framework: educational content linking. By linking and organizing pieces of learning content scattered across various course materials into an easily accessible structure, we hypothesize that this framework can give learners guidance and improve content navigation. Since most instruction and knowledge acquisition in MOOCs takes place while learners are surveying course materials, better content navigation may help learners find the supporting information they need to resolve confusion, and thus improve learning outcomes and experience. To support our conjecture, we present end-to-end studies organized around two research questions: 1) can manually generated linking improve learning? 2) can the linking be generated with machine learning methods? For the first question, we built an interface that presents learning materials and simultaneously visualizes the links among them. We found that the interface enables users to search for desired course materials more efficiently and to retain concepts more readily. For the second question, we propose an automatic content-linking algorithm based on conditional random fields. We demonstrate that automatically generated linking still leads to better learning, although the improvement over the unlinked interface is smaller than with manual linking.
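The thesis's linking model is a conditional random field; as a much simpler illustrative stand-in, the core task can be approximated by nearest-neighbor matching in a shared TF-IDF space: embed lecture snippets and textbook sections together, then link each snippet to its most similar section. The snippets below are invented examples, and this baseline is not the thesis's actual method:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical course materials: lecture snippets and textbook sections.
lectures = [
    "gradient descent minimizes a loss function by following its gradient",
    "a confusion matrix summarizes classification errors per class",
]
textbook = [
    "Optimization: gradient descent and learning rates for minimizing loss.",
    "Evaluation: precision, recall, and the confusion matrix.",
    "History: early perceptrons and symbolic AI.",
]

# Fit one TF-IDF vocabulary over all passages so both collections live in
# the same vector space, then link each lecture to its best-matching section.
vec = TfidfVectorizer().fit(lectures + textbook)
sim = cosine_similarity(vec.transform(lectures), vec.transform(textbook))
links = sim.argmax(axis=1)
for i, j in enumerate(links):
    print(f"lecture {i} -> textbook section {j} (similarity {sim[i, j]:.2f})")
```

A CRF improves on this baseline by modeling dependencies between adjacent linking decisions (consecutive slides tend to link to nearby sections), rather than matching each passage independently.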


Patterns, predictions, and actions: A story about machine learning

arXiv.org Machine Learning

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.