The mission of AI100 is to launch a study every five years, over the course of a century, to better track and anticipate how artificial intelligence propagates through society, and how it shapes different aspects of our lives. This IJCAI session brought together some of the people involved in the AI100 initiative to discuss their efforts and the direction of the project. The goals of the AI100 are "to support a longitudinal study of AI advances on people and society, centering on periodic studies of developments, trends, futures, and potential disruptions associated with the developments in machine intelligence, and formulating assessments, recommendations and guidance on proactive efforts". Working on the AI100 project are a standing committee and a study panel. The first study panel report was released in 2016.
We address the problem of building theoretical models that help elucidate the function of the visual brain at computational/algorithmic and structural/mechanistic levels. We seek to understand how the receptive fields and topographic maps found in visual cortical areas relate to underlying computational desiderata. We view the development of sensory systems from the popular perspective of probability density estimation; this is motivated by the notion that an effective internal representational scheme is likely to reflect the statistical structure of the environment in which an organism lives. We apply biologically based constraints to elements of the model. The thesis begins by surveying the relevant literature from the fields of neurobiology, theoretical neuroscience, and machine learning. After this review we present our main theoretical and algorithmic developments: we propose a class of probabilistic models, which we refer to as "energy-based models", and show equivalences between this framework and various other types of probabilistic model such as Markov random fields and factor graphs; we also develop and discuss approximate algorithms for performing maximum likelihood learning and inference in our energy-based models. The rest of the thesis is then concerned with exploring specific instantiations of such models. By performing constrained optimisation of model parameters to maximise the likelihood of appropriate, naturalistic datasets we are able to qualitatively reproduce many of the receptive field and map properties found in vivo, whilst simultaneously learning about statistical regularities in the data.
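The core idea behind energy-based models can be sketched in a few lines: a scalar energy function assigns low energy to plausible configurations, and the model's unnormalized probability is exp(-E(x)). The quadratic energy, parameter names, and example values below are illustrative assumptions, not the thesis's actual model; a quadratic energy with positive-definite W is a simple Gaussian Markov random field, which illustrates the equivalence mentioned in the abstract.

```python
import numpy as np

def energy(x, W, b):
    """Quadratic energy E(x) = 0.5 * x^T W x - b^T x (W, b are illustrative parameters)."""
    return 0.5 * x @ W @ x - b @ x

def unnormalized_prob(x, W, b):
    """An energy-based model assigns p(x) proportional to exp(-E(x))."""
    return np.exp(-energy(x, W, b))

# For positive-definite W this is an (unnormalized) Gaussian Markov random
# field: zero entries in W encode conditional independences between variables.
W = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.zeros(2)

x_low = np.zeros(2)            # the minimum-energy state for b = 0
x_high = np.array([3.0, 3.0])  # a high-energy state

# Lower energy means higher (unnormalized) probability.
assert unnormalized_prob(x_low, W, b) > unnormalized_prob(x_high, W, b)
```

Maximum likelihood learning in such models requires the gradient of the log partition function, which is intractable in general; this is why the thesis develops approximate learning and inference algorithms.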
Hierarchical planning has attracted renewed interest in the last couple of years. As a consequence, the time was right to establish a workshop devoted entirely to hierarchical planning, an assessment shared by many supporters. In this paper, we report on the first ICAPS workshop on Hierarchical Planning, held in Delft, The Netherlands, in 2018, as well as on the second workshop, held in Berkeley, CA, USA, in 2019. Hierarchical planning approaches incorporate hierarchies into the domain model. In the most common form, the hierarchy is defined among tasks, leading to the distinction between primitive and abstract tasks.
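The distinction between primitive and abstract tasks can be illustrated with a toy decomposition procedure. All task and method names below are hypothetical, and this sketch ignores preconditions, effects, and method choice, which real hierarchical planners must handle; it only shows how abstract tasks are refined by methods into subtasks until only primitive tasks remain.

```python
# Methods map each abstract task to a list of decompositions (here only the
# first is ever used); tasks not listed as primitive must be decomposed.
methods = {
    "deliver": [["pickup", "move", "drop"]],  # one decomposition method
    "move":    [["drive"]],
}
primitive = {"pickup", "drive", "drop"}

def decompose(task):
    """Recursively refine an abstract task into a sequence of primitive tasks."""
    if task in primitive:
        return [task]
    subtasks = methods[task][0]  # naively pick the first method
    plan = []
    for t in subtasks:
        plan.extend(decompose(t))
    return plan

print(decompose("deliver"))  # ['pickup', 'drive', 'drop']
```

In a full hierarchical planner, choosing among alternative methods, ordering subtasks, and checking executability of the resulting primitive plan are where the actual search effort lies.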
This report documents ideas for improving the field of machine learning, which arose from discussions at the ML Retrospectives workshop at NeurIPS 2019. The goal of the report is to disseminate these ideas more broadly, and in turn encourage continuing discussion about how the field could improve along these axes. We focus on topics that were most discussed at the workshop: incentives for encouraging alternate forms of scholarship, restructuring the review process, participation from academia and industry, and how we might better train computer scientists as scientists. Videos from the workshop can be accessed at Lowe et al. (2019).
In this survey, we provide a detailed review of recent advances in the recovery of continuous domain multidimensional signals from their few nonuniform (multichannel) measurements using structured low-rank matrix completion formulation. This framework is centered on the fundamental duality between the compactness (e.g., sparsity) of the continuous signal and the rank of a structured matrix, whose entries are functions of the signal. This property enables the reformulation of the signal recovery as a low-rank structured matrix completion, which comes with performance guarantees. We will also review fast algorithms that are comparable in complexity to current compressed sensing methods, which enable the application of the framework to large-scale magnetic resonance (MR) recovery problems. The remarkable flexibility of the formulation can be used to exploit signal properties that are difficult to capture by current sparse and low-rank optimization strategies. We demonstrate the utility of the framework in a wide range of MR imaging (MRI) applications, including highly accelerated imaging, calibration-free acquisition, MR artifact correction, and ungated dynamic MRI. The slow nature of signal acquisition in MRI, where the image is formed from a sequence of Fourier samples, often restricts the achievable spatial and temporal resolution in multidimensional static and dynamic imaging applications. Discrete compressed sensing (CS) methods provided a major breakthrough in accelerating MR signal acquisition by reducing the sampling burden. As described in an introductory article in this special issue, these algorithms exploited the sparsity of the discrete signal in a transform domain to recover the images from a few measurements.
In this paper, we review a continuous domain extension of CS using a structured low-rank (SLR) framework for the recovery of an image or a series of images from a few measurements using various compactness assumptions. The general strategy of the SLR framework starts with defining a lifting operation to construct a structured matrix, whose entries are functions of the signal samples. The SLR algorithms exploit the dual relationship between the compactness properties of the signal (e.g., sparsity) and the rank of the lifted structured matrix. This dual relationship allows recovery of the signal from a few samples in the measurement domain as an SLR optimization problem. MJ and MM are with the University of Iowa, Iowa City, IA 52242. JCY is with the Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea.
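The duality between signal compactness and structured low rank can be seen in its simplest one-dimensional form: a signal that is a sum of k complex exponentials lifts to a Hankel matrix of rank k. The sketch below is a minimal illustration of this classical property, not the survey's multidimensional formulation; the frequencies, window size, and tolerance are arbitrary choices for the example.

```python
import numpy as np

def hankel_lift(x, r):
    """Lift a 1-D signal into an r-row Hankel matrix H[i, j] = x[i + j]."""
    n = len(x)
    return np.array([x[i:i + n - r + 1] for i in range(r)])

# A sum of k = 2 complex exponentials (a "compact" continuous-domain signal,
# sampled uniformly here for simplicity).
n, k = 64, 2
t = np.arange(n)
x = np.exp(2j * np.pi * 0.11 * t) + 0.5 * np.exp(2j * np.pi * 0.27 * t)

H = hankel_lift(x, 8)                       # 8 x 57 Hankel matrix
rank = np.linalg.matrix_rank(H, tol=1e-8)   # numerical rank equals k
print(rank)  # 2
```

SLR recovery methods turn this observation around: given only a few (possibly nonuniform) samples, they complete the lifted matrix subject to its low-rank structure, which implicitly recovers the underlying continuous-domain signal.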
A constraint satisfaction problem (CSP) is a computational problem where the input consists of a finite set of variables and a finite set of constraints, and where the task is to decide whether there exists a satisfying assignment of values to the variables. Depending on the type of constraints that we allow in the input, a CSP might be tractable, or computationally hard. In recent years, general criteria have been discovered that imply that a CSP is polynomial-time tractable, or that it is NP-hard. Finite-domain CSPs have become a major common research focus of graph theory, artificial intelligence, and finite model theory. It turned out that the key questions for complexity classification of CSPs are closely linked to central questions in universal algebra. This thesis studies CSPs where the variables can take values from an infinite domain. This generalization dramatically enhances the range of computational problems that can be modeled as a CSP. Many problems from areas that have so far seen no interaction with constraint satisfaction theory can be formulated using infinite domains, e.g., problems from temporal and spatial reasoning, phylogenetic reconstruction, and operations research. It turns out that the universal-algebraic approach can also be applied to study large classes of infinite-domain CSPs, yielding elegant complexity classification results. A new tool in this thesis that becomes relevant particularly for infinite domains is Ramsey theory. We demonstrate the feasibility of our approach with two complete complexity classification results: one on CSPs in temporal reasoning, the other on a generalization of Schaefer's theorem for propositional logic to logic over graphs. We also study the limits of complexity classification, and present classes of computational problems that provably do not exhibit a complexity dichotomy into hard and easy problems.
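The finite-domain CSP definition above can be made concrete with a minimal backtracking solver. This is a generic textbook sketch, not a method from the thesis; the variable names, constraint representation, and the graph-colouring instance are all illustrative assumptions.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search for a finite-domain CSP.

    constraints: list of (scope, predicate) pairs; a predicate is checked
    only once every variable in its scope has been assigned.
    Returns a satisfying assignment (dict) or None.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(pred(*(assignment[v] for v in scope))
                         for scope, pred in constraints
                         if all(v in assignment for v in scope))
        if consistent:
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Illustrative instance: proper colouring of a triangle, i.e. the CSP
# formulation of graph 3-colourability restricted to one clique.
tri = [(("a", "b"), lambda x, y: x != y),
       (("b", "c"), lambda x, y: x != y),
       (("a", "c"), lambda x, y: x != y)]
sol = solve(["a", "b", "c"], {v: [0, 1, 2] for v in "abc"}, tri)
print(sol)  # {'a': 0, 'b': 1, 'c': 2}
```

With only two colours the same instance is unsatisfiable and the solver returns None, which matches the decision-problem reading of the CSP: graph 3-colourability is a classic NP-hard finite-domain CSP, while 2-colourability is tractable.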
Albrecht, Stefano (The University of Texas at Austin) | Bouchard, Bruno (Université du Québec à Chicoutimi) | Brownstein, John S. (Harvard University) | Buckeridge, David L. (McGill University) | Caragea, Cornelia (University of North Texas) | Carter, Kevin M. (MIT Lincoln Laboratory) | Darwiche, Adnan (University of California, Los Angeles) | Fortuna, Blaz (Bloomberg L.P. and Jozef Stefan Institute) | Francillette, Yannick (Université du Québec à Chicoutimi) | Gaboury, Sébastien (Université du Québec à Chicoutimi) | Giles, C. Lee (Pennsylvania State University) | Grobelnik, Marko (Jozef Stefan Institute) | Hruschka, Estevam R. (Federal University of São Carlos) | Kephart, Jeffrey O. (IBM Thomas J. Watson Research Center) | Kordjamshidi, Parisa (University of Illinois at Urbana-Champaign) | Lisy, Viliam (University of Alberta) | Magazzeni, Daniele (King's College London) | Marques-Silva, Joao (University of Lisbon) | Marquis, Pierre (Université d'Artois) | Martinez, David (MIT Lincoln Laboratory) | Michalowski, Martin (Adventium Labs) | Shaban-Nejad, Arash (University of California, Berkeley) | Noorian, Zeinab (Ryerson University) | Pontelli, Enrico (New Mexico State University) | Rogers, Alex (University of Oxford) | Rosenthal, Stephanie (Carnegie Mellon University) | Roth, Dan (University of Illinois at Urbana-Champaign) | Sinha, Arunesh (University of Southern California) | Streilein, William (MIT Lincoln Laboratory) | Thiebaux, Sylvie (The Australian National University) | Tran, Son Cao (New Mexico State University) | Wallace, Byron C. (University of Texas at Austin) | Walsh, Toby (University of New South Wales and Data61) | Witbrock, Michael (Lucid AI) | Zhang, Jie (Nanyang Technological University)
The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) was held at the beginning of the conference, February 12-13, 2016. Workshop participants met and discussed issues with a selected focus — providing an informal setting for active exchange among researchers, developers and users on topics of current interest. To foster interaction and exchange of ideas, the workshops were kept small, with 25-65 participants. Attendance was sometimes limited to active participants only, but most workshops also allowed general registration by other interested individuals. The AAAI-16 Workshops were an excellent forum for exploring emerging approaches and task areas, for bridging the gaps between AI and other fields or between subfields of AI, for elucidating the results of exploratory research, or for critiquing existing approaches. The fifteen workshops held at AAAI-16 were Artificial Intelligence Applied to Assistive Technologies and Smart Environments (WS-16-01), AI, Ethics, and Society (WS-16-02), Artificial Intelligence for Cyber Security (WS-16-03), Artificial Intelligence for Smart Grids and Smart Buildings (WS-16-04), Beyond NP (WS-16-05), Computer Poker and Imperfect Information Games (WS-16-06), Declarative Learning Based Programming (WS-16-07), Expanding the Boundaries of Health Informatics Using AI (WS-16-08), Incentives and Trust in Electronic Communities (WS-16-09), Knowledge Extraction from Text (WS-16-10), Multiagent Interaction without Prior Coordination (WS-16-11), Planning for Hybrid Systems (WS-16-12), Scholarly Big Data: AI Perspectives, Challenges, and Ideas (WS-16-13), Symbiotic Cognitive Systems (WS-16-14), and World Wide Web and Population Health Intelligence (WS-16-15).
These articles were selected for their description of AI technologies that are either in practical use or close to it. Five of the articles describe deployed application case studies. These articles present fielded AI applications distinguished by their innovative use of AI technology. One article describes an emerging application, presenting an area where AI technology can have a practical impact. Another article describes a challenge problem; it presents to the AI community at large a problem where AI could make a significant difference.
Part Two of the special issue of AI Magazine presents articles on some of the most interesting projects at the intersection of AI and Education. Included are articles on integrated systems such as virtual humans, an intelligent textbook, and a game-based learning environment, as well as technology-focused components such as student models and data mining. The issue concludes with an article summarizing the contemporary and emerging challenges at the intersection of AI and education.