Educational Setting


A Massively Parallel Digital Learning Processor

Neural Information Processing Systems

We present a new, massively parallel architecture for accelerating machine learning algorithms, based on arrays of variable-resolution arithmetic vector processing elements (VPEs). Groups of VPEs operate in SIMD (single instruction, multiple data) mode, and each group is connected to an independent memory bank. Memory bandwidth therefore scales with the number of VPEs, and the main data flows are local, keeping power dissipation low. With 256 VPEs implemented on two FPGA (field-programmable gate array) chips, we obtain a sustained speed of 19 GMACS (billion multiply-accumulate operations per second) for SVM training, and 86 GMACS for SVM classification. This performance is more than an order of magnitude higher than that of any FPGA implementation reported so far. The speed on one FPGA is similar to the fastest speeds published on a graphics processor for the MNIST problem, despite the FPGA's clock rate being six times lower. High performance at low clock rates makes this massively parallel architecture particularly attractive for embedded applications, where low power dissipation is critical. Tests with convolutional neural networks and other learning algorithms are now under way.
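To make the throughput model concrete, the following minimal sketch (ours, not the authors' design; the 8-group x 32-lane split and bank depth are illustrative assumptions) emulates groups of VPEs executing one multiply-accumulate instruction per cycle against private memory banks:

```python
import numpy as np

# Illustrative emulation of G SIMD groups of VPEs, each with a private
# memory bank. All lanes execute the same multiply-accumulate (MAC)
# instruction in lockstep; the sizes below are assumptions, not the
# paper's actual configuration.
G, LANES, DEPTH = 8, 32, 1024                                 # 8 x 32 = 256 VPEs
banks_x = np.random.rand(G, LANES, DEPTH).astype(np.float32)  # per-group data banks
banks_w = np.random.rand(G, LANES, DEPTH).astype(np.float32)  # per-group weight banks
acc = np.zeros((G, LANES), dtype=np.float32)                  # per-VPE accumulators

for t in range(DEPTH):                               # one SIMD instruction per cycle
    acc += banks_x[:, :, t] * banks_w[:, :, t]       # 256 MACs in lockstep
```

Since every cycle issues G x LANES = 256 MACs fed from independent banks, sustained throughput scales roughly as 256 x f_clk, which is how a modest FPGA clock can still reach tens of GMACS.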


An interior-point stochastic approximation method and an L1-regularized delta rule

Neural Information Processing Systems

The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its far-reaching application, there is almost no work on applying stochastic approximation to learning problems with constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
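The flavor of such an on-line rule can be pictured with a generic L1-regularized delta rule. The sketch below uses a plain stochastic gradient step followed by soft-thresholding (a proximal L1 step); the paper instead derives its update from the interior-point stochastic approximation method, which this sketch does not reproduce:

```python
import numpy as np

def l1_delta_rule(X, y, lr=0.01, lam=0.001, epochs=5):
    """Online delta rule with L1 shrinkage via soft-thresholding.

    A generic illustration of combining on-line learning with L1
    feature selection; not the paper's interior-point update.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            err = y_t - w @ x_t                        # prediction error
            w += lr * err * x_t                        # delta-rule gradient step
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 prox
    return w
```

Weights whose magnitude stays below the shrinkage threshold are driven exactly to zero, which is what makes the rule perform feature selection.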


A survey of statistical network models

arXiv.org Machine Learning

Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, along with a host of more specialized professional network communities, have intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then we introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics.


Dynamics of Price Sensitivity and Market Structure in an Evolutionary Matching Model

AAAI Conferences

The relationship between equilibrium convergence to a uniform quality distribution and price is investigated in the Q-model, a self-organizing, evolutionary computational matching model of a fixed-price post-secondary higher education market created by Ortmann and Slobodyan (2006). The Q-model is first replicated with the price equal to 100% of its Ortmann and Slobodyan (2006) value. Varying the fixed price between 0% and 200% of that value reveals thresholds at which the Q-model reaches different market clustering configurations. Results indicate that the market structure is robust to prices below 100% and highly sensitive to prices above 100%.
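As an illustration of the experimental design only, the sketch below sweeps the fixed price from 0% to 200% of the baseline; `toy_q_model` is a hypothetical stand-in, and the actual Q-model dynamics are not reproduced here:

```python
import numpy as np

def toy_q_model(price_mult, seed=0):
    # Placeholder response, NOT the Q-model: a noisy sigmoid that rises
    # sharply above the 100% price, so the sweep runs end-to-end.
    rng = np.random.default_rng(seed)
    return 1 / (1 + np.exp(-8.0 * (price_mult - 1.0))) + 0.02 * rng.standard_normal()

for mult in np.linspace(0.0, 2.0, 11):               # 0%, 20%, ..., 200%
    print(f"price = {mult:4.0%}   clustering index = {toy_q_model(mult):.3f}")
```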


Interactive Learning Using Manifold Geometry

AAAI Conferences

We present an interactive learning method that enables a user to iteratively refine a regression model. The user examines the output of the model, visualized as the vertical axis of a 2D scatterplot, and provides corrections by repositioning individual data points to the correct output level. Each repositioned data point acts as a control point for altering the learned model, using the geometry underlying the data. We capture the underlying structure of the data as a manifold, on which we compute a set of basis functions as the foundation for learning. Our results show that manifold-based interactive learning achieves dramatic improvement over alternative approaches.
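One common way to realize this idea is sketched below, under assumed details (a Gaussian k-nearest-neighbour graph and the lowest eigenvectors of its Laplacian as basis functions); the paper's exact construction may differ:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_basis(X, k=10, n_basis=20):
    """Smooth basis functions on a data manifold via Laplacian eigenvectors."""
    D = cdist(X, X)
    W = np.exp(-(D / np.median(D)) ** 2)              # Gaussian affinities
    far = np.argsort(D, axis=1)[:, k + 1:]            # keep self + k neighbours
    for i, js in enumerate(far):
        W[i, js] = 0.0
    W = np.maximum(W, W.T)                            # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W                    # unnormalized Laplacian
    _, vecs = eigh(L)
    return vecs[:, :n_basis]                          # smoothest functions first

def fit_with_control_points(B, ctrl_idx, ctrl_vals, ridge=1e-6):
    """Least-squares fit of basis coefficients to the repositioned points."""
    A = B[ctrl_idx]
    coef = np.linalg.solve(A.T @ A + ridge * np.eye(B.shape[1]), A.T @ ctrl_vals)
    return B @ coef                                   # model output for all points
```

Each user correction adds an entry to `ctrl_idx`/`ctrl_vals`, and refitting propagates it smoothly along the manifold rather than only locally in input space.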


Funding Opportunities for Cognitive and Computer Scientists through the Institute of Education Sciences

AAAI Conferences

The Institute of Education Sciences (IES) provides funding opportunities for researchers to bring their knowledge of learning, cognitive science, and technology to bear on education practice. This panel describes opportunities available through the National Center for Education Research and the National Center for Special Education Research.


Computational Argument as a Diagnostic Tool: The Role of Reliability

AAAI Conferences

Formal and computational models of argument are ideally suited for education in ill-defined domains such as law, public policy, and science. Open-ended arguments play a central role in these areas, but students of these domains may not have been taught an explicit model of argument. Computational models of argument may therefore be well suited to act as argument tutors, guiding students in the formation and analysis of arguments according to an explicit model. To achieve this, it is important to establish that the models can be understood and evaluated reliably, an empirical question. In this paper we report ongoing work on the diagnostic utility of argument diagrams produced in the LARGO tutoring system.
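Reliability here is the kind of question answered with chance-corrected agreement statistics; a minimal Cohen's kappa computation (illustrative only, not tied to the LARGO data) looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two graders' labels."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / n ** 2            # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# e.g., two hypothetical graders labelling ten argument diagrams:
print(cohens_kappa("AABBCABCAB", "AABBCABCBB"))                # ~0.84
```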


How Primary Classes Visually Represent “While” Temporal Relations: A Preliminary Evaluation Study

AAAI Conferences

We are working on a temporal reasoning web tool for 7–11 year olds. The acquisition of temporal relations, and the ability to reason with them, depends on age and experience as well as on linguistic factors. We conducted a preliminary evaluation with 6–8 year olds in order to assess whether and how they would visually represent “while” temporal relations of a story. In this paper, we present and discuss our experimental evaluation, which paves the way for the visual representation of such relations in our e-tool.
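The target relation can be read formally as temporal containment (Allen's “during”); a minimal encoding follows, with the caveat that the e-tool's semantics for young children may be looser:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

def happens_while(a: Interval, b: Interval) -> bool:
    """True if event a takes place while event b is going on (containment)."""
    return b.start <= a.start and a.end <= b.end

lunch = Interval(12.0, 13.0)
rain = Interval(11.0, 14.0)
print(happens_while(lunch, rain))   # True: we had lunch while it rained
```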


MetaTutor: A MetaCognitive Tool for Enhancing Self-Regulated Learning

AAAI Conferences

Learning about complex and challenging science topics with advanced learning technologies requires students to regulate their learning. The deployment of key cognitive and metacognitive regulatory processes is critical to enhancing learning in open-ended learning environments such as hypermedia. In this paper, we propose the metaphor of Computers as MetaCognitive tools to characterize the complex nature of the learning context, self-regulatory processes, task conditions, and features of advanced learning technologies. We briefly outline the theoretical and conceptual assumptions of self-regulated learning (SRL) underlying MetaTutor, a hypermedia environment designed to train and foster students' SRL processes in biology. Lastly, we provide preliminary learning-outcome and SRL process data on the deployment of SRL processes during learning with MetaTutor.


A Platform-Independent Tracking and Monitoring Toolkit

AAAI Conferences

Issues concerning students on online learning paths, which e-Tutors must face in their day-to-day activity, more often than not fall into known pedagogical patterns, that is, problems and difficulties that have already occurred in the past and been dealt with. These pedagogical patterns belong to e-Tutors' know-how and experience, and resolving them is frequently a matter of activating routine processes or giving pre-factored answers; nevertheless, statistical data indicate that these issues consume a considerable slice of tutors' time. While a portion of the scientific community is still devoting much effort to developing artificial tutoring systems by deploying AI/MAS-enabled technologies, the solution being investigated by our team focuses on enhancing already-available, open-source LMSs by implementing a general-purpose tracking and monitoring toolkit able to support e-Tutors in recognizing and dealing with pedagogical patterns stored in a decentralised Knowledge Base. The system architecture is designed to accommodate multiple platforms (only one adapter interface needs to be written for each LMS; see the sketch below) and can perform real-time as well as scheduled data collection by means of Jade-based agents and schedulers. Information obtained from the processed data is then returned to the platform via web services and specific interfaces (an instant-messaging chatbot). The first deployed prototype is currently being trialled in adult higher-education learning paths; it tracks student activity and forum readings and writings, and offers a basic chat-based help interface. Our aim is to turn a standard LMS into a knowledge aggregator where information about its users, its contents, and the interactions between the two can be mined via Knowledge Services; the resulting data could then be used to refine user and group profiles, to monitor learners' deviation from the expected learning path, and ultimately to adjust the applied pedagogical model.
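A minimal sketch of the adapter idea follows; the toolkit itself is Jade/Java-based, and the method names, the Moodle example, and the Python rendering here are all illustrative assumptions rather than the toolkit's actual API:

```python
from abc import ABC, abstractmethod

class LMSAdapter(ABC):
    """One adapter per LMS; the tracking toolkit only ever talks to this."""

    @abstractmethod
    def fetch_activity(self, student_id: str) -> list[dict]:
        """Raw activity events: logins, page views, submissions."""

    @abstractmethod
    def fetch_forum_posts(self, course_id: str) -> list[dict]:
        """Forum readings and writings, for pedagogical-pattern matching."""

class MoodleAdapter(LMSAdapter):
    """Hypothetical concrete adapter for one platform."""

    def fetch_activity(self, student_id):
        return []   # e.g., query the LMS log tables or web-service API

    def fetch_forum_posts(self, course_id):
        return []
```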