

Towards open and expandable cognitive AI architectures for large-scale multi-agent human-robot collaborative learning

arXiv.org Artificial Intelligence

Learning from Demonstration (LfD) constitutes one of the most robust methodologies for constructing efficient cognitive robotic systems. Current key challenges in the field include multi-agent learning and long-term autonomy. Towards this direction, a novel cognitive architecture for multi-agent LfD robotic learning is introduced in this paper, aiming to enable the reliable deployment of open, scalable and expandable robotic systems in large-scale and complex environments. In particular, the designed architecture capitalizes on recent advances in Artificial Intelligence (AI), and especially Deep Learning (DL), by establishing a Federated Learning (FL)-based framework that realizes a multi-human multi-robot collaborative learning environment. The fundamental conceptualization relies on employing multiple AI-empowered cognitive processes (implementing various robotic tasks) that operate at the edge nodes of a network of robotic platforms, while global AI models (underpinning the aforementioned robotic tasks) are collectively created and shared across the network by combining information from a large number of human-robot interaction instances. Pivotal novelties of the designed cognitive architecture include: a) it introduces a new FL-based formalism that extends the conventional LfD learning paradigm to support large-scale multi-agent operational settings; b) it extends previous FL-based self-learning robotic schemes to incorporate the human in the learning loop; and c) it consolidates the fundamental principles of FL with additional sophisticated AI-enabled learning methodologies for modelling the multi-level inter-dependencies among the robotic tasks. The applicability of the proposed framework is explained using an example of a real-world industrial case study for agile production-based Critical Raw Materials (CRM) recovery.
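The FL mechanism at the heart of such an architecture can be illustrated with a minimal FedAvg-style sketch (an assumption for illustration only; the paper's formalism is more elaborate): each edge node contributes a local model, and the shared global model is their average, weighted by how many local human-robot interaction instances each node observed.

```python
# Minimal FedAvg-style aggregation sketch (illustrative assumption, not the
# paper's exact scheme). Each robot holds a local model parameter vector;
# the server averages them, weighted by the number of local demonstrations.

def federated_average(local_models, num_samples):
    """Sample-weighted average of local model parameter vectors."""
    total = sum(num_samples)
    dim = len(local_models[0])
    global_model = [0.0] * dim
    for model, n in zip(local_models, num_samples):
        for i, w in enumerate(model):
            global_model[i] += (n / total) * w
    return global_model

# Two robots whose local models were trained on 10 and 30 demonstrations:
print(federated_average([[1.0, 2.0], [3.0, 4.0]], [10, 30]))  # → [2.5, 3.5]
```

The node with more interaction instances pulls the global model proportionally closer to its local parameters, which is the basic mechanism by which "information from a large number of human-robot interaction instances" is combined.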


Active Learning: Problem Settings and Recent Developments

arXiv.org Machine Learning

Supervised learning is a typical machine learning setting in which the relationship between input and output is approximated from given sets of input and output data. The accuracy of the approximation can be increased by using more input and output data to build the model; however, obtaining the appropriate output for a given input can be costly. A classic example is the crossbreeding of plants. The environmental conditions (e.g., average monthly temperature, type and amount of fertilizer used, watering conditions, weather) are the input, and the specific properties of the crops are the output. In this case, the controllable variables are those related to the fertilizer and watering conditions, but it would take several months to years to perform experiments under various conditions and determine the optimal fertilizer composition and watering conditions.
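The core idea of active learning in this costly-experiment setting can be sketched as follows (a hedged illustration with hypothetical names, not a method from the survey): rather than running every experiment, query the candidate input whose predicted outcome is most uncertain, here measured as disagreement (variance) among an ensemble of predictors.

```python
# Hedged sketch of uncertainty-based experiment selection. The ensemble
# predictions are given directly as data; in practice they would come from
# several trained models.
from statistics import pvariance

def most_uncertain(candidates, ensemble_predictions):
    """Return the candidate input whose ensemble predictions disagree most."""
    scores = [pvariance(preds) for preds in ensemble_predictions]
    return candidates[scores.index(max(scores))]

# Three hypothetical fertilizer settings; three models predict yield for each:
candidates = ["low-N", "mid-N", "high-N"]
preds = [[5.0, 5.1, 4.9], [6.0, 7.5, 5.0], [8.0, 8.1, 8.0]]
print(most_uncertain(candidates, preds))  # → "mid-N"
```

The "mid-N" setting is chosen because the models disagree most about it, so running that experiment is expected to be most informative per unit of cost.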


AI World School unveils online AI learning platform for school students

#artificialintelligence

AI World School (AIWS) is launching its remote self-learning platform providing AI and Coding technology education to students from ages 7 to 18. AIWS announces the global launch of its self-paced online learning platform providing AI learning experiences to students at home, to homeschoolers and in K12 schools. The world is changing faster than ever amid these testing times of COVID. Children's safety matters more than ever, and children will need to face these challenges and navigate them successfully in the future. To meet these challenges, AIWS believes that innovation, creativity and STEM skills will be essential to prepare a child to become future-ready. AI and Coding provide a competitive advantage when applying to colleges, internships, and for career opportunities.


Generalized Chernoff Sampling for Active Learning and Structured Bandit Algorithms

arXiv.org Machine Learning

Active learning and structured stochastic bandit problems are intimately related to the classical problem of sequential experimental design. This paper studies active learning and best-arm identification in structured bandit settings from the viewpoint of active sequential hypothesis testing, a framework initiated by Chernoff (1959). We first characterize the sample complexity of Chernoff's original procedure by uncovering terms that reduce in significance as the allowed error probability $\delta \rightarrow 0$, but are nevertheless relevant at any fixed value of $\delta > 0$. While initially proposed for testing among finitely many hypotheses, we obtain the analogue of Chernoff sampling for the case when the hypotheses belong to a compact space. This makes it applicable to active learning and structured bandit problems, where the unknown parameter specifying the arm means is often assumed to be an element of Euclidean space. Empirically, we demonstrate the potential of our proposed approach for active learning of neural network models and in the linear bandit setting, where we observe that our general-purpose approach compares favorably to state-of-the-art methods.
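The selection principle behind Chernoff-style sampling can be illustrated in a toy finite-hypothesis, Bernoulli-outcome setting (an assumption for illustration; the paper's contribution is precisely extending beyond this finite case to compact hypothesis spaces): after identifying the currently most likely hypothesis, sample the action whose outcome distribution best separates it from the closest rival, measured by KL divergence.

```python
# Hedged toy sketch of Chernoff-style action selection (finite hypotheses,
# Bernoulli outcomes; names and setup are illustrative assumptions).
from math import log

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), p, q in (0, 1)."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def chernoff_action(outcome_probs, leader, rival):
    """outcome_probs[h][a] = P(success | hypothesis h, action a).
    Pick the action that best discriminates leader from rival."""
    n_actions = len(outcome_probs[leader])
    return max(range(n_actions),
               key=lambda a: kl_bernoulli(outcome_probs[leader][a],
                                          outcome_probs[rival][a]))

# Action 0 looks identical under both hypotheses, action 1 separates them:
probs = [[0.5, 0.9], [0.5, 0.1]]
print(chernoff_action(probs, leader=0, rival=1))  # → 1
```

Action 1 is selected because its outcomes are maximally informative about which hypothesis is true, while action 0 would yield no discriminating evidence at all.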


Active Learning for Deep Gaussian Process Surrogates

arXiv.org Machine Learning

Deep Gaussian processes (DGPs) are increasingly popular as predictive models in machine learning (ML) for their non-stationary flexibility and ability to cope with abrupt regime changes in training data. Here we explore DGPs as surrogates for computer simulation experiments whose response surfaces exhibit similar characteristics. In particular, we transport a DGP's automatic warping of the input space and full uncertainty quantification (UQ), via a novel elliptical slice sampling (ESS) Bayesian posterior inferential scheme, through to active learning (AL) strategies that distribute runs non-uniformly in the input space -- something an ordinary (stationary) GP could not do. Building up the design sequentially in this way allows smaller training sets, limiting both expensive evaluation of the simulator code and mitigating cubic costs of DGP inference. When training data sizes are kept small through careful acquisition, and with parsimonious layout of latent layers, the framework can be both effective and computationally tractable. Our methods are illustrated on simulation data and two real computer experiments of varying input dimensionality. We provide an open source implementation in the "deepgp" package on CRAN.


Active Hierarchical Imitation and Reinforcement Learning

arXiv.org Artificial Intelligence

Humans can leverage hierarchical structures to split a task into sub-tasks and solve problems efficiently. Imitation learning, reinforcement learning, and their combination with hierarchical structures have proven to be efficient ways for robots to learn complex tasks with sparse rewards. However, previous work on hierarchical imitation and reinforcement learning has been tested in relatively simple 2D games with discrete action spaces. Furthermore, many imitation learning works focus on improving policies learned from expert policies that are hard-coded or trained by reinforcement learning algorithms, rather than provided by human experts. In human-robot interaction scenarios, humans can be required to provide demonstrations to teach the robot, so it is crucial to improve learning efficiency to reduce expert effort, and to understand humans' perception of the learning/training process. In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we have developed. We performed an experiment in which five participants were asked to guide a randomly initialized agent to a random goal in a maze. Our experimental results showed that using DAgger and a reward-based active learning method achieves better performance while saving human effort, both physically and mentally, during the training process.
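The DAgger loop mentioned above can be sketched in a toy setting (an illustrative assumption, not the authors' maze experiment or framework): roll out the current learner, ask the expert to label the states the learner actually visits, aggregate those labels with all earlier data, and retrain.

```python
# Hedged sketch of the DAgger (Dataset Aggregation) loop on a toy 1D corridor.
import random

def dagger(expert, rollout, train, n_iters=3):
    data = []                                   # aggregated (state, action)
    policy = train(data)                        # initial (random) policy
    for _ in range(n_iters):
        states = rollout(policy)                # states the learner visits
        data += [(s, expert(s)) for s in states]  # expert relabels them
        policy = train(data)                    # retrain on the aggregate
    return policy

# Toy setup: states 0..9, goal at 7; the expert always steps toward the goal.
GOAL = 7
expert = lambda s: 1 if s < GOAL else -1

def train(data):
    table = {s: a for s, a in data}             # tabular policy; last label wins
    return lambda s: table.get(s, random.choice([-1, 1]))

def rollout(policy, start=0, horizon=10):
    states, s = [], start
    for _ in range(horizon):
        states.append(s)
        s = max(0, min(9, s + policy(s)))
    return states

policy = dagger(expert, rollout, train)
```

Because the expert labels the learner's own state distribution rather than only its own demonstrations, DAgger avoids the compounding-error problem of plain behavioral cloning; the abstract's finding is that this also economizes on human labeling effort.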


Attentional Biased Stochastic Gradient for Imbalanced Classification

arXiv.org Machine Learning

In this paper~\footnote{The original title is "Momentum SGD with Robust Weighting For Imbalanced Classification"}, we present a simple yet effective method (ABSGD) for addressing the data imbalance issue in deep learning. Our method is a simple modification to momentum SGD in which we leverage an attentional mechanism to assign an individual importance weight to each gradient in the mini-batch. Unlike existing individual weighting methods that learn the individual weights by meta-learning on a separate balanced validation set, our weighting scheme is self-adaptive and is grounded in distributionally robust optimization. The weight of a sampled datum is proportional to the exponential of a scaled loss value of the datum, where the scaling factor is interpreted as the regularization parameter in the framework of information-regularized distributionally robust optimization. We employ a step damping strategy for the scaling factor to balance between the learning of the feature extraction layers and the learning of the classifier layer. Compared with existing meta-learning methods that require three backward propagations to compute mini-batch stochastic gradients at three different points at each iteration, our method is more efficient, with only one backward propagation per iteration as in standard deep learning methods. Compared with existing class-level weighting schemes, our method can be applied to online learning without any knowledge of class priors, while enjoying a further performance boost in offline learning when combined with existing class-level weighting schemes. Our empirical studies on several benchmark datasets also demonstrate the effectiveness of the proposed method.
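The weighting rule described above can be sketched directly from the abstract (a minimal sketch; the exact normalization and notation are assumptions): each example's weight is proportional to exp(loss / lam), where lam plays the role of the regularization parameter, and weights are normalized over the mini-batch.

```python
# Hedged sketch of the ABSGD-style attentional weighting within a mini-batch.
from math import exp

def absgd_weights(losses, lam):
    """Per-example weights proportional to exp(loss / lam), normalized."""
    raw = [exp(l / lam) for l in losses]
    z = sum(raw)
    return [w / z for w in raw]

# Hard (high-loss) examples, e.g. from a minority class, get larger weights:
w = absgd_weights([0.1, 0.1, 2.0], lam=1.0)
```

Note that as lam grows large all weights approach the uniform 1/n, recovering plain SGD, which is consistent with the paper's "step damping" of the scaling factor to shift gradually between robust and standard averaging.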


Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable

arXiv.org Artificial Intelligence

The history of AI has included several "waves" of ideas. The first wave, from the mid-1950s to the 1980s, focused on logic and symbolic hand-encoded representations of knowledge, the foundations of so-called "expert systems". The second wave, starting in the 1990s, focused on statistics and machine learning, in which, instead of hand-programming rules for behavior, programmers constructed "statistical learning algorithms" that could be trained on large datasets. In the most recent wave, research in AI has largely focused on deep (i.e., many-layered) neural networks, which are loosely inspired by the brain and trained by "deep learning" methods. However, while deep neural networks have led to many successes and new capabilities in computer vision, speech recognition, language processing, game-playing, and robotics, their potential for broad application remains limited by several factors. A concerning limitation is that even the most successful of today's AI systems suffer from brittleness: they can fail in unexpected ways when faced with situations that differ sufficiently from the ones they were trained on. This lack of robustness also appears in the vulnerability of AI systems to adversarial attacks, in which an adversary can subtly manipulate data in a way that guarantees a specific wrong answer or action from an AI system. AI systems can also absorb biases (based on gender, race, or other factors) from their training data and further magnify these biases in their subsequent decision-making. Taken together, these limitations have prevented AI systems such as automatic medical diagnosis or autonomous vehicles from being sufficiently trustworthy for wide deployment. The massive proliferation of AI across society will require radically new ideas to yield technology that will not sacrifice our productivity, our quality of life, or our values.


Forecasting Costa Rican inflation with machine learning methods

#artificialintelligence

We present a first assessment of the predictive ability of machine learning methods for inflation forecasting in Costa Rica. We compute forecasts using two variants of k-nearest neighbors, random forests, extreme gradient boosting and a long short-term memory (LSTM) network. We evaluate their properties according to criteria from the optimal forecast literature, and we compare their performance with that of an average of univariate inflation forecasts currently used by the Central Bank of Costa Rica. We find that the best-performing forecasts are those of LSTM, univariate KNN and, to a lesser extent, random forests. Furthermore, a combination performs better than the individual forecasts included in it and the average of the univariate forecasts.
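The univariate KNN forecast named among the best performers can be sketched as follows (an assumed textbook form, not necessarily the central bank's exact variant): find the k historical windows most similar to the most recent one, and average the observations that followed them.

```python
# Hedged sketch of univariate k-nearest-neighbours time-series forecasting.

def knn_forecast(series, window=2, k=2):
    """Forecast the next value by averaging the successors of the k
    historical windows closest (in squared distance) to the latest window."""
    latest = series[-window:]
    dist = lambda w: sum((a - b) ** 2 for a, b in zip(w, latest))
    # Each candidate window is paired with the observation that followed it:
    pairs = [(series[i:i + window], series[i + window])
             for i in range(len(series) - window)]
    pairs.sort(key=lambda p: dist(p[0]))
    return sum(nxt for _, nxt in pairs[:k]) / k

# A perfectly seasonal toy series: after (1, 2) the next value was always 3.
print(knn_forecast([1, 2, 3, 1, 2, 3, 1, 2]))  # → 3.0
```

The forecast-combination result in the abstract then amounts to averaging this KNN forecast with the LSTM and random-forest forecasts, which tends to reduce the variance of any single model's errors.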


Cost-Based Budget Active Learning for Deep Learning

arXiv.org Machine Learning

Classical Active Learning (AL) approaches usually use statistical measures such as entropy and margin to quantify instance utility; however, they fail to capture the data-distribution information contained in the unlabeled data. This can eventually cause the classifier to select outlier instances to label. Meanwhile, the loss associated with mislabeling an instance in a typical classification task is much higher than the loss associated with the opposite error. To address these challenges, we propose Cost-Based Budget Active Learning (CBAL), which considers classification uncertainty as well as instance diversity in a population constrained by a budget. A principled min-max approach minimizes both the labeling and decision costs of the selected instances, ensuring near-optimal results with significantly less computational effort. Extensive experimental results show that the proposed approach outperforms several state-of-the-art active learning approaches.
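The two classical utility measures the abstract contrasts with CBAL can be written out concretely (standard textbook forms, assumed here for illustration): the entropy of the predicted class distribution, and the negated margin between the two most probable classes. For both, a higher score means a more uncertain, and hence more label-worthy, instance.

```python
# Hedged sketch of the classical entropy and margin utility measures.
from math import log

def entropy(probs):
    """Shannon entropy (nats) of a predicted class distribution."""
    return -sum(p * log(p) for p in probs if p > 0)

def margin_uncertainty(probs):
    """Negated gap between the two most probable classes; higher = less sure."""
    top2 = sorted(probs, reverse=True)[:2]
    return -(top2[0] - top2[1])

confident = [0.9, 0.05, 0.05]
uncertain = [0.4, 0.35, 0.25]
```

Both measures rank `uncertain` above `confident`, but neither looks beyond the single instance's predictive distribution, which is exactly the gap (no density or diversity information) that the abstract's outlier-selection criticism targets.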