
An Introduction to Machine Learning - Notes on New Technologies

#artificialintelligence

Humans learn from past experience, while machines follow the instructions given by humans. But what if humans could train machines to learn from past experience (data) and act much faster? This is the idea behind machine learning. Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. Machine learning algorithms build a mathematical model from sample data, known as training data, in order to make predictions or decisions. Machine learning is not only about learning, but also about understanding and reasoning. A machine learning system is not programmed; it is taught with data.
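This train-then-predict workflow can be made concrete in a few lines of code. The sketch below fits a model to a handful of made-up example points and predicts an unseen input; it uses scikit-learn, and the dataset and model choice are illustrative assumptions rather than anything from the article.

```python
# A minimal "learning from data" sketch using scikit-learn.
# The dataset (hours studied -> exam score) is an invented illustration.
from sklearn.linear_model import LinearRegression

X_train = [[1], [2], [3], [4], [5]]   # past experience: inputs
y_train = [52, 60, 71, 79, 88]        # past experience: outcomes

model = LinearRegression()
model.fit(X_train, y_train)           # the machine is "taught with data"

print(model.predict([[6]]))           # prediction for an unseen input
```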


Facebook's Open Source Framework For Training Graph-Based ML Models

#artificialintelligence

Here, GTN is used for automatic differentiation of weighted finite-state transducers (WFSTs), an expressive and powerful class of graphs. The framework separates graphs from the operations on them, which helps in exploring new structured loss functions and in turn makes it easier to encode prior knowledge into learning algorithms. Further, in a paper published in this regard, Awni Hannun, Vineel Pratap, Jacob Kahn and Wei-Ning Hsu of Facebook AI Research proposed a convolutional WFST layer to be used in the interior of a deep neural network for mapping lower-level to higher-level representations. GTN is written in C++ and has bindings to Python. GTN can be used to express and design sequence-level loss functions.
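A rough sketch of what the Python bindings look like, adapted from the project's public examples; the exact names and signatures should be checked against the GTN documentation and are best treated as assumptions here.

```python
import gtn  # Facebook's GTN library

# Build a tiny weighted automaton: nodes 0 -> 1 -> 2.
g = gtn.Graph()
g.add_node(True)         # start node (id 0)
g.add_node()             # internal node (id 1)
g.add_node(False, True)  # accepting node (id 2)
g.add_arc(0, 1, 0)       # source node, destination node, label
g.add_arc(1, 2, 1)

# A differentiable graph operation: the forward (log-sum-exp) score.
loss = gtn.forward_score(g)

# Automatic differentiation through the graph structure.
gtn.backward(loss)
```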


Agent-Centered Search

AI Magazine

In this article, I describe agent-centered search (also called real-time search or local search) and illustrate this planning paradigm with examples. Agent-centered search methods interleave planning and plan execution and restrict planning to the part of the domain around the current state of the agent, for example, the current location of a mobile robot or the current board position of a game. These methods can execute actions in the presence of time constraints and often have a small sum of planning and execution cost, both because they trade off planning and execution cost and because they allow agents to gather information early in nondeterministic domains, which reduces the amount of planning they have to perform for unencountered situations. These advantages become important as more intelligent systems are interfaced with the world and have to operate autonomously in complex environments. Agent-centered search methods have been applied to a variety of domains, including traditional search, STRIPS-type planning, moving-target search, planning with totally and partially observable Markov decision process models, reinforcement learning, constraint satisfaction, and robot navigation.
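The canonical example of this paradigm is Korf's Learning Real-Time A* (LRTA*). The simplified sketch below (with an assumed successors/heuristic interface, not code from the article) shows the core loop: plan only among the neighbors of the current state, update that state's heuristic, execute one action, and repeat.

```python
# Simplified LRTA*: interleaves planning and plan execution, restricting
# planning to the neighborhood of the agent's current state.
# `successors(s)` -> iterable of (action, next_state, cost) and the
# heuristic dict `h` are assumed interfaces.
def lrta_star(start, goal, successors, h):
    state, path = start, []
    while state != goal:
        # Local planning: one-step lookahead around the current state.
        best_action, best_state, best_f = None, None, float("inf")
        for action, nxt, cost in successors(state):
            f = cost + h.get(nxt, 0.0)
            if f < best_f:
                best_action, best_state, best_f = action, nxt, f
        # Learning step: raise the current state's heuristic estimate.
        h[state] = max(h.get(state, 0.0), best_f)
        # Execute immediately instead of planning the whole path first.
        path.append(best_action)
        state = best_state
    return path
```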


Python Data Science with Pandas: Master 12 Advanced Projects

#artificialintelligence

Online Courses Udemy - Python Data Science with Pandas: Master 12 Advanced Projects. Work with Pandas, SQL databases, JSON, web APIs and more to master your real-world machine learning and finance projects. Bestseller, created by Alexander Hagmann, English [Auto].

Description: Welcome to the first advanced and project-based Pandas data science course! This course starts where many other courses end: you can write some Pandas code, but you are still struggling with real-world projects because:

- real-world data is typically not provided in a single or a few text/Excel files, so more advanced data-importing techniques are required;
- real-world data is large, unstructured, nested and unclean, so more advanced data manipulation and data analysis/visualization techniques are required;
- many easy-to-use Pandas methods work best with relatively small and clean datasets, while real-world datasets require more general code (incorporating other libraries/modules).

No matter whether you need excellent Pandas skills for data analysis, machine learning or finance purposes, this is the right course to take your skills to expert level. The course covers the full data workflow A-Z: import (complex and nested) data from JSON files; efficiently import and merge data from many text/CSV files; clean, handle and flatten nested and stringified data in DataFrames.
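As a small taste of the "import and flatten nested JSON" step of that workflow, here is a minimal sketch using pandas' json_normalize; the sample records are invented for illustration.

```python
import pandas as pd

# Invented nested records, of the kind a web API might return.
records = [
    {"id": 1, "user": {"name": "Ada", "city": "London"},
     "orders": [{"item": "book", "price": 12.5}]},
    {"id": 2, "user": {"name": "Alan", "city": "Cambridge"},
     "orders": [{"item": "pen", "price": 2.0}]},
]

# Flatten the nested "user" dict into ordinary columns.
users = pd.json_normalize(records, sep="_")

# Explode the nested "orders" list into one row per order,
# carrying the parent "id" along as metadata.
orders = pd.json_normalize(records, record_path="orders", meta=["id"])

print(users.head())
print(orders.head())
```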


Visual Methods for Sign Language Recognition: A Modality-Based Review

arXiv.org Artificial Intelligence

Sign language visual recognition from continuous multi-modal streams is still one of the most challenging fields. Recent advances in human action recognition are exploiting the rise of GPU-based learning from massive data and are getting closer to human-like performance. They thus open the way to interactive services for the deaf and hearing-impaired communities, a population that is expected to grow considerably in the years to come. This paper aims at reviewing the human action recognition literature with sign language visual understanding as its scope. The methods analyzed will be mainly organized according to the different types of unimodal inputs exploited, their relative multi-modal combinations and pipeline steps. In each section, we will detail and compare the related datasets and approaches, then distinguish the still-open contribution paths suitable for the creation of sign language related services. Special attention will be paid to the approaches and commercial solutions handling facial expressions and continuous signing.


LTLf Synthesis on Probabilistic Systems

arXiv.org Artificial Intelligence

Many systems are naturally modeled as Markov Decision Processes (MDPs), combining probabilities and strategic actions. Given a model of a system as an MDP and some logical specification of system behavior, the goal of synthesis is to find a policy that maximizes the probability of achieving this behavior. A popular choice for defining behaviors is Linear Temporal Logic (LTL). Policy synthesis on MDPs for properties specified in LTL has been well studied. LTL, however, is defined over infinite traces, while many properties of interest are inherently finite. Linear Temporal Logic over finite traces (LTLf) has been used to express such properties, but no tools exist to solve policy synthesis for MDP behaviors given finite-trace properties. We present two algorithms for solving this synthesis problem: the first via reduction of LTLf to LTL and the second using native tools for LTLf. We compare the two approaches and show that the native approach offers better scalability than existing automaton-generation tools for LTL.
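Either way, the problem ultimately reduces to maximizing the probability of reaching accepting states in the product of the MDP with an automaton for the property. A generic sketch of that final step, maximal reachability probability via value iteration, is shown below; the transition interface `P[s][a]` is an assumption for illustration, not the paper's implementation.

```python
# Value iteration for the maximal probability of reaching an accepting
# set in a product MDP. `P[s][a]` is a list of (next_state, prob) pairs;
# this interface is assumed for illustration.
def max_reach_probability(states, actions, P, accepting, eps=1e-8):
    v = {s: (1.0 if s in accepting else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in accepting:
                continue  # accepting states are absorbing with value 1
            best = max(sum(p * v[t] for t, p in P[s][a]) for a in actions)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v  # v[s]: max probability of eventually accepting
```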


The relationship between dynamic programming and active inference: the discrete, finite-horizon case

arXiv.org Artificial Intelligence

Active inference is a normative framework for generating behaviour based upon the free energy principle, a theory of self-organisation. This framework has been successfully used to solve reinforcement learning and stochastic control problems, yet, the formal relation between active inference and reward maximisation has not been fully explicated. In this paper, we consider the relation between active inference and dynamic programming under the Bellman equation, which underlies many approaches to reinforcement learning and control. We show that, on partially observable Markov decision processes, dynamic programming is a limiting case of active inference. In active inference, agents select actions to minimise expected free energy. In the absence of ambiguity about states, this reduces to matching expected states with a target distribution encoding the agent's preferences. When target states correspond to rewarding states, this maximises expected reward, as in reinforcement learning. When states are ambiguous, active inference agents will choose actions that simultaneously minimise ambiguity. This allows active inference agents to supplement their reward maximising (or exploitative) behaviour with novelty-seeking (or exploratory) behaviour. This clarifies the connection between active inference and reinforcement learning, and how both frameworks may benefit from each other.
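The central quantity in this comparison is the expected free energy of a policy. In its standard form from the active-inference literature (stated here in generic notation, which may differ from the paper's), it decomposes into risk and ambiguity:

```latex
% Expected free energy of policy \pi at time \tau:
% risk (divergence from preferred outcomes) plus ambiguity
% (expected entropy of the likelihood).
G(\pi, \tau) =
  \underbrace{D_{\mathrm{KL}}\!\left[ q(o_\tau \mid \pi) \,\middle\|\, p(o_\tau \mid C) \right]}_{\text{risk}}
  + \underbrace{\mathbb{E}_{q(s_\tau \mid \pi)}\!\left[ \mathrm{H}\!\left[ p(o_\tau \mid s_\tau) \right] \right]}_{\text{ambiguity}}
```

When the likelihood is unambiguous, the second term vanishes and minimising G reduces to matching predicted outcomes to the preference distribution p(o | C); reading ln p(o | C) as reward recovers expected-reward maximisation, the limiting case the paper makes precise.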


Reinforcement Learning Approaches in Social Robotics

arXiv.org Artificial Intelligence

In order to facilitate natural interaction, researchers in social robotics have focused on robots that can adapt to diverse conditions and to the different users with whom they interact. Recently, there has been great interest in the use of machine learning methods for adaptive social robots [48], [29], [106], [45], [49], [86]. Machine Learning (ML) algorithms can be categorized into three subfields [2]: supervised learning, unsupervised learning and reinforcement learning. In supervised learning, correct input/output pairs are available and the goal is to find a correct mapping from input to output space. In unsupervised learning, output data is not available and the goal is to find patterns in the input data. Reinforcement Learning (RL) [96] is a framework for decision-making problems in which an agent interacts with its environment through trial and error to discover an optimal behavior. The agent does not receive direct feedback about correctness; instead, it receives scarce feedback about the actions it has taken in the past.
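That trial-and-error loop with scarce feedback is captured by tabular Q-learning, one of the standard RL algorithms (a generic sketch, not a method from this survey); the reset()/step() environment interface is an assumed convention.

```python
import random
from collections import defaultdict

# Tabular Q-learning: learn action values from sparse reward feedback.
# `env.reset()` -> state and `env.step(a)` -> (next_state, reward, done)
# are assumed conventions.
def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Trial and error: mostly exploit, occasionally explore.
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Move the estimate toward reward plus bootstrapped value.
            target = reward if done else reward + gamma * max(
                Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```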


Exploiting Submodular Value Functions For Scaling Up Active Perception

arXiv.org Artificial Intelligence

In active perception tasks, an agent aims to select sensory actions that reduce its uncertainty about one or more hidden variables. While partially observable Markov decision processes (POMDPs) provide a natural model for such problems, reward functions that directly penalize uncertainty in the agent's belief can remove the piecewise-linear and convex (PWLC) property of the value function required by most POMDP planners. Furthermore, as the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially with it, making POMDP planning infeasible with traditional methods. In this article, we address the twofold challenge of modeling and planning for active perception tasks. We show the mathematical equivalence of $\rho$POMDP and POMDP-IR, two frameworks for modeling active perception tasks that restore the PWLC property of the value function. To plan efficiently for active perception tasks, we identify and exploit the independence properties of POMDP-IR to reduce the computational cost of solving POMDP-IR (and $\rho$POMDP). We propose greedy point-based value iteration (PBVI), a new POMDP planning method that uses greedy maximization to greatly improve scalability in the action space of an active perception POMDP. Furthermore, we show that, under certain conditions, including submodularity, the value function computed using greedy PBVI is guaranteed to have bounded error with respect to the optimal value function. We establish the conditions under which the value function of an active perception POMDP is guaranteed to be submodular. Finally, we present a detailed empirical analysis on a dataset collected from a multi-camera tracking system employed in a shopping mall. Our method achieves performance similar to existing methods at a fraction of the computational cost, leading to better scalability for solving active perception tasks.
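The greedy maximization at the heart of greedy PBVI replaces an exact search over sensor subsets with iterative best-marginal-gain selection, which for monotone submodular objectives carries the classic (1 - 1/e) approximation guarantee of Nemhauser et al. A generic sketch follows; the set-function interface `f` is assumed for illustration, not taken from the article.

```python
# Greedy maximization of a set function f over sensor subsets of size k.
# For monotone submodular f, the greedy solution is within (1 - 1/e) of
# optimal (Nemhauser et al., 1978). `f(subset)` is an assumed interface,
# e.g. the expected uncertainty reduction from the chosen sensors.
def greedy_max(sensors, f, k):
    chosen = set()
    for _ in range(k):
        # Add the sensor with the largest marginal gain.
        best = max(
            (s for s in sensors if s not in chosen),
            key=lambda s: f(chosen | {s}) - f(chosen),
        )
        chosen.add(best)
    return chosen
```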


Dynamic Horizon Value Estimation for Model-based Reinforcement Learning

arXiv.org Artificial Intelligence

Existing model-based value expansion methods typically leverage a world model with a fixed rollout horizon for value estimation to assist policy learning. However, a fixed rollout with an inaccurate model can harm the learning process. In this paper, we investigate the idea of using model knowledge for value expansion adaptively. We propose a novel method called Dynamic-horizon Model-based Value Expansion (DMVE) that adjusts world-model usage across different rollout horizons. Inspired by reconstruction-based techniques for visual novelty detection, we use a world model with a reconstruction module for image feature extraction in order to acquire more precise value estimates. Both the raw and the reconstructed images are used to determine the appropriate horizon for adaptive value expansion. On several benchmark visual control tasks, experimental results show that DMVE outperforms all baselines in sample efficiency and final performance, indicating that DMVE can achieve more effective and accurate value estimation than state-of-the-art model-based methods.
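For context, the H-step expansion target that DMVE adapts per state has the standard model-based value expansion (MVE) form sketched below; the horizon-selection rule is the paper's contribution and is elided, and the `model`, `policy` and `value` callables are assumed interfaces.

```python
# Standard H-step model-based value expansion (MVE) target. DMVE's
# novelty is choosing the horizon H adaptively per state (elided here).
# `model(s, a) -> (next_state, reward)`, `policy(s) -> action`, and
# `value(s) -> float` are assumed interfaces.
def mve_target(state, model, policy, value, horizon, gamma=0.99):
    ret, discount, s = 0.0, 1.0, state
    for _ in range(horizon):
        a = policy(s)
        s, r = model(s, a)        # imagined step in the world model
        ret += discount * r
        discount *= gamma
    # Bootstrap with the learned value function at the rollout frontier.
    return ret + discount * value(s)
```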