
The Role of Symbolic AI and Machine Learning in Robotics

#artificialintelligence

Robotics is a multi-disciplinary field of computer science dedicated to the design and manufacture of robots, with applications in industries such as manufacturing, space exploration and defence. While the field has existed for over 50 years, recent advances such as Boston Dynamics' Spot and Atlas robots are truly capturing the public's imagination as science fiction becomes reality. Traditionally, robotics has relied on machine learning and deep learning techniques for tasks such as object recognition. While these have led to huge advances, the next frontier in robotics is to enable robots to operate in the real world autonomously, with as little human interaction as possible. Such autonomous robots differ from non-autonomous ones in that they operate in an open world: the rules are undefined, real-world observations are uncertain, and the environment itself is constantly changing.


What is Artificial Intelligence? How does AI work, Types, Trends and Future of it?

#artificialintelligence

Let's take a detailed look. Artificial Narrow Intelligence (ANI) is the most common form of AI you'd find in the market today. These systems are designed to solve a single problem and can execute a single task really well. By definition, they have narrow capabilities, like recommending a product to an e-commerce user or predicting the weather. This is the only kind of artificial intelligence that exists today. Such systems can come close to human performance in very specific contexts, and even surpass it in many instances, but they excel only in tightly controlled environments with a limited set of parameters. Artificial General Intelligence (AGI), by contrast, is still a theoretical concept. It is defined as AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational reasoning and so on.


Artificial Intelligence Tutorial for Beginners

#artificialintelligence

This Artificial Intelligence tutorial provides basic and intermediate information on the concepts of Artificial Intelligence. It is designed to help students and working professionals who are complete beginners. Our focus here is on artificial intelligence; if you wish to learn more about machine learning, you can check out this Machine Learning tutorial for complete beginners. Over the course of this tutorial, we will look at concepts such as the meaning of artificial intelligence, the levels of AI, why AI is important, its various applications, the future of artificial intelligence, and more.

Usually, working in the field of AI requires a lot of experience, so we will also discuss the various job profiles associated with artificial intelligence, which will eventually help you attain relevant experience. You don't need to come from a specific background to join the field of AI, as it is possible to learn and attain the skills needed. While the terms Data Science, Artificial Intelligence (AI) and Machine Learning fall in the same domain and are connected, each has its own specific meaning and applications. Simply put, artificial intelligence aims at enabling machines to reason by replicating human intelligence. Since the main objective of AI is to teach machines from experience, feeding them the right information and enabling self-correction is crucial.

So what is AI? The answer depends on who you ask. A layman with a fleeting understanding of technology would link it to robots. An AI researcher would say that it's a set of algorithms that can produce results without having to be explicitly instructed to do so. Both of these answers are right.


Best usages of Artificial Intelligence in everyday life (2022) - Dataconomy

#artificialintelligence

There are so many great applications of Artificial Intelligence in daily life, powered by machine learning and other techniques working in the background. AI is everywhere in our lives, from reading our emails to getting driving directions to receiving music or movie suggestions. Don't be scared of AI jargon; we've created a detailed AI glossary covering the most commonly used Artificial Intelligence terms and the basics of the field. Now, if you're ready, let's look at how we use AI in 2022. In popular culture, artificial intelligence most often appears as a group of intelligent robots bent on destroying humanity, or at the very least a stunning theme park. We're safe for now, because machines with general artificial intelligence don't yet exist, and they aren't expected anytime soon. You can learn about the risks and benefits of Artificial Intelligence in this article.


Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle

Journal of Artificial Intelligence Research

This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time, and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model with a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameter values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground truth trajectories. The proposed model reproduces observed behaviors that have not been replicated by the social force model and outperforms the social force model at predicting pedestrian behavior around the vehicle on the used dataset. The model generates explainable and real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to embedded prediction on an autonomous vehicle.
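The social force model the abstract builds on treats each pedestrian as a point mass accelerated by a driving force toward a goal plus exponential repulsive forces from other agents. A minimal sketch of one simulation step is below; the parameter values are illustrative defaults, not the calibrated values from the paper, and the decision model the authors add on top is not represented.

```python
import numpy as np

def social_force_step(pos, vel, goal, others, car_pos, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3, A_car=5.0, B_car=1.0):
    """One Euler step of a simplified social force model.

    pos, vel: (2,) position and velocity of the pedestrian.
    goal: (2,) target position; others: (N, 2) other pedestrians' positions;
    car_pos: (2,) vehicle position. Parameter values are illustrative only.
    """
    # Driving force: relax toward the desired speed v0 aimed at the goal.
    to_goal = goal - pos
    desired_dir = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    f_drive = (v0 * desired_dir - vel) / tau

    # Repulsive forces from other pedestrians, decaying exponentially.
    f_ped = np.zeros(2)
    for other in others:
        d = pos - other
        dist = np.linalg.norm(d) + 1e-9
        f_ped += A * np.exp(-dist / B) * (d / dist)

    # Repulsive force from the vehicle, with larger magnitude and range.
    d = pos - car_pos
    dist = np.linalg.norm(d) + 1e-9
    f_car = A_car * np.exp(-dist / B_car) * (d / dist)

    vel_new = vel + (f_drive + f_ped + f_car) * dt
    return pos + vel_new * dt, vel_new
```

Iterating this step for every pedestrian yields simulated trajectories that can be compared against ground truth, which is the kind of microscopic validation the paper performs.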


Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent Dynamics from Volumetric Video

arXiv.org Artificial Intelligence

We present Neural Marionette, an unsupervised approach that discovers the skeletal structure from a dynamic sequence and learns to generate diverse motions that are consistent with the observed motion dynamics. Given a video stream of point cloud observations of an articulated body under arbitrary motion, our approach discovers the unknown low-dimensional skeletal relationship that can effectively represent the movement. The discovered structure is then used to encode the motion priors of dynamic sequences in a latent structure, which can be decoded to relative joint rotations representing the full skeletal motion. Our approach works without any prior knowledge of the underlying motion or skeletal structure, and we demonstrate that the discovered structure is even comparable to the hand-labeled ground truth skeleton in representing a 4D sequence of motion. The skeletal structure embeds the general semantics of the possible motion space and can generate motions for diverse scenarios. We verify that the learned motion prior generalizes to multi-modal sequence generation, interpolation between two poses, and motion retargeting to a different skeletal structure.


Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning

arXiv.org Artificial Intelligence

This paper addresses the problem of learning abstractions that boost robot planning performance while providing strong guarantees of reliability. Although state-of-the-art hierarchical robot planning algorithms allow robots to efficiently compute long-horizon motion plans for achieving user-desired tasks, these methods typically rely upon environment-dependent state and action abstractions that need to be hand-designed by experts. We present a new approach for bootstrapping the entire hierarchical planning process. This allows us to compute abstract states and actions for new environments automatically, using the critical regions predicted by a deep neural network with an auto-generated robot-specific architecture. We show that the learned abstractions can be used with a novel multi-source bi-directional hierarchical robot planning algorithm that is sound and probabilistically complete. An extensive empirical evaluation on twenty different settings using holonomic and non-holonomic robots shows that (a) our learned abstractions provide the information necessary for efficient multi-source hierarchical planning; and that (b) this integrated approach to learning abstractions and planning outperforms state-of-the-art baselines by nearly a factor of ten in terms of planning time on test environments not seen during training.
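The core idea of using predicted critical regions is that a sampling-based planner can draw most of its samples from the regions a learned model flags as important (doorways, narrow passages) instead of uniformly from the workspace. The sketch below is a hypothetical illustration of that general biasing scheme, not the paper's multi-source bi-directional algorithm; the box-shaped regions and the `p_bias` parameter are my own simplifications.

```python
import random

def sample_state(critical_regions, bounds, p_bias=0.8):
    """Sample a 2D planner state, biased toward predicted critical regions.

    critical_regions: list of ((xmin, ymin), (xmax, ymax)) axis-aligned boxes,
    standing in for the output of a learned critical-region predictor.
    bounds: ((xmin, ymin), (xmax, ymax)) box for the whole workspace.
    With probability p_bias, sample inside a random critical region;
    otherwise fall back to uniform sampling over the workspace.
    """
    if critical_regions and random.random() < p_bias:
        (xmin, ymin), (xmax, ymax) = random.choice(critical_regions)
    else:
        (xmin, ymin), (xmax, ymax) = bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```

Keeping a nonzero probability of uniform sampling is what preserves probabilistic completeness: even if the predictor misses a region the plan must pass through, the planner can still eventually sample it.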


Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

arXiv.org Artificial Intelligence

In 1996, philosopher Helen Nissenbaum issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Using the conceptual framing of moral blame, Nissenbaum described four types of barriers to accountability that computerization presented: 1) "many hands," the problem of attributing moral responsibility for outcomes caused by many moral actors; 2) "bugs," a way software developers might shrug off responsibility by suggesting software errors are unavoidable; 3) "computer as scapegoat," shifting blame to computer systems as if they were moral actors; and 4) "ownership without liability," a free pass to the tech industry to deny responsibility for the software they produce. We revisit these four barriers in relation to the recent ascendance of data-driven algorithmic systems--technology often folded under the heading of machine learning (ML) or artificial intelligence (AI)--to uncover the new challenges for accountability that these systems present. We then look ahead to how one might construct and justify a moral, relational framework for holding responsible parties accountable, and argue that the FAccT community is uniquely well-positioned to develop such a framework to weaken the four barriers.


Memory-based gaze prediction in deep imitation learning for robot manipulation

arXiv.org Artificial Intelligence

Deep imitation learning is a promising approach that does not require hard-coded control rules for autonomous robot manipulation. Current applications of deep imitation learning to robot manipulation have been limited to reactive control based on the state at the current time step. However, future robots will also be required to solve tasks using memory obtained through experience in complicated environments (e.g., when the robot is asked to find a previously used object on a shelf). In such situations, simple deep imitation learning may fail because of distractions caused by the complicated environment. We propose that gaze prediction from sequential visual input enables the robot to perform a manipulation task that requires memory. The proposed algorithm uses a Transformer-based self-attention architecture for gaze estimation from sequential data to implement memory. The proposed method was evaluated on a real-robot multi-object manipulation task that requires memory of previous states.
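Self-attention is what lets the gaze predictor weigh earlier frames when estimating where to look now. A minimal single-head attention sketch over a sequence of per-frame feature vectors is below; it illustrates the mechanism only, and the projection matrices, feature dimensions, and the final linear gaze head are assumptions, not the paper's architecture.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence of visual features.

    X: (T, d) per-frame feature vectors; Wq, Wk, Wv: (d, d) projections.
    Returns (T, d) attended features, one per time step.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])          # (T, T) pairwise scores
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over time steps
    return weights @ V

def predict_gaze(X, Wq, Wk, Wv, W_out):
    """Regress a 2D gaze point from the attended feature of the last frame.

    W_out: (d, 2) hypothetical linear readout to image coordinates.
    """
    attended = self_attention(X, Wq, Wk, Wv)
    return attended[-1] @ W_out                     # (2,) gaze coordinates
```

Because the attended feature for the last frame is a weighted sum over all earlier frames, information about a previously seen object can survive into the current gaze estimate, which is the memory effect the abstract describes.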


Conditional Motion In-betweening

arXiv.org Artificial Intelligence

Motion in-betweening (MIB) is the process of generating intermediate skeletal movement between given start and target poses while preserving the naturalness of the motion, such as periodic footstep motion while walking. Although state-of-the-art MIB methods are capable of producing plausible motions given sparse key-poses, they often lack the controllability to generate motions satisfying the semantic contexts required in practical applications. We focus on a method that can handle both pose-conditioned and semantic-conditioned MIB tasks using a unified model. We also present a motion augmentation method to improve the quality of pose-conditioned motion generation by defining a distribution over smooth trajectories. Our proposed method outperforms the existing state-of-the-art MIB method on pose prediction error while providing additional controllability.
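To make the MIB task concrete, the simplest non-learned baseline interpolates each joint's rotation between the start and target poses with quaternion slerp, using an ease-in/ease-out timing curve for plausibility. The sketch below illustrates the task setup only, assuming per-joint unit quaternions; it is not the paper's model, which learns to produce far richer motion than this baseline.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0:                  # flip to take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-6:             # nearly identical rotations
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def in_between(start_pose, target_pose, n_frames):
    """Naive pose-conditioned in-betweening baseline.

    start_pose, target_pose: (J, 4) unit quaternions, one per joint.
    Returns n_frames intermediate (J, 4) poses via per-joint slerp with
    smoothstep easing for natural acceleration and deceleration.
    """
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)
        t = 3 * t**2 - 2 * t**3  # smoothstep easing
        frames.append(np.array([slerp(q0, q1, t)
                                for q0, q1 in zip(start_pose, target_pose)]))
    return frames
```

A baseline like this preserves smoothness but cannot satisfy semantic conditions (e.g., "walk" vs. "run" between the same key-poses), which is exactly the controllability gap the paper targets.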