Artificial Intelligence Tutorial for Beginners

#artificialintelligence

This Artificial Intelligence tutorial provides basic and intermediate information on the concepts of Artificial Intelligence. It is designed to help students and working professionals who are complete beginners. The focus of this tutorial is artificial intelligence; if you wish to learn more about machine learning, you can check out this Machine Learning tutorial for complete beginners. Over the course of this Artificial Intelligence tutorial, we will look at various concepts such as the meaning of artificial intelligence, the levels of AI, why AI is important, its various applications, the future of artificial intelligence, and more. Usually, to work in the field of AI, you need a lot of experience. Thus, we will also discuss the various job profiles associated with artificial intelligence, which will eventually help you attain relevant experience. You don't need to come from a specific background before joining the field of AI, as it is possible to learn and attain the skills needed. While the terms Data Science, Artificial Intelligence (AI) and Machine Learning fall in the same domain and are connected, they have their own specific applications and meanings. Simply put, artificial intelligence aims at enabling machines to execute reasoning by replicating human intelligence. Since the main objective of AI processes is to teach machines from experience, feeding them the right information and enabling self-correction is crucial. So what exactly is artificial intelligence? The answer would depend on who you ask. A layman with a fleeting understanding of technology would link it to robots. If you ask an AI researcher about artificial intelligence, (s)he would say that it's a set of algorithms that can produce results without having to be explicitly instructed to do so. Both of these answers are right.


Gato, the latest from Deepmind. Towards true AI?

#artificialintelligence

The deep learning field is progressing rapidly, and the latest work from Deepmind is a good example of this. Their Gato model is able to learn to play Atari games, generate realistic text, process images, control robotic arms, and more, all with the same neural network. Inspired by large-scale language models, Deepmind applied a similar approach but extended it beyond the realm of text outputs. This new AGI (short for Artificial General Intelligence) works as a multi-modal, multi-task, multi-embodiment network, meaning that the same network (i.e. a single architecture with a single set of weights) can perform all of the tasks, even though they involve inherently different kinds of inputs and outputs. While Deepmind's preprint presenting Gato is not very detailed, it makes clear that the model is strongly rooted in the transformers used for natural language processing and text generation.
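To make the multi-modal idea concrete, here is a minimal sketch (not Deepmind's actual code) of how different modalities could be serialized into a single token sequence consumed by one transformer with a single set of weights; the vocabulary size, bin count, and helper functions are assumptions for illustration only.

# Hypothetical sketch of Gato-style serialization: every modality is mapped into
# one flat token sequence so that a single transformer (single set of weights)
# can be trained on all tasks. Vocabulary size and bin count are illustrative.

from typing import List

TEXT_VOCAB = 32_000          # assumed size of a subword text vocabulary
DISCRETE_BINS = 1024         # bins for continuous values (pixels, torques)

def tokenize_text(token_ids: List[int]) -> List[int]:
    # Text tokens occupy the range [0, TEXT_VOCAB).
    return [t for t in token_ids if 0 <= t < TEXT_VOCAB]

def tokenize_continuous(values: List[float], lo: float, hi: float) -> List[int]:
    # Continuous values are uniformly discretized and shifted past the text
    # vocabulary so that all modalities share one id space.
    out = []
    for v in values:
        clipped = min(max(v, lo), hi)
        bin_id = int((clipped - lo) / (hi - lo) * (DISCRETE_BINS - 1))
        out.append(TEXT_VOCAB + bin_id)
    return out

def build_timestep_sequence(observation_pixels: List[float],
                            instruction_tokens: List[int],
                            action: List[float]) -> List[int]:
    # One timestep: [observation tokens | instruction tokens | action tokens].
    return (tokenize_continuous(observation_pixels, 0.0, 1.0)
            + tokenize_text(instruction_tokens)
            + tokenize_continuous(action, -1.0, 1.0))

# A single autoregressive transformer would then be trained to predict the next
# token of such sequences, whether it encodes text, a pixel patch, or a torque.
sequence = build_timestep_sequence([0.1, 0.9, 0.5], [17, 42], [0.3, -0.7])
print(sequence)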


Core Challenges in Embodied Vision-Language Planning

Journal of Artificial Intelligence Research

Recent advances in the areas of multimodal machine learning and artificial intelligence (AI) have led to the development of challenging tasks at the intersection of Computer Vision, Natural Language Processing, and Embodied AI. Whereas many approaches and previous survey pursuits have characterised one or two of these dimensions, there has not been a holistic analysis at the center of all three. Moreover, even when combinations of these topics are considered, more focus is placed on describing, e.g., current architectural methods, as opposed to also illustrating high-level challenges and opportunities for the field. In this survey paper, we discuss Embodied Vision-Language Planning (EVLP) tasks, a family of prominent embodied navigation and manipulation problems that jointly use computer vision and natural language. We propose a taxonomy to unify these tasks and provide an in-depth analysis and comparison of the new and current algorithmic approaches, metrics, simulated environments, as well as the datasets used for EVLP tasks. Finally, we present the core challenges that we believe new EVLP works should seek to address, and we advocate for task construction that enables model generalizability and furthers real-world deployment.
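As a rough illustration of what unifies the EVLP family (this is not code from the survey, and all names are invented), the sketch below shows the common task shape: a visual observation paired with a language instruction, a small discrete action space, and episode-level metrics such as success rate.

# Illustrative sketch of the shared shape of an EVLP task: an agent receives a
# visual observation plus a natural-language instruction, acts in the
# environment, and is scored with episode-level metrics.

from dataclasses import dataclass
from typing import List

ACTIONS = ["move_forward", "turn_left", "turn_right", "pick", "place", "stop"]

@dataclass
class EVLPObservation:
    rgb: List[List[float]]      # camera frame (placeholder for real image data)
    instruction: str            # natural-language goal description

@dataclass
class EpisodeResult:
    success: bool               # did the agent satisfy the instruction?
    path_length: float          # used by efficiency-weighted metrics (e.g. SPL)

def success_rate(results: List[EpisodeResult]) -> float:
    # The most common EVLP evaluation metric: fraction of episodes solved.
    return sum(r.success for r in results) / len(results) if results else 0.0

obs = EVLPObservation(rgb=[[0.0, 0.0, 0.0]],
                      instruction="go to the kitchen and pick up the mug")
episodes = [EpisodeResult(True, 12.0), EpisodeResult(False, 30.0), EpisodeResult(True, 8.5)]
print(success_rate(episodes))   # 0.666...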


Best usages of Artificial Intelligence in everyday life (2022) - Dataconomy

#artificialintelligence

There are many great applications of Artificial Intelligence in daily life that use machine learning and other techniques in the background. AI is everywhere in our lives, from reading our emails to receiving driving directions to obtaining music or movie suggestions. Don't be scared of AI jargon; we've created a detailed AI glossary of the most commonly used Artificial Intelligence terms and the basics of Artificial Intelligence. Now, if you're ready, let's look at how we use AI in 2022. Artificial intelligence (AI) appears in popular culture most often as a group of intelligent robots bent on destroying humanity, or at the very least a stunning theme park. We're safe for now, because machines with general artificial intelligence don't yet exist and aren't expected anytime soon. You can learn about the risks and benefits of Artificial Intelligence with this article.


16 best artificial intelligence games & AI gaming - Dataconomy

#artificialintelligence

Do you love artificial intelligence games? Artificial intelligence (AI) has played an increasingly important and productive role in the gaming industry since IBM's computer program, Deep Blue, defeated Garry Kasparov in a 1997 chess match. AI is used to enhance game assets, behaviors, and settings in various ways. According to some experts, the most effective AI applications in gaming are those that aren't obvious. AI games come in a variety of forms every year, and each kind utilizes AI differently. It's more than likely that artificial intelligence is responsible for the replies and actions of non-playable characters, and it is essential there because these characters must exhibit human-like competence. Previously, AI was used to predict your best next move; in this age of gaming, it enhances your game's visuals and solves gameplay issues for you. Games themselves, on the other hand, are not entirely reliant upon AI. Meanwhile, AI technologies have improved significantly as a result of research for game development.
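For a sense of what "AI behind non-playable characters" often means in practice, here is a toy finite-state-machine sketch; the states, thresholds, and actions are invented for illustration and are not taken from the article.

# A toy illustration of the lightweight AI that commonly drives non-playable
# characters: a finite-state machine choosing the NPC's next action from the
# game state. States, thresholds, and actions are invented for this sketch.

def npc_next_action(state: str, player_distance: float, npc_health: float):
    if npc_health < 0.2:
        return "flee", "retreating"
    if state == "patrolling":
        if player_distance < 10.0:
            return "chase", "chasing"
        return "patrol", "patrolling"
    if state == "chasing":
        if player_distance < 2.0:
            return "attack", "attacking"
        if player_distance > 20.0:
            return "patrol", "patrolling"
        return "chase", "chasing"
    if state == "attacking":
        if player_distance >= 2.0:
            return "chase", "chasing"
        return "attack", "attacking"
    return "patrol", "patrolling"   # default, e.g. after "retreating"

# Example tick of the game loop:
action, new_state = npc_next_action("patrolling", player_distance=8.0, npc_health=0.9)
print(action, new_state)   # -> chase chasing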


AI glossary: Artificial Intelligence terms - Dataconomy

#artificialintelligence

The most complete dictionary-style list of Artificial Intelligence terms is here for you. Artificial intelligence is already all around us. As AI becomes increasingly prevalent in the workplace, it's more important than ever to keep up with the newest terms and use cases. Leaders in the field of artificial intelligence are well aware that it is revolutionizing business. So, how much do you know about it? You'll discover concise definitions for automation tools and phrases below. It's no surprise that the world is moving ahead quickly thanks to the wonders of artificial intelligence. Technology has introduced new values and creativity to our personal and professional lives. While frightening at times, this rapid evolution of artificial intelligence (AI) technology has also brought new aspects with it. It has given us new phrases, ones we had never heard before, to add to our everyday vocabulary.


Predicting Decisions in Language Based Persuasion Games

Journal of Artificial Intelligence Research

Sender-receiver interactions, and specifically persuasion games, are widely researched in economic modeling and artificial intelligence, and serve as a solid foundation for powerful applications. However, in the classic persuasion games setting, the messages sent from the expert to the decision-maker are abstract or well-structured application-specific signals rather than natural (human) language messages, although natural language is a very common communication signal in real-world persuasion setups. This paper addresses the use of natural language in persuasion games, exploring its impact on the decisions made by the players and aiming to construct effective models for the prediction of these decisions. For this purpose, we conduct an online repeated interaction experiment. At each trial of the interaction, an informed expert aims to sell an uninformed decision-maker a vacation in a hotel, by sending her a review that describes the hotel. While the expert is exposed to several scored reviews, the decision-maker observes only the single review sent by the expert, and her payoff in case she chooses to take the hotel is a random draw from the review score distribution available to the expert only. The expert’s payoff, in turn, depends on the number of times the decision-maker chooses the hotel. We also compare the behavioral patterns in this experiment to the equivalent patterns in similar experiments where the communication is based on the numerical values of the reviews rather than the reviews’ text, and observe substantial differences which can be explained through an equilibrium analysis of the game. We consider a number of modeling approaches for our verbal communication setup, differing from each other in the model type (deep neural network (DNN) vs. linear classifier), the type of features used by the model (textual, behavioral or both) and the source of the textual features (DNN-based vs. hand-crafted). Our results demonstrate that given a prefix of the interaction sequence, our models can predict the future decisions of the decision-maker, particularly when a sequential modeling approach and hand-crafted textual features are applied. Further analysis of the hand-crafted textual features allows us to make initial observations about the aspects of text that drive decision making in our setup.
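As a hedged sketch of the sequential modelling idea described in the abstract (not the authors' code), the snippet below feeds per-trial feature vectors, which could combine hand-crafted textual and behavioural features, through an LSTM over the interaction prefix to predict the decision-maker's next choice; the feature dimension and architecture details are assumptions.

# Sketch of prefix-based decision prediction: each past trial is a feature
# vector (textual + behavioural features, assumed here), an LSTM summarizes the
# prefix, and a linear head outputs P(decision-maker takes the hotel).

import torch
import torch.nn as nn

class DecisionPredictor(nn.Module):
    def __init__(self, feature_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, prefix: torch.Tensor) -> torch.Tensor:
        # prefix: (batch, trials_so_far, feature_dim)
        hidden_states, _ = self.encoder(prefix)
        last_state = hidden_states[:, -1, :]     # summary of the interaction prefix
        return torch.sigmoid(self.head(last_state))

# Example: a batch of 4 interactions, each with a 5-trial prefix of 16 features.
model = DecisionPredictor()
prefix_features = torch.randn(4, 5, 16)
print(model(prefix_features).shape)              # torch.Size([4, 1])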


Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs

arXiv.org Artificial Intelligence

The importance of explainability in AI has become a pressing concern, for which several explainable AI (XAI) approaches have been recently proposed. However, most of the available XAI techniques are post-hoc methods, which may be only partially reliable, as they do not exactly reflect the state of the original models. Thus, a more direct way of achieving XAI is through interpretable (also called glass-box) models. These models have been shown to obtain comparable (and, in some cases, better) performance with respect to black-box models in various tasks such as classification and reinforcement learning. However, they struggle when working with raw data, especially when the input dimensionality increases and the raw inputs alone do not give valuable insights into the decision-making process. Here, we propose to use end-to-end pipelines composed of multiple interpretable models co-optimized by means of evolutionary algorithms, which allows us to decompose the decision-making process into two parts: computing high-level features from raw data, and reasoning on the extracted high-level features. We test our approach in reinforcement learning environments from the Atari benchmark, where we obtain comparable results (with respect to black-box approaches) in settings without stochastic frame-skipping, while performance degrades in frame-skipping settings.
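The snippet below is a deliberately simplified stand-in (not the authors' implementation) for the two-stage idea: a glass-box feature extractor and an interpretable rule evolved jointly by a tiny evolutionary loop on a toy task; the fitness function, genome encoding, and mutation scheme are assumptions.

# Simplified sketch of an interpretable two-stage pipeline co-optimized by
# evolution: stage 1 extracts high-level features from raw inputs, stage 2
# reasons over them, and both parameter sets are evolved together.

import random

def extract_features(raw_obs, thresholds):
    # Stage 1: glass-box extractor -- compare raw inputs to evolved thresholds.
    return [1 if x > t else 0 for x, t in zip(raw_obs, thresholds)]

def decide(features, rule_weights):
    # Stage 2: interpretable linear rule over the binary high-level features.
    score = sum(w * f for w, f in zip(rule_weights, features))
    return "act" if score > 0 else "wait"

def fitness(genome, episodes=200):
    thresholds, rule_weights = genome
    correct = 0
    for _ in range(episodes):
        raw_obs = [random.random() for _ in range(len(thresholds))]
        # Toy ground truth: acting is right when the first input is high.
        target = "act" if raw_obs[0] > 0.5 else "wait"
        if decide(extract_features(raw_obs, thresholds), rule_weights) == target:
            correct += 1
    return correct / episodes

def evolve(n_features=3, generations=50):
    # (1+1)-style evolution co-optimizing both stages at once.
    genome = ([random.random() for _ in range(n_features)],
              [random.uniform(-1, 1) for _ in range(n_features)])
    best = fitness(genome)
    for _ in range(generations):
        child = ([t + random.gauss(0, 0.1) for t in genome[0]],
                 [w + random.gauss(0, 0.1) for w in genome[1]])
        f = fitness(child)
        if f >= best:
            genome, best = child, f
    return genome, best

genome, score = evolve()
print("accuracy of evolved interpretable pipeline:", score)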


Latent gaze information in highly dynamic decision-tasks

arXiv.org Artificial Intelligence

Digitization is penetrating more and more areas of life. Tasks are increasingly being completed digitally, and are therefore fulfilled not only faster and more efficiently but also more purposefully and successfully. The rapid developments in the field of artificial intelligence in recent years have played a major role in this, as they have brought up many helpful approaches to build on. At the same time, the eyes, their movements, and the meaning of these movements are being progressively researched. The combination of these developments has led to exciting approaches. In this dissertation, I present some of these approaches, which I worked on during my Ph.D. First, I provide insight into the development of models that use artificial intelligence to connect eye movements with visual expertise. This is demonstrated for two domains, or rather groups of people: athletes in decision-making actions and surgeons in arthroscopic procedures. The resulting models can be considered digital diagnostic models for automatic expertise recognition. Furthermore, I show approaches that investigate the transferability of eye movement patterns to different expertise domains and, subsequently, important aspects of techniques for generalization. Finally, I address the temporal detection of confusion based on eye movement data. The results suggest the use of the resulting model as a clock signal for possible digital assistance options in the training of young professionals. An interesting aspect of my research is that I was able to draw on very valuable data from DFB youth elite athletes as well as from long-standing experts in arthroscopy. In particular, the work with the DFB data attracted the interest of radio and print media, namely DeutschlandFunk Nova and SWR DasDing. All resulting articles presented here have been published in internationally renowned journals or at conferences.
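Purely as an illustration of the kind of gaze features such expertise-recognition models might consume (this is not code from the dissertation), the sketch below groups raw gaze samples into fixations and summarizes them; the dispersion threshold and feature set are assumptions.

# Illustrative fixation-feature extraction: consecutive low-dispersion gaze
# samples are grouped into fixations, and simple statistics are computed that a
# downstream expertise classifier could use.

from statistics import mean

def fixation_features(gaze_samples, dispersion_threshold=0.02, min_duration=3):
    """gaze_samples: list of (x, y) points sampled at a fixed rate."""
    fixations, current = [], [gaze_samples[0]]
    for point in gaze_samples[1:]:
        cx, cy = mean(p[0] for p in current), mean(p[1] for p in current)
        if abs(point[0] - cx) < dispersion_threshold and abs(point[1] - cy) < dispersion_threshold:
            current.append(point)
        else:
            if len(current) >= min_duration:
                fixations.append(len(current))
            current = [point]
    if len(current) >= min_duration:
        fixations.append(len(current))
    return {
        "n_fixations": len(fixations),
        "mean_fixation_len": mean(fixations) if fixations else 0.0,
    }

# Experts tend to show fewer, longer fixations on task-relevant regions; a real
# model would feed such features (or raw scanpaths) into a learned classifier.
samples = [(0.50, 0.50)] * 10 + [(0.80, 0.20)] * 6 + [(0.10, 0.90)] * 4
print(fixation_features(samples))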


Conversational Agents: Theory and Applications

arXiv.org Artificial Intelligence

In this chapter, we provide a review of conversational agents (CAs), discussing chatbots, intended for casual conversation with a user, as well as task-oriented agents that generally engage in discussions intended to reach one or several specific goals, often (but not always) within a specific domain. We also consider the concept of embodied conversational agents, briefly reviewing aspects such as character animation and speech processing. The many different approaches for representing dialogue in CAs are discussed in some detail, along with methods for evaluating such agents, emphasizing the important topics of accountability and interpretability. A brief historical overview is given, followed by an extensive overview of various applications, especially in the fields of health and education. We end the chapter by discussing benefits and potential risks regarding the societal impact of current and future CA technology.
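To ground the notion of a task-oriented agent, here is a minimal frame/slot-filling sketch, one classic way of representing dialogue state; the slots, regular expressions, and prompts are invented for illustration and are not taken from the chapter.

# Minimal frame-based task-oriented agent: track which slots the user has
# filled (state tracking) and ask for the first missing one (policy).

import re

SLOTS = {"destination": r"to ([A-Z][a-zA-Z]+)", "date": r"on (\d{4}-\d{2}-\d{2})"}
PROMPTS = {"destination": "Where would you like to travel?",
           "date": "On which date (YYYY-MM-DD)?"}

def update_state(state, utterance):
    # Dialogue state tracking: fill any slot whose pattern matches the user turn.
    for slot, pattern in SLOTS.items():
        match = re.search(pattern, utterance)
        if match and state.get(slot) is None:
            state[slot] = match.group(1).strip()
    return state

def next_agent_turn(state):
    # Dialogue policy: ask for the first missing slot, otherwise complete the task.
    for slot in SLOTS:
        if state.get(slot) is None:
            return PROMPTS[slot]
    return f"Booking a trip to {state['destination']} on {state['date']}."

state = {"destination": None, "date": None}
state = update_state(state, "I want to go to Lisbon")
print(next_agent_turn(state))                       # asks for the date
state = update_state(state, "on 2024-07-01 please")
print(next_agent_turn(state))                       # confirms the booking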