Agents


Yann LeCun's vision for creating autonomous machines

#artificialintelligence

In the midst of the heated debate about AI sentience, conscious machines and artificial general intelligence, Yann LeCun, Chief AI Scientist at Meta, published a blueprint for creating "autonomous machine intelligence." LeCun has compiled his ideas in a paper that draws inspiration from progress in machine learning, robotics, neuroscience and cognitive science. He lays out a roadmap for creating AI that can model and understand the world, reason and plan to do tasks on different timescales. While the paper is not a scholarly document, it provides a very interesting framework for thinking about the different pieces needed to replicate animal and human intelligence. It also shows how the mindset of LeCun, an award-winning pioneer of deep learning, has changed and why he thinks current approaches to AI will not get us to human-level AI.


Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning

#artificialintelligence

We introduce DeepNash, an autonomous agent capable of learning to play the imperfect-information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that Artificial Intelligence (AI) has not yet mastered. This popular game has an enormous game tree on the order of $10^{535}$ nodes, i.e., $10^{175}$ times larger than that of Go. It has the additional complexity of requiring decision-making under imperfect information, similar to Texas hold'em poker, which has a significantly smaller game tree (on the order of $10^{164}$ nodes). Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, often spanning hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized subproblems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play. The Regularised Nash Dynamics (R-NaD) algorithm, a key component of DeepNash, converges to an approximate Nash equilibrium, instead of 'cycling' around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beat existing state-of-the-art AI methods in Stratego and achieved a yearly (2022) and all-time top-3 rank on the Gravon games platform, competing with human expert players.
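
The abstract only names the mechanism, but the core regularization idea behind R-NaD can be illustrated on a toy zero-sum matrix game. The sketch below is an assumption-laden illustration, not DeepMind's implementation: it runs multiplicative-weights self-play on rock-paper-scissors (where unregularized self-play famously cycles forever), penalizes each player for drifting away from a reference policy, and periodically resets that reference to the current iterate, which pulls the last iterate toward the Nash equilibrium instead of around it. All hyperparameters (eta, lr, step counts) are illustrative choices:

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

eta, lr = 0.2, 0.1                  # illustrative regularization strength and step size
x_logits = np.array([2., 0., 0.])   # deliberately biased starting policies
y_logits = np.array([0., 2., 0.])
x_ref, y_ref = softmax(x_logits), softmax(y_logits)

for outer in range(10):             # outer loop: move the reference policy
    for _ in range(2000):           # inner loop: regularized self-play
        x, y = softmax(x_logits), softmax(y_logits)
        # Multiplicative-weights ascent on the regularized payoffs: the
        # log-ratio term penalizes drifting away from the reference policy.
        x_logits += lr * (A @ y - eta * (np.log(x / x_ref) + 1.0))
        y_logits += lr * (-A.T @ x - eta * (np.log(y / y_ref) + 1.0))
    x_ref, y_ref = softmax(x_logits), softmax(y_logits)

print(np.round(softmax(x_logits), 3))  # approaches the Nash equilibrium [1/3, 1/3, 1/3]
```

In DeepNash itself the analogous regularization is applied to the rewards of a deep reinforcement learning agent trained at scale and without search; the matrix-game version only shows why modifying the underlying learning dynamics yields convergence rather than cycling.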


Toward multi-target self-organizing pursuit in a partially observable Markov game

#artificialintelligence

The multiple-target self-organizing pursuit (SOP) problem has wide applications and has been considered a challenging self-organization game for distributed systems, in which intelligent agents cooperatively pursue multiple dynamic targets with partial observations. This work proposes a framework for decentralized multi-agent systems to improve intelligent agents' search and pursuit capabilities. The proposed distributed algorithm, fuzzy self-organizing cooperative coevolution (FSC2), is then leveraged to resolve the three challenges in multi-target SOP: distributed self-organizing search (SOS), distributed task allocation, and distributed single-target pursuit. FSC2 includes a coordinated multi-agent deep reinforcement learning method that enables homogeneous agents to learn natural SOS patterns. Additionally, we propose a fuzzy-based distributed task allocation method, which locally decomposes multi-target SOP into several single-target pursuit problems.
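
The abstract does not spell out the allocation rule, but the decomposition idea can be sketched with a fuzzy-c-means-style membership computed from local distances: each agent grades every target it observes and commits to the one with the highest membership, turning the multi-target problem into independent single-target pursuits. The inverse-distance membership below is a hypothetical illustration, not the paper's exact FSC2 method:

```python
import numpy as np

def fuzzy_allocate(agent_pos, target_pos, m=2.0):
    """Illustrative fuzzy task allocation: each agent gets a membership
    degree for every observed target (fuzzy-c-means-style, based on
    inverse distance), then commits to its highest-membership target,
    locally decomposing multi-target pursuit into single-target problems."""
    # Pairwise agent-target distances, shape (n_agents, n_targets).
    d = np.linalg.norm(agent_pos[:, None, :] - target_pos[None, :, :], axis=-1) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    membership = inv / inv.sum(axis=1, keepdims=True)  # each row sums to 1
    return membership, membership.argmax(axis=1)

agents = np.array([[0., 0.], [1., 5.], [6., 1.], [5., 5.]])
targets = np.array([[1., 1.], [5., 4.]])
mu, assignment = fuzzy_allocate(agents, targets)
print(mu.round(2))   # membership of each agent in each target's pursuit task
print(assignment)    # the single target each agent commits to
```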


Implementing the Particle Swarm Optimization (PSO) Algorithm in Python

#artificialintelligence

There are many definitions of AI. According to the Merriam-Webster dictionary, artificial intelligence is a large area of computer science that simulates intelligent behavior in computers. Based on this, a metaheuristic algorithm called Particle Swarm Optimization (originally proposed to simulate birds searching for food, the movement of shoals of fish, and so on) is able to simulate the behavior of swarms in order to optimize a numeric problem iteratively. It can be classified as a swarm intelligence algorithm, like the Ant Colony Algorithm, the Artificial Bee Colony Algorithm and Bacterial Foraging, for example. Proposed in 1995 by J. Kennedy and R. Eberhart, the paper "Particle Swarm Optimization" became very popular owing to its continuous optimization process, which has given rise to variants for multi-objective optimization and more.
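
Following that description, here is a minimal NumPy sketch of the canonical PSO update: each particle moves under inertia plus random attractions toward its own best-known position and the swarm's best-known position. The inertia weight and acceleration coefficients are conventional defaults (the inertia term was in fact introduced after the 1995 paper), not values prescribed by Kennedy and Eberhart:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal PSO: inertia (w), attraction to each particle's personal
    best (c1), and attraction to the swarm's global best (c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)      # personal best values
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))  # simple test objective
best, best_val = pso(sphere, dim=3)
print(best, best_val)  # near the origin, value near 0
```

On the sphere function the swarm typically collapses to the global minimum within a few hundred iterations; swapping in any other objective f is a one-line change.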


AI Makes Strides in Virtual Worlds More Like Our Own

#artificialintelligence

In 2009, a computer scientist then at Princeton University named Fei-Fei Li created a data set that would change the history of artificial intelligence. Known as ImageNet, the data set included millions of labeled images that could train sophisticated machine-learning models to recognize something in a picture. The machines surpassed human recognition abilities in 2015. Soon after, Li began looking for what she called another of the "North Stars" that would give AI a different push toward true intelligence. She found inspiration by looking back in time over 530 million years to the Cambrian explosion, when numerous animal species appeared for the first time.


The Impact of AI on Healthcare: How to Make the Models Work?

#artificialintelligence

Research into Artificial Intelligence (AI) has been ongoing for decades, with early proposals dating back to 1950. However, only in recent years has it seen a resurgence in popularity, thanks to the increased availability of computing power and the growth of big data and machine learning. AI is the ability of machines to perform tasks that ordinarily require human intelligence, such as understanding natural language and recognizing objects. With the rapid expansion of AI, there are opportunities for businesses and individuals alike to capitalize on its capabilities. More formally, AI is a field of computer science and engineering focused on the creation of intelligent agents: systems that can reason, learn, and act autonomously.


Law Smells - Artificial Intelligence and Law

#artificialintelligence

In modern societies, law is one of the main tools to regulate human activities. These activities are constantly changing, and law co-evolves with them. In the past decades, human activities have become increasingly differentiated and intertwined, e.g., in developments described as globalization or digitization. Consequently, legal rules, too, have grown more complex, and statutes and regulations have increased in volume, interconnectivity, and hierarchical structure (Katz et al. 2020; Coupette et al. 2021a). A similar trend can be observed in software engineering, albeit on a much shorter time scale.
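
To make the software-engineering analogy concrete: in code, recurring problematic patterns are called "code smells," and this work transfers the notion to statutes. A toy detector for one such pattern, in the spirit of the duplicated-code smell, might flag long word n-grams that recur across sections. This is an illustration of the concept only, not necessarily the authors' method:

```python
from collections import defaultdict

def duplicate_phrases(sections, n=8):
    """Toy 'law smell' detector in the spirit of the duplicated-code smell:
    flag word n-grams that occur in more than one statute section."""
    seen = defaultdict(set)
    for name, text in sections.items():
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            seen[" ".join(words[i:i + n])].add(name)
    return {phrase: secs for phrase, secs in seen.items() if len(secs) > 1}

# Hypothetical mini-corpus of two statute sections sharing boilerplate.
sections = {
    "sec. 12(a)": "the operator of a vehicle shall not exceed the posted speed limit on any public road",
    "sec. 14(c)": "the operator of a vehicle shall not exceed the posted speed limit within a school zone",
}
for phrase, secs in duplicate_phrases(sections).items():
    print(sorted(secs), "->", phrase)
```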


3 things large language models need in an era of 'sentient' AI hype

#artificialintelligence

All hell broke loose in the AI world after The Washington Post reported last week that a Google engineer thought that LaMDA, one of the company's large language models (LLMs), was sentient. The news was followed by a frenzy of articles, videos and social media debates over whether current AI systems understand the world as we do, whether AI systems can be conscious, what the requirements for consciousness are, and so on. We are currently in a state where our large language models have become good enough to convince many people -- including engineers -- that they are on par with natural intelligence. At the same time, they are still bad enough to make dumb mistakes, as experiments by computer scientist Ernest Davis show.


The AI containment problem

#artificialintelligence

Elon Musk plans to build his Tesla Bot, Optimus, so that humans "can run away from it and most likely overpower it" should they ever need to. "Hopefully, that doesn't ever happen, but you never know," says Musk. But is this really enough to make an AI safe? The problem of keeping an AI contained, so that it only does the things we want it to, is a deceptively tricky one, writes Roman V. Yampolskiy. With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology. A common theme in Artificial Intelligence (AI) safety research is the possibility of keeping a super-intelligent agent in sealed hardware so as to prevent it from doing any harm to humankind. In this essay we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. We will evaluate the feasibility of the presented proposals and suggest a protocol aimed at enhancing the safety and security of such methodologies.