Toward multi-target self-organizing pursuit in a partially observable Markov game

#artificialintelligence

The multiple-target self-organizing pursuit (SOP) problem has wide applications and has been considered a challenging self-organization game for distributed systems, in which intelligent agents cooperatively pursue multiple dynamic targets with partial observations. This work proposes a framework for decentralized multi-agent systems that improves agents' search and pursuit capabilities. The proposed distributed algorithm, fuzzy self-organizing cooperative coevolution (FSC2), is then leveraged to resolve the three challenges in multi-target SOP: distributed self-organizing search (SOS), distributed task allocation, and distributed single-target pursuit. FSC2 includes a coordinated multi-agent deep reinforcement learning method that enables homogeneous agents to learn natural SOS patterns. Additionally, the authors propose a fuzzy-based distributed task allocation method, which locally decomposes multi-target SOP into several single-target pursuit problems.
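
The excerpt does not spell out FSC2's actual membership computation, so the decomposition idea can only be sketched: each agent scores the targets it can locally observe with fuzzy memberships and commits to the best-scoring one, reducing the multi-target problem to a set of single-target pursuits. A minimal sketch in Python, assuming an inverse-distance, fuzzy-c-means-style membership (the function name, the fuzzifier m, and the weighting are illustrative, not the paper's method):

import numpy as np

def fuzzy_task_allocation(agent_pos, target_positions, m=2.0):
    """Pick one locally observed target for this agent via fuzzy memberships."""
    d = np.linalg.norm(np.asarray(target_positions) - agent_pos, axis=1)
    d = np.maximum(d, 1e-9)  # avoid division by zero at a target's location
    # Fuzzy-c-means-style weights: closer targets receive higher membership.
    inv = d ** (-2.0 / (m - 1.0))
    membership = inv / inv.sum()
    return int(membership.argmax()), membership

agent = np.array([0.0, 0.0])
targets = [(1.0, 1.0), (4.0, 0.0), (0.5, -2.0)]
choice, mu = fuzzy_task_allocation(agent, targets)
print(choice, mu)  # the agent then runs a single-target pursuit toward `choice`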


Amazon digs into ambient and generalizable intelligence at re:MARS

#artificialintelligence

Many, if not most, AI experts maintain that artificial general intelligence (AGI) is still many decades away, if not longer. And the AGI debate has been heating up over the past couple of months. However, according to Amazon, the route to "generalizable intelligence" begins with ambient intelligence. And it says that future is unfurling now.


Humans in the loop help robots find their way: Computer scientists' interactive program aids motion planning for environments with obstacles

#artificialintelligence

Engineers at Rice University have developed a method that allows humans to help robots "see" their environments and carry out tasks. The strategy called Bayesian Learning IN the Dark -- BLIND, for short -- is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time. The peer-reviewed study led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice's George R. Brown School of Engineering was presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation in late May. The algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to "augment robot perception and, importantly, prevent the execution of unsafe motion," according to the study. To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have "high degrees of freedom" -- that is, a lot of moving parts.
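
The paper's Bayesian inverse reinforcement learning machinery is more involved, but the human-in-the-loop idea can be illustrated with a toy model: maintain a Beta-Bernoulli belief about whether each region of the workspace is safe, update it from operator feedback, and have the planner prefer candidate paths through regions the human has judged safe. Everything below (the region names, the Beta(1, 1) prior, the path-scoring rule) is an illustrative assumption, not BLIND's algorithm:

from collections import defaultdict

# Beta(1, 1) prior over "this region is safe to traverse".
counts = defaultdict(lambda: [1, 1])  # region -> [safe votes + 1, unsafe votes + 1]

def record_human_feedback(region, is_safe):
    counts[region][0 if is_safe else 1] += 1

def safety(region):
    a, b = counts[region]
    return a / (a + b)  # posterior mean probability that the region is safe

def best_path(candidate_paths):
    # Prefer the path whose regions look safest under the current belief.
    def score(path):
        p = 1.0
        for region in path:
            p *= safety(region)
        return p
    return max(candidate_paths, key=score)

record_human_feedback("shelf_gap", False)  # operator flags an occluded area
record_human_feedback("open_floor", True)
print(best_path([["open_floor", "dock"], ["shelf_gap", "dock"]]))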



The Role of Symbolic AI and Machine Learning in Robotics

#artificialintelligence

Robotics is a multi-disciplinary field in computer science dedicated to the design and manufacture of robots, with applications in industries such as manufacturing, space exploration and defence. While the field has existed for over 50 years, recent advances such as the Spot and Atlas robots from Boston Dynamics are truly capturing the public's imagination as science fiction becomes reality. Traditionally, robotics has relied on machine learning/deep learning techniques such as object recognition. While this has led to huge advancements, the next frontier in robotics is to enable robots to operate in the real world autonomously, with as little human interaction as possible. Such autonomous robots differ from non-autonomous ones in that they operate in an open world, with undefined rules, uncertain real-world observations, and an environment -- the real world -- that is constantly changing.


Top Posts June 20-26: 20 Basic Linux Commands for Data Science Beginners - KDnuggets

#artificialintelligence

- Decision Tree Algorithm, Explained by Nagesh Singh Chauhan
- 21 Cheat Sheets for Data Science Interviews by Nate Rosidi
- 15 Python Coding Interview Questions You Must Know For Data Science by Nate Rosidi
- Naïve Bayes Algorithm: Everything You Need to Know by Nagesh Singh Chauhan
- 14 Essential Git Commands for Data Scientists by Abid Ali Awan
- Top Programming Languages and Their Uses by Claire D. Costa
- 3 Ways Understanding Bayes Theorem Will Improve Your Data Science by Nicole Janeway Bills
- DBSCAN Clustering Algorithm in Machine Learning by Nagesh Singh Chauhan
- The Complete Collection of Data Science Books – Part 2 by Abid Ali Awan
- 5 Different Ways to Load Data in Python by Ahmad Anis


Implementing the Particle Swarm Optimization (PSO) Algorithm in Python

#artificialintelligence

There are lots of definitions of AI. According to the Merriam-Webster dictionary, Artificial Intelligence is a large area of computer science that simulates intelligent behavior in computers. In that spirit, a metaheuristic algorithm called Particle Swarm Optimization (originally proposed to simulate birds searching for food, the movement of fish schools, etc.) simulates the behavior of swarms in order to optimize a numeric problem iteratively. It can be classified as a swarm intelligence algorithm, like the Ant Colony Algorithm, the Artificial Bee Colony Algorithm and Bacterial Foraging, for example. Proposed in 1995 by J. Kennedy and R. Eberhart, the paper "Particle Swarm Optimization" became very popular owing to its continuous optimization process, which allows variations for multiple targets and more.
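
The article's own code is not reproduced in this excerpt, but a minimal global-best PSO in Python looks like the following; the inertia and acceleration coefficients (w, c1, c2), the sphere objective, and all names are illustrative defaults rather than the article's choices:

import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))             # particle velocities
    pbest = x.copy()                             # personal best positions
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))  # sphere function, minimum at the origin
print(best_x, best_f)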


Three opportunities of Digital Transformation: AI, IoT and Blockchain

#artificialintelligence

Koomey's law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, if it ran at the computational energy efficiency of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications.

Metcalfe's law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows with the square of the number of its nodes (see Figure 1–8).
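
Returning to the MacBook Air example under Koomey's law: doubling every 1.5 years over the roughly 20 years between 1992 and 2012 gives about 13 doublings, i.e. a ~10,000-fold efficiency gain, so a battery that lasts a few hours today would last on the order of a second at 1992 efficiency. A back-of-the-envelope sketch (the 20-year span and the 4-hour battery life are our assumptions, not figures from the text):

# Koomey's law: computational energy efficiency doubles every ~1.5 years.
years = 2012 - 1992
doublings = years / 1.5
efficiency_gain = 2 ** doublings       # roughly 10,000x
modern_runtime_s = 4 * 3600            # assume ~4 hours on a full charge today
print(modern_runtime_s / efficiency_gain)  # ~1.4 seconds at 1992 efficiency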


AI Makes Strides in Virtual Worlds More Like Our Own

#artificialintelligence

In 2009, a computer scientist then at Princeton University named Fei-Fei Li created a data set that would change the history of artificial intelligence. Known as ImageNet, the data set included millions of labeled images that could train sophisticated machine-learning models to recognize objects in a picture. The machines surpassed human recognition abilities in 2015. Soon after, Li began looking for what she called another of the "North Stars" that would give AI a different push toward true intelligence. She found inspiration by looking back in time over 530 million years to the Cambrian explosion, when numerous animal species appeared for the first time.


Pinaki Laskar on LinkedIn: #AI #machinelearning #algorithms

#artificialintelligence

How can a mathematically oriented machine truly learn things? Mathematical machines are either formal logical systems, operationalized as symbolic rule-based AI or expert systems, or statistical learning machines, dubbed narrow/weak AI, ML, DL, or ANNs. Such machines follow blind, mindless mathematical and statistical algorithms, codes, models, programs, and solutions, transforming input data (independent variables) into output data (dependent variables), dubbed predictions, recommendations, decisions, etc. They are incapable of real knowing or learning, as they have no interaction with the world, its various domains, rules, laws, objects, events, or processes. Learning is "acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences" via the senses, experience, trial and error, intuition, study, and research.