Gartner, Inc. identified the top 10 data and analytics (D&A) technology trends for 2020 that can help data and analytics leaders navigate their COVID-19 response and recovery and prepare for a post-pandemic reset. "To innovate their way beyond a post-COVID-19 world, data and analytics leaders require an ever-increasing velocity and scale of analysis in terms of processing and access to succeed in the face of unprecedented market shifts," said Rita Sallam, distinguished research vice president at Gartner. By the end of 2024, 75% of organizations will shift from piloting to operationalizing artificial intelligence (AI), driving a fivefold increase in streaming data and analytics infrastructures. Within the current pandemic context, AI techniques such as machine learning (ML), optimization and natural language processing (NLP) are providing vital insights and predictions about the spread of the virus and the effectiveness and impact of countermeasures. Other smarter AI techniques such as reinforcement learning and distributed learning are creating more adaptable and flexible systems to handle complex business situations; for example, agent-based systems that model and simulate complex systems.
Following up on my previous post discussing the key technologies behind conversational AI solutions, I will dive into the typical challenges an AI engineering team encounters when building a virtual agent or chatbot solution for clients or customers. Let's first define the scope and goal of the conversational application. Conversational agents can be categorized into two main streams. Typical agents for open-domain conversation are Siri, Google Assistant, BlenderBot from Facebook, and Meena from Google. Users can start a conversation without a clear goal, and the topics are unrestricted.
Recently, there has been increasing interest in transparency and interpretability in Deep Reinforcement Learning (DRL) systems. Verbal explanations, as the most natural way of communication in our daily life, deserve more attention, since they allow users to gain a better understanding of the system, which ultimately could lead to a high level of trust and smooth collaboration. This paper reports novel work on generating verbal explanations for the behaviors of a DRL agent. A rule-based model is designed to construct explanations using a series of rules which are predefined with prior knowledge. A learning model is then proposed to extend the implicit logic of generating verbal explanations to general situations by employing rule-based explanations as training data.
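The rule-based model described above can be pictured as a lookup over predefined condition-explanation pairs. The following is a minimal illustrative sketch, not the paper's actual implementation; the `State` fields, rule conditions, and `explain` function are all assumptions made for the example.

```python
# Illustrative sketch of a rule-based explanation model for a DRL agent:
# predefined rules map a (state, action) pair to a verbal explanation.
# All names here are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class State:
    distance_to_goal: float
    obstacle_ahead: bool

# Each rule pairs a condition over (state, action) with an explanation.
RULES = [
    (lambda s, a: a == "turn" and s.obstacle_ahead,
     "I turned because an obstacle blocked the path ahead."),
    (lambda s, a: a == "forward" and s.distance_to_goal > 0,
     "I moved forward to reduce my distance to the goal."),
]

def explain(state, action):
    """Return the first matching rule's explanation, if any."""
    for condition, text in RULES:
        if condition(state, action):
            return text
    return "No predefined rule covers this behavior."

print(explain(State(distance_to_goal=3.0, obstacle_ahead=True), "turn"))
```

In the paper's setup, explanations produced this way then serve as training data for a learning model that generalizes beyond the hand-written rules.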
Swarm-based multi-agent simulation leads to better modeling of tasks in biology, engineering, economics, art, and many other areas. It also facilitates an understanding of complicated phenomena that cannot be solved analytically. Agent-Based Modeling and Simulation with Swarm provides the methodology for a multi-agent-based modeling approach that integrates computational techniques such as artificial life, cellular automata, and bio-inspired optimization. Each chapter gives an overview of the problem, explores state-of-the-art technology in the field, and discusses multi-agent frameworks. The author describes step by step how to assemble algorithms for generating a simulation model, a program, visualization methods, and further research tasks.
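To give a flavor of agent-based simulation, here is a minimal sketch (illustrative only, not from the book): each agent moves a small step toward the swarm's center of mass each tick, producing a simple cohesion behavior of the kind swarm models build on.

```python
# Minimal agent-based simulation sketch: agents repeatedly move a
# fraction of the way toward the swarm centroid (cohesion behavior).
import random

def step(positions, rate=0.1):
    """Move every agent a fraction `rate` of the way toward the centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + rate * (cx - x), y + rate * (cy - y)) for x, y in positions]

random.seed(0)
agents = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(50)]
for _ in range(100):
    agents = step(agents)

# After many steps the swarm has contracted tightly around its centroid.
xs = [x for x, _ in agents]
print(f"final x-spread ≈ {max(xs) - min(xs):.6f}")
```

Even this toy model shows the defining property of agent-based simulation: global behavior (the swarm contracting) emerges from local rules executed independently by each agent.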
Reinforcement learning (RL) is often touted as a promising approach for costly and risk-sensitive applications, yet practicing and learning in those domains directly is expensive. It costs time (e.g., OpenAI's Dota2 project used 10,000 years of experience), it costs money (e.g., "inexpensive" robotic arms used in research typically cost $10,000 to $30,000), and it could even be dangerous to humans. How can an intelligent agent learn to solve tasks in environments in which it cannot practice? For many tasks, such as assistive robotics and self-driving cars, we may have access to a different practice area, which we will call the source domain. While the source domain has different dynamics than the target domain, experience in the source domain is much cheaper to collect.
When the average person thinks about AI and robots, what often comes to mind are post-apocalyptic visions of scary, super-intelligent machines taking over the world, or even the universe. The Terminator movie series is a good reflection of this fear of AI, with the core technology behind the intelligent machines, Skynet, referred to as an "artificial neural network-based conscious group mind and artificial general superintelligence system". However, the AI of today looks nothing like the worrisome science fiction representation. Rather, AI is performing many tedious and manual tasks and providing value across recognition and conversation systems, predictive analytics, pattern matching, and autonomous systems. In that context, the fact that governments and military organizations are investing heavily in AI should be less concerning than it is intriguing. The ways that machine learning and AI are being implemented are both mundane, in that they enable humans to do their existing tasks better, and very interesting, in that machines are being made more intelligent to give humans better understanding and control of the environment around them.
Facebook researchers have developed a general AI framework called Recursive Belief-based Learning (ReBeL) that they say achieves better-than-human performance in heads-up, no-limit Texas hold'em poker while using less domain knowledge than any prior poker AI. They assert that ReBeL is a step toward developing universal techniques for multi-agent interactions -- in other words, general algorithms that can be deployed in large-scale, multi-agent settings. Potential applications run the gamut from auctions, negotiations, and cybersecurity to self-driving cars and trucks. Combining reinforcement learning with search at AI model training and test time has led to a number of advances. Reinforcement learning is where agents learn to achieve goals by maximizing rewards, while search is the process of navigating from a start to a goal state.
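The "search" half of that combination can be illustrated in miniature. The sketch below is not ReBeL's search procedure, just a generic breadth-first search navigating from a start state to a goal state in a small hypothetical graph, showing the basic idea the article refers to.

```python
# Minimal illustration of search as "navigating from a start to a goal
# state": breadth-first search over a small hand-made graph.
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Systems like ReBeL pair a learned value/policy component (the reinforcement learning half) with a search procedure of this general flavor, applied at both training and test time.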
Just as natural evolution has transformed all living creatures throughout history, machines can evolve and behave in much the same way! Contrary to what most people think, AI is not a new technology. However, it has undoubtedly evolved tremendously over the past years with advances in the training of deep artificial neural networks, driven primarily by the increase in the available compute power needed to train such networks to meaningful results. Swarm intelligence (SI), a sub-field of artificial intelligence, is the collective behavior of decentralized, self-organized systems. It does not require as much compute power as Deep Learning, but it can be employed in specific cases as a simple and efficient solution.
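One classic swarm intelligence technique is particle swarm optimization (PSO), in which simple particles share their best-known positions to collectively minimize a function. Below is a compact sketch; the parameter values and function names are illustrative choices, not a reference implementation.

```python
# A minimal particle swarm optimization (PSO) sketch: particles are
# pulled toward their personal best and the swarm's global best.
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.5, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a basic PSO loop."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = min(pbest, key=f)[:]                # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)        # minimum at the origin
best = pso(sphere)
print(best, sphere(best))
```

No individual particle needs global knowledge or heavy computation; the decentralized update rule alone drives the swarm toward the optimum, which is why SI can be so much cheaper than Deep Learning in the right setting.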
In recent years, many sectors have experienced significant progress in automation, associated with the growing advances in artificial intelligence and machine learning. There are already automated robotic weapons, which are able to evaluate and engage targets on their own, and there are already autonomous vehicles that do not need a human driver. It is argued that the use of increasingly autonomous systems (AS) should be guided by the policy of human control, according to which humans should exercise a significant level of judgment over AS. While in the military sector there is a fear that AS could mean that humans lose control over life and death decisions, in the transportation domain, on the contrary, there is a strongly held view that autonomy could bring significant operational benefits by removing the need for a human driver. This article explores the notion of human control in the United States in the two domains of defense and transportation.