Learning Compositional Neural Programs for Continuous Control

#artificialintelligence

We propose a novel solution to challenging sparse-reward, continuous control problems that require hierarchical planning at multiple levels of abstraction. Our solution, dubbed AlphaNPI-X, involves three separate stages of learning. First, we use off-policy reinforcement learning algorithms with experience replay to learn a set of atomic goal-conditioned policies, which can be easily repurposed for many tasks. Second, we learn self-models describing the effect of the atomic policies on the environment. Third, the self-models are harnessed to learn recursive compositional programs with multiple levels of abstraction. The key insight is that the self-models enable planning by imagination, obviating the need for interaction with the world when learning higher-level compositional programs. To accomplish the third stage of learning, we extend the AlphaNPI algorithm, which applies AlphaZero to learn recursive neural programmer-interpreters. We empirically show that AlphaNPI-X can effectively learn to tackle challenging sparse manipulation tasks, such as stacking multiple blocks, where powerful model-free baselines fail.
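The three learning stages above map naturally onto a small amount of code. The sketch below is only a toy illustration of the idea, not the authors' implementation: all class and function names are assumptions, the "self-model" is stubbed rather than learned, and the AlphaZero-style tree search over programs is replaced by a brute-force comparison of candidate subgoal sequences.

```python
# Toy sketch of the AlphaNPI-X idea (illustrative only, not the authors' code):
# atomic goal-conditioned policies, a self-model predicting their effects,
# and planning "by imagination" that never touches the real environment.
import numpy as np

class GoalConditionedPolicy:
    """Stage 1: an atomic skill pi(a | s, g), stubbed as a bounded step toward g.
    It would only be executed in the real environment once a program is chosen."""
    def __call__(self, state, goal):
        return np.clip(goal - state, -1.0, 1.0)

class SelfModel:
    """Stage 2: predicts the state reached after running an atomic skill to completion.
    A real self-model is a learned regressor f(s, g) -> s'; here we simply assume
    the skill roughly reaches its goal."""
    def predict(self, state, goal):
        return goal + 0.01 * np.random.randn(*np.shape(goal))

def plan_in_imagination(self_model, state, candidate_programs, target):
    """Stage 3 (simplified): score sequences of subgoals using only the self-model,
    i.e. with no environment interaction, and return the most promising one."""
    best_program, best_dist = None, np.inf
    for program in candidate_programs:
        s = state
        for subgoal in program:                # imagined rollout
            s = self_model.predict(s, subgoal)
        dist = np.linalg.norm(s - target)      # how close imagination gets to the task goal
        if dist < best_dist:
            best_program, best_dist = program, dist
    return best_program

# Minimal usage with made-up 3-D states and goals.
start, target = np.zeros(3), np.array([1.0, 1.0, 0.0])
candidates = [[np.array([1.0, 0.0, 0.0]), target], [target]]
print(plan_in_imagination(SelfModel(), start, candidates, target))
```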


Harvard University offers free online courses for programmers

#artificialintelligence

Learners will find out how to engage with a vibrant community of like-minded learners and how to develop and present a final programming project. The entry-level course is taught by David J. Malan, and computer science learners study the following programming languages through this course: C, PHP, and JavaScript, plus SQL, CSS, and HTML. What makes the course more interesting is that the problem sets included in the curriculum are inspired by real-world domains such as biology, cryptography, finance, forensics, and gaming. Mentioned below are important details of the course.


Toward a machine learning model that can reason about everyday actions

#artificialintelligence

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending. Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them. Their model did as well as or better than humans at two types of visual reasoning tasks--picking the video that conceptually best completes the set, and picking the video that doesn't fit.
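As a rough intuition for the second task (spotting the video that doesn't fit), one can think of comparing learned video embeddings and flagging the clip least similar to the rest. The snippet below is only a toy illustration under that assumption, with random vectors standing in for the outputs of the paper's hybrid language-vision encoder; it is not the researchers' model.

```python
# Toy "odd one out" selection over hypothetical video embeddings:
# pick the clip whose average cosine similarity to the others is lowest.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def odd_one_out(embeddings):
    n = len(embeddings)
    avg_sim = [
        np.mean([cosine(embeddings[i], embeddings[j]) for j in range(n) if j != i])
        for i in range(n)
    ]
    return int(np.argmin(avg_sim))

videos = [np.random.randn(128) for _ in range(5)]  # stand-ins for encoder outputs
print(odd_one_out(videos))  # index of the clip that fits the set least well
```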


Decentralized reinforcement learning: global decision-making via local economic transactions

AIHub

Many neural network architectures that underlie various artificial intelligence systems today bear an interesting similarity to the early computers of a century ago. Just as early computers were specialized circuits for specific purposes like solving linear systems or cryptanalysis, so too does a trained neural network generally function as a specialized circuit for performing a specific task, with all parameters coupled together in the same global scope. One might naturally wonder what it would take for learning systems to scale in complexity in the same way that programmed systems have. And if the history of how abstraction enabled computer science to scale gives any indication, one possible place to start would be to consider what it means to build complex learning systems at multiple levels of abstraction, where each level of learning is the emergent consequence of learning from the layer below. This post discusses our recent paper, which introduces a framework for societal decision-making: a perspective on reinforcement learning through the lens of a self-organizing society of primitive agents.


What is TensorFlow? The machine learning library explained

#artificialintelligence

Machine learning is a complex discipline. But implementing machine learning models is far less daunting and difficult than it used to be, thanks to machine learning frameworks--such as Google's TensorFlow--that ease the process of acquiring data, training models, serving predictions, and refining future results. Created by the Google Brain team, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning (aka neural networking) models and algorithms and makes them useful by way of a common metaphor. It uses Python to provide a convenient front-end API for building applications with the framework, while executing those applications in high-performance C++.
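As a minimal illustration of that division of labour (purely a toy example, not taken from the article), the snippet below builds and trains a tiny linear model through the Python tf.keras front end, while the numerical work is dispatched to TensorFlow's optimized backend.

```python
# Fit y = 2x + 1 with a single linear unit using TensorFlow's Python API.
import numpy as np
import tensorflow as tf

x = np.linspace(-1.0, 1.0, 100).reshape(-1, 1).astype("float32")
y = 2.0 * x + 1.0                        # toy regression target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),            # one weight and one bias to learn
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=200, verbose=0)   # heavy lifting happens in the C++ runtime

print(model.predict(np.array([[3.0]], dtype="float32")))  # should be close to 7.0
```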


How neural network training methods are modeled after the human brain

#artificialintelligence

Much of what makes us human is the power of our brain and cognitive abilities. The human brain is a somewhat miraculous organ that gives humans the power to communicate, imagine, plan and write. However, the brain is a mystery; we don't know quite how it works. The mechanisms of cognition and consciousness have long perplexed scientists, researchers, philosophers and thinkers. When AI started to gain popularity decades ago, there was debate about how to make a machine "learn," since developers still had little idea how humans learned.


Top 8 Machine Learning Libraries In Kotlin One Must Know

#artificialintelligence

According to the Stack Overflow Developer Survey report, Kotlin is one of the most loved programming languages among professional developers, securing 4th position among 25 programming languages. As per the official documentation, Kotlin is positioned as a preferred choice for building data pipelines and productionising machine learning models, among other uses. In this article, we list down the top 8 machine learning libraries in Kotlin. Kotlin Statistics is a library that aims to express meaningful statistical and data analysis through functional and object-oriented programming while keeping the code legible and intuitive.


Decentralized Reinforcement Learning

#artificialintelligence

Many associations in the world, such as biological ecosystems, governments, and corporations, are physically decentralized, yet they are unified in the sense of their functionality. For instance, a financial institution operates with a global policy of maximizing its profits and hence appears as a single entity; however, this entity abstraction is an illusion, as a financial institution is composed of a group of individual human agents solving their own optimization problems, with or without collaboration. In the standard, centralized setup, a single policy function's parameters are fine-tuned using the gradients of a defined objective function. This approach is called the monolithic decision-making framework, as the policy function's learning parameters are coupled globally through a single objective function. Having covered this brief background on the centralized reinforcement learning framework, let us move on to some promising decentralized reinforcement learning frameworks.
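To make the monolithic setup concrete, below is a minimal REINFORCE-style sketch (an illustration in the spirit of the post, not code from it): a single parameter vector defines the entire policy, and every parameter is updated from the gradient of one global objective, the expected reward, on a toy two-armed bandit.

```python
# Monolithic decision-making in miniature: one global parameter vector,
# one global objective (expected reward), updated by the policy gradient.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                     # logits over two actions, globally coupled
true_reward = np.array([0.2, 0.8])      # arm 1 pays off more often
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)                    # sample an action from the policy
    r = float(rng.random() < true_reward[a])      # Bernoulli reward from the bandit
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0                         # d/dtheta log pi(a | theta)
    theta += lr * r * grad_log_pi                 # ascend the single global objective

print(softmax(theta))  # probability mass concentrates on the better arm
```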


AIpoint Blogpost

#artificialintelligence

Machine learning frameworks such as Google's TensorFlow ease the process of acquiring data, training models, serving predictions, and refining future results. TensorFlow bundles together machine learning and deep learning models and algorithms and makes them useful by way of a common metaphor. Google uses machine learning across its products to improve search, translation, image captioning, and recommendations. To give a concrete example: when we type a keyword into the Google search bar, Google suggests what we might want to search for next.