

GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
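The prompting tactic described above, seeding the first words of the target output to constrain the completion, can be sketched as a prompt-construction helper. The exact wording and the passage are illustrative; `build_summarization_prompt` stands in for however one assembles text before sending it to a completion API:

```python
def build_summarization_prompt(passage: str) -> str:
    """Frame the task, then seed the opening words of the desired output."""
    return (
        "My second grader asked me what this passage means:\n\n"
        f'"{passage}"\n\n'
        "I rephrased it for him, in plain language a second grader "
        "can understand:\n\n"
        '"The passage says that'  # seed words steer the model's completion
    )

prompt = build_summarization_prompt(
    "Photosynthesis converts light energy into chemical energy."
)
# the prompt now ends mid-sentence, so the model must continue the summary
```

Because the prompt ends mid-sentence in the target voice, a completion model is far more likely to continue in summarization mode rather than pivot into some other kind of text.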


An introduction to Machine Learning with Brain.js

#artificialintelligence

In this post, we'll look at some machine learning concepts and learn more about Brain.js. We will discuss some aspects of how neural networks work and learn terms like forward and backward propagation, along with other terms used in the machine learning community. Then we will leverage the power of Brain.js to build a day-to-day meeting scheduling application using a convolutional neural network. Brain.js is a fantastic way to build a neural network: it learns the patterns and relationships between inputs and outputs in order to make a somewhat educated guess when dealing with related problems. One example of a neural network in production is Cloudinary's image recognition add-on system.
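The terms the post promises to cover, forward and backward propagation, can be illustrated without any library at all. This is a toy sketch in plain Python (the weights and learning rate are made up, and it is not the Brain.js API): a forward pass through a tiny 2-2-1 network, followed by one hand-derived backward-propagation step on the output weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 2 hidden units -> 1 output (weights are arbitrary).
w_hidden = [[0.5, -0.4], [0.3, 0.8]]  # one weight row per hidden unit
w_out = [0.7, -0.2]

def forward(inputs):
    """Forward propagation: inputs flow through hidden layer to the output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
    return hidden, output

hidden, prediction = forward([1.0, 0.0])

# Backward propagation (one step, output weights only, squared-error loss):
# the gradient of the error is pushed back through the sigmoid to the weights.
target = 1.0
error = prediction - target
grad_out = [error * prediction * (1 - prediction) * h for h in hidden]
lr = 0.5
w_out = [w - lr * g for w, g in zip(w_out, grad_out)]

_, new_prediction = forward([1.0, 0.0])
# after the update, the prediction has moved closer to the target
```

Brain.js hides exactly this loop behind its `train` method, repeating forward and backward passes over all layers until the error is small enough.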


Assessing Injury Risk With Zone7's Deep Learning

#artificialintelligence

Zone7 bases its analysis on more than five million hours of performance data. While it has started pilot programs in MLB and the NHL, its focus is on global soccer, with about three dozen clients spanning the Bundesliga, Serie A, Ligue 1 and the English Football League Championship, the division directly below the Premier League. Its most high-profile success (that it is able to disclose) has been Getafe CF, which is currently in fifth place in Spain's La Liga despite a team wage bill in the league's bottom half. By some measures, the club has reduced injuries by 65% with Zone7.


Learning to Play Soccer by Reinforcement and Applying Sim-to-Real to Compete in the Real World

arXiv.org Artificial Intelligence

This work presents an application of Reinforcement Learning (RL) for the complete control of real soccer robots in the IEEE Very Small Size Soccer (VSSS) [1], a traditional league in the Latin American Robotics Competition (LARC). In the VSSS league, two teams of three small robots play against each other. We propose a simulated environment in which continuous or discrete control policies can be trained, and a Sim-to-Real method that allows the obtained policies to control a robot in the real world. The results show that the learned policies display a broad repertoire of behaviors that are difficult to specify by hand. This approach, called VSSS-RL, was able to beat the human-designed policy for the striker of the team that placed 3rd in the 2018 LARC, in 1-vs-1 matches.


Reinforcement-learning AIs are vulnerable to a new kind of attack

#artificialintelligence

The soccer bot lines up to take a shot at the goal. But instead of getting ready to block it, the goalkeeper drops to the ground and wiggles its legs. Confused, the striker does a weird little sideways dance, stamping its feet and waving one arm, and then falls over. It's not a tactic you'll see used by the pros, but it shows that an artificial intelligence trained via deep reinforcement learning--the technique behind cutting-edge game-playing AIs like AlphaZero and the OpenAI Five--is more vulnerable to attack than previously thought. And that could have serious consequences.


Training Question Answering Models From Synthetic Data

arXiv.org Artificial Intelligence

Question and answer generation is a data augmentation method that aims to improve question answering (QA) models given the limited amount of human-labeled data. However, a considerable gap remains between synthetic and human-generated question-answer pairs. This work aims to narrow this gap by taking advantage of large language models and explores several factors such as model size, quality of pretrained models, scale of data synthesized, and algorithmic choices. On the SQuAD1.1 question answering task, we achieve higher accuracy using solely synthetic questions and answers than when using the SQuAD1.1 training set questions alone. Removing access to real Wikipedia data, we synthesize questions and answers from a synthetic corpus generated by an 8.3 billion parameter GPT-2 model. With no access to human supervision and only access to other models, we are able to train state-of-the-art question answering networks on entirely model-generated data that achieve 88.4 Exact Match (EM) and 93.9 F1 score on the SQuAD1.1 dev set. We further apply our methodology to SQuAD2.0 and show a 2.8 absolute gain on EM score compared to prior work using synthetic data.
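The Exact Match (EM) metric quoted above is simple enough to sketch. The following mirrors the normalization used by SQuAD-style evaluation (lowercasing, stripping punctuation and the articles a/an/the, collapsing whitespace), though this is a sketch rather than the official evaluation script:

```python
import re
import string

def normalize(text):
    """SQuAD-style answer normalization: lowercase, drop punctuation,
    drop the articles a/an/the, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """EM is 1.0 if the normalized prediction equals any normalized gold answer."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

score = exact_match("The Eiffel Tower", ["Eiffel Tower"])
# "The Eiffel Tower" and "Eiffel Tower" normalize to the same string
```

F1 is computed analogously, but over the overlap of normalized answer tokens rather than whole-string equality, which is why the paper's F1 (93.9) is higher than its EM (88.4).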


How Machine Learning Will Lead to Better Maps

#artificialintelligence

Despite being one of the richest countries in the world, in Qatar, digital maps are lagging behind. While the country is adding new roads and constantly improving old ones in preparation for the 2022 FIFA World Cup, Qatar isn't a high priority for the companies that actually build out maps, like Google. "While visiting Qatar, we've had experiences where our Uber driver can't figure out how to get where he's going, because the map is so off," Sam Madden, a professor at MIT's Department of Electrical Engineering and Computer Science, said in a prepared statement. "If navigation apps don't have the right information, for things such as lane merging, this could be frustrating or worse." It's faster, cheaper, and way easier to obtain satellite images than it is for a tech company to drive around grabbing street-view photos.


Deep Reinforcement Learning for Complex Manipulation Tasks with Sparse Feedback

arXiv.org Machine Learning

Learning optimal policies from sparse feedback is a known challenge in reinforcement learning. Hindsight Experience Replay (HER) is a multi-goal reinforcement learning algorithm designed to solve such tasks. The algorithm treats every failure as a success for an alternative (virtual) goal that has been achieved in the episode and then generalizes from that virtual goal to real goals. HER has known flaws and is limited to relatively simple tasks. In this thesis, we present three algorithms based on the existing HER algorithm that improve its performance. First, we prioritize virtual goals from which the agent will learn more valuable information. We call this property the \textit{instructiveness} of the virtual goal and define it by a heuristic measure, which expresses how well the agent will be able to generalize from that virtual goal to actual goals. Second, we design a filtering process that detects and removes misleading samples that may induce bias throughout the learning process. Last, we enable the learning of complex, sequential tasks using a form of curriculum learning combined with HER. We call this algorithm \textit{Curriculum HER}. To test our algorithms, we built three challenging manipulation environments with sparse reward functions. Each environment has three levels of complexity. Our empirical results show vast improvement in the final success rate and sample efficiency when compared to the original HER algorithm.
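The core HER idea the abstract describes, relabeling every failure as a success for a goal that was actually achieved, can be sketched in a few lines. The trajectory layout, names, and sparse reward below are illustrative assumptions (roughly the "future" relabeling strategy), not the thesis's implementation:

```python
import random

def relabel_with_hindsight(episode, k=4):
    """For each step, also store k transitions whose goal is replaced by a
    goal actually achieved at or after that step in the episode.

    Each step is assumed to be a (state, action, achieved_goal) tuple.
    """
    replay = []
    for t, (state, action, achieved) in enumerate(episode):
        for _ in range(k):
            future = random.choice(episode[t:])
            virtual_goal = future[2]  # a goal the agent really reached
            # sparse reward: 0 on success, -1 otherwise
            reward = 0.0 if achieved == virtual_goal else -1.0
            replay.append((state, action, virtual_goal, reward))
    return replay

episode = [("s0", "a0", "g_a"), ("s1", "a1", "g_b"), ("s2", "a2", "g_c")]
buffer = relabel_with_hindsight(episode, k=2)
# every relabeled transition targets a goal that was achieved in the episode,
# so even a "failed" episode yields reward signal to learn from
```

The thesis's contributions then slot into this loop: prioritizing which virtual goals to sample (instructiveness), filtering misleading relabeled transitions, and ordering tasks via curriculum learning.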


Improved Structural Discovery and Representation Learning of Multi-Agent Data

arXiv.org Machine Learning

Central to all machine learning algorithms is data representation. For multi-agent systems, selecting a representation which adequately captures the interactions among agents is challenging due to the latent group structure, which tends to vary depending on context. However, in multi-agent systems with strong group structure, we can simultaneously learn this structure and map a set of agents to a consistently ordered representation for further learning. In this paper, we present a dynamic alignment method which provides a robust ordering of structured multi-agent data, enabling representation learning to occur in a fraction of the time of previous methods. We demonstrate the value of this approach using a large amount of soccer tracking data from a professional league. The natural representation for many sources of unstructured data is intuitive to us as humans: for images, a 2D pixel representation; for speech, a spectrogram or linear filter-bank features; and for text, letters and characters. All of these possess fixed, rigid structure in space, time, or sequential ordering which is immediately amenable to further learning. For other unstructured data sources such as point clouds, semantic graphs, and multi-agent trajectories, such an initial ordered structure does not naturally exist. These data sources are set- or graph-like in nature and therefore their natural representation is unordered, posing a significant challenge for many machine-learning techniques.


Generative adversarial networks: What GANs are and how they've evolved

#artificialintelligence

Perhaps you've read about AI capable of producing humanlike speech or generating images of people that are difficult to distinguish from real-life photographs. More often than not, these systems build upon generative adversarial networks (GANs), which are two-part AI models consisting of a generator that creates samples and a discriminator that attempts to differentiate between the generated samples and real-world samples. This unique arrangement enables GANs to achieve impressive feats of media synthesis, from composing melodies and swapping sheep for giraffes to hallucinating footage of ice skaters and soccer players. In point of fact, it's because of this prowess that GANs have been used to produce problematic content like deepfakes, media in which a person is replaced with someone else's likeness. The evolution of GANs -- which Facebook AI research director Yann LeCun has called the most interesting idea of the decade -- is somewhat long and winding, and very much continues to this day.
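The generator-versus-discriminator arrangement described above can be shown in miniature. This is a deliberately tiny, pure-Python sketch under heavy simplifying assumptions: 1-D "data" centered at 4.0, a one-parameter generator, a logistic discriminator, and hand-derived gradients. Real GANs use deep networks and an autodiff framework, but the adversarial loop is the same shape:

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Real data: samples around 4.0. The generator learns a scalar shift theta;
# the discriminator is a 1-D logistic classifier D(x) = sigmoid(w*x + b).
theta, w, b = 0.0, 0.0, 0.0
lr = 0.02

for step in range(4000):
    x_real = 4.0 + random.gauss(0, 0.5)
    x_fake = theta + random.gauss(0, 0.5)  # generator output

    # Discriminator step: push D(x_real) toward 1 and D(x_fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): push D(x_fake) toward 1,
    # i.e. make fakes that the discriminator mistakes for real data.
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w

# theta drifts from 0 toward the real-data mean of 4.0 as the two
# players push against each other
```

The two alternating updates are the whole trick: the discriminator's improving judgment is exactly the gradient signal that drags the generator's samples toward the real distribution.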