If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
One of the biggest trends in AI recently has been the creation of machine learning models that can generate the written word with unprecedented fluidity. These programs are game-changers, potentially supercharging computers' ability to parse and produce language. But something that's gone largely unnoticed is a secondary trend -- a shadow to the first: a surprising number of these tools are named after Muppets. To date, this new breed of language AIs includes an ELMo, a BERT, a Grover, a Big BIRD, a Rosita, a RoBERTa, at least two ERNIEs (three if you include ERNIE 2.0), and a KERMIT. Big tech players like Google, Facebook, and the Allen Institute for AI are all involved, and the craze has global reach, with Chinese search giant Baidu and Beijing's Tsinghua University contributing models.
The advent of generative adversarial networks (GANs) has led to increased popularity and adoption of artificial intelligence in the art world. Researchers have been trying for years to endow AI with artistic skills, and there have been many interesting developments along the way. Artists such as Mario Klingemann, Anna Ridler and many others have been at the forefront of this new-age GAN-powered art. AI is not only creating breathtaking artwork; that artwork is also selling at auction for hefty sums. For instance, Canadian-Mexican artist Rafael Lozano-Hemmer has already made around $600,000 from an AI artwork.
Some AI systems achieve goals in challenging environments by drawing on representations of the world informed by past experiences. They generalize these to novel situations, enabling them to complete tasks even in settings they haven't encountered before. As it turns out, reinforcement learning -- a training technique that employs rewards to drive software policies toward goals -- is particularly well-suited to learning world models that summarize an agent's experience, and by extension to facilitating the learning of novel behaviors. Researchers hailing from Google, Alphabet subsidiary DeepMind, and the University of Toronto sought to exploit this with an agent -- Dreamer -- designed to internalize a world model and plan ahead to select actions by "imagining" their long-term outcomes. They say that Dreamer not only works for any learning objective, but also exceeds existing approaches in data efficiency, computation time, and final performance.
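The idea of acting by "imagining" outcomes under a learned world model can be sketched in a toy form. This is illustrative only: Dreamer itself learns a latent dynamics model with neural networks and backpropagates value gradients through imagined trajectories, whereas the sketch below uses a hypothetical 1-D environment, a linear model fit by least squares, and a random-shooting planner to show the same three-stage loop (collect experience, learn a model, plan in imagination).

```python
import numpy as np

rng = np.random.default_rng(0)

def env_step(s, a):
    """Hypothetical 1-D environment: the state moves by the action plus noise."""
    return s + a + rng.normal(0.0, 0.01)

def reward(s):
    """Reward peaks at the (assumed) goal state 5.0."""
    return -(s - 5.0) ** 2

# 1) Gather experience with random actions.
transitions = []
s = 0.0
for _ in range(200):
    a = rng.uniform(-2.0, 2.0)
    s_next = env_step(s, a)
    transitions.append((s, a, s_next))
    s = s_next

# 2) Fit a world model s' ~ w0*s + w1*a by least squares.
X = np.array([(s, a) for s, a, _ in transitions])
y = np.array([s_next for _, _, s_next in transitions])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def imagine(s, actions):
    """Roll the learned model forward -- no real environment calls --
    and return the total imagined reward of the action sequence."""
    total = 0.0
    for a in actions:
        s = w[0] * s + w[1] * a  # imagined transition
        total += reward(s)
    return total

# 3) Plan by imagination: score candidate action sequences, act on the best.
candidates = [rng.uniform(-2.0, 2.0, size=8) for _ in range(500)]
best = max(candidates, key=lambda acts: imagine(0.0, acts))
```

The key property the sketch shares with the real agent is that planning touches only the learned model, never the environment, which is where the data-efficiency gains come from.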
The rise of AI has made it possible for automated visual inspection systems to identify anomalies in manufactured products with high accuracy. If implemented successfully, these systems can greatly improve quality control and optimize costs. Although many manufacturers are trying to implement such systems in their workflows, very few have managed to reach full-scale production. The disconnect occurs because proof-of-concept solutions are put together in a controlled setting, largely by trial and error. When pushed into the real world, with constraints like variable environmental conditions, real-time requirements, and integration with existing workflows, these proofs of concept often break down.
Deep networks can potentially express a learning problem more efficiently than local learning machines. While deep networks outperform local learning machines on some problems, it is still unclear how their effective representations emerge from their complex structure. We present an analysis based on Gaussian kernels that measures how the representation of the learning problem evolves layer after layer as the deep network builds higher-level abstract representations of the input. We use this analysis to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are obtained when the deep network discriminates only in the last layers. Published at the Neural Information Processing Systems Conference (NeurIPS).
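The measurement at the heart of this analysis can be sketched concretely: build a Gaussian (RBF) kernel on a layer's activations and score how well it matches the labels, e.g. via kernel-target alignment. The example below is a minimal stand-in, not the paper's experiments: instead of a trained network it uses a hand-built "layer" (the radius of each point on two concentric rings) to show how the kernel score improves when a representation untangles the classes.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, gamma=1.0):
    """Pairwise Gaussian kernel K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def alignment(K, y):
    """Kernel-target alignment: cosine similarity between K and y y^T."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Two concentric rings with labels +1 / -1: not linearly separable as given.
n = 200
theta = rng.uniform(0.0, 2.0 * np.pi, n)
r = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + rng.normal(0, 0.05, (n, 2))
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)

# A hand-built "layer": map each point to its radius, mimicking the kind of
# abstraction a trained deep layer might learn for this problem.
H = np.linalg.norm(X, axis=1, keepdims=True)

a_in = alignment(rbf_kernel(X), y)   # alignment at the input
a_h = alignment(rbf_kernel(H), y)    # alignment after the "layer"
```

Applied layer by layer to a real trained network, the same score traces how the representation of the learning problem evolves with depth.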
The study of object representations in computer vision has primarily focused on developing representations that are useful for image classification, object detection, or semantic segmentation as downstream tasks. In this work we aim to learn object representations that are useful for control and reinforcement learning (RL). To this end, we introduce Transporter, a neural network architecture for discovering concise geometric object representations in terms of keypoints or image-space coordinates. Our method learns from raw video frames in a fully unsupervised manner, by transporting learnt image features between video frames using a keypoint bottleneck. The discovered keypoints track objects and object parts across long time-horizons more accurately than recent similar methods.
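The core of a keypoint bottleneck is a soft-argmax over a feature map: each map is collapsed to an expected image-space coordinate, and a keypoint can be rendered back into a heatmap for transporting features. The sketch below shows just that extraction-and-rendering pair on a plain NumPy array; the heatmap and sizes are made up for illustration, and the real architecture learns its feature maps from raw video frames.

```python
import numpy as np

def spatial_softmax_keypoint(heatmap):
    """Soft-argmax: the expected (row, col) under a softmax over locations."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()
    ky = (p.sum(axis=1) * np.arange(h)).sum()  # expected row
    kx = (p.sum(axis=0) * np.arange(w)).sum()  # expected column
    return ky, kx

def gaussian_map(h, w, ky, kx, sigma=1.5):
    """Render a keypoint back into an image-space Gaussian heatmap, the
    form a keypoint bottleneck uses to place transported features."""
    yy, xx = np.mgrid[0:h, 0:w]
    return np.exp(-((yy - ky) ** 2 + (xx - kx) ** 2) / (2.0 * sigma ** 2))

# Hypothetical feature map with one strong activation blob at (10, 20).
hm = 10.0 * gaussian_map(32, 32, 10.0, 20.0)
ky, kx = spatial_softmax_keypoint(hm)
```

Because the expectation is differentiable, gradients flow through the coordinates, which is what lets such keypoints be discovered without supervision.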
We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. We enable a neural network to group the representations of different objects in an iterative manner through a differentiable mechanism. We achieve very fast convergence by allowing the system to amortize the joint iterative inference of the groupings and their representations. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities.
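The flavor of iterative, differentiable grouping can be conveyed with a deliberately simplified analogue: soft k-means on 1-D data, where every update (soft assignment, then group-representation refresh) is a smooth function of its inputs. This is not the paper's neural architecture -- the group representations here are scalar means rather than learned embeddings -- but the alternation between inferring groupings and refining their representations is the same loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two well-separated 1-D clusters (stand-ins for two "objects").
x = np.r_[rng.normal(-4.0, 0.3, 50), rng.normal(4.0, 0.3, 50)]

mu = np.array([-1.0, 1.0])          # initial group representations
for _ in range(10):                  # iterative, fully differentiable updates
    # Soft grouping step: responsibilities from squared distances.
    logits = -(x[:, None] - mu[None, :]) ** 2
    resp = np.exp(logits - logits.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # Representation step: update each group from its softly assigned points.
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
```

Amortizing this inference with a neural network, as the framework does, means learning to produce good groupings in a few such iterations rather than running clustering from scratch per input.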
We introduce deep neural networks for end-to-end differentiable theorem proving that operate on dense vector representations of symbols. These neural networks are recursively constructed by following the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. The resulting neural network can be trained to infer facts from a given incomplete knowledge base using gradient descent. By doing so, it learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove facts, (iii) induce logical rules, and (iv) use provided and induced logical rules for complex multi-hop reasoning.
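The differentiable replacement for unification is easy to state in code: instead of a hard match/no-match between two symbols, compute a soft score from their embeddings with an RBF kernel, so that "unification" succeeds to a degree and gradients can flow. The predicate names and embedding values below are invented for illustration; in the trained system the embeddings are learned.

```python
import numpy as np

def rbf_similarity(u, v, mu=1.0):
    """Soft unification score: 1.0 when embeddings coincide,
    decaying toward 0.0 as they move apart (RBF kernel)."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * mu ** 2))

# Hypothetical symbol embeddings (learned by gradient descent in practice).
emb = {
    "grandfatherOf": np.array([1.0, 0.2]),
    "grandpaOf":     np.array([0.9, 0.3]),
    "locatedIn":     np.array([-1.0, -0.8]),
}

# Two predicates that should behave alike get a high unification score;
# an unrelated one scores near zero.
s_close = rbf_similarity(emb["grandfatherOf"], emb["grandpaOf"])
s_far = rbf_similarity(emb["grandfatherOf"], emb["locatedIn"])
```

During backward chaining, these soft scores are combined (e.g. by min/max aggregation over proof paths) into an overall proof success, which is what training maximizes for known facts.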
From February 7 through March 17, 2018, Pilevneli Gallery presented Refik Anadol's latest project on the materiality of remembering. Melting Memories offered new insights into the representational possibilities emerging from the intersection of advanced technology and contemporary art. By showcasing several interdisciplinary projects that translate the elusive process of memory retrieval into data collections, the exhibition immersed visitors in Anadol's creative vision of "recollection." "Science states meanings; art expresses them," writes American philosopher John Dewey, drawing a curious distinction between what he sees as the principal modes of communication in the two disciplines. In Melting Memories, Refik Anadol's expressive statements provide the viewer with revealing and contemplative artworks that invite responses to Dewey's thesis.
Biological movement is built up of sub-blocks, or motion primitives. Such primitives provide a compact representation of movement, which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of the use of primitives and their timings in biological movements. The shape and timing of primitives can be inferred with a factorial-HMM-based model, allowing the handwriting to be represented in primitive-timing space. This representation yields a distribution of spikes corresponding to primitive activations, which can itself be modelled using HMM architectures.
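The generative view behind this model -- a trajectory as a superposition of time-shifted primitives, summarized by a sparse set of activation "spikes" -- can be sketched directly. The primitive shape (a single Gaussian velocity bump) and the onset times below are invented for illustration; the factorial HMM's job is to infer such shapes and timings from real handwriting data rather than being given them.

```python
import numpy as np

# One hypothetical motion primitive: a short Gaussian velocity bump.
primitive = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)

def render(onsets, T=100):
    """Superimpose copies of the primitive at the given onset times
    (the 'spikes' in primitive-timing space) to form a trajectory."""
    traj = np.zeros(T)
    for t in onsets:
        end = min(T, t + len(primitive))
        traj[t:end] += primitive[:end - t]
    return traj

# Three activations fully determine a 100-sample trajectory:
traj = render([5, 40, 70])
```

The compactness the abstract mentions is visible here: three onset times plus one shared shape reproduce the whole trajectory, and it is the sparse spike pattern that can in turn be modelled with HMM architectures.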