If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In this paper, we revisit the recurrent back-propagation (RBP) algorithm, discussing the conditions under which it applies and how to satisfy them in deep neural networks. We show that RBP can be unstable and propose two variants based on conjugate gradient on the normal equations (CG-RBP) and the Neumann series (Neumann-RBP). We further investigate the relationship between Neumann-RBP and back-propagation through time (BPTT) and its truncated version (TBPTT). Our Neumann-RBP has the same time complexity as TBPTT but requires only constant memory, whereas TBPTT's memory cost scales linearly with the number of truncation steps. We examine all RBP variants, along with BPTT and TBPTT, in three application domains: associative memory with continuous Hopfield networks, document classification in citation networks using graph neural networks, and hyperparameter optimization for fully connected networks. All experiments demonstrate that the RBP variants, especially Neumann-RBP, are efficient and effective for optimizing convergent recurrent neural networks.
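The Neumann-series idea the abstract describes can be illustrated with a toy NumPy sketch. For a convergent recurrent update h = F(h, x) with fixed point h*, implicit differentiation requires applying (I - J)^-1 (J being the Jacobian of F at h*), which the Neumann series approximates as a truncated sum of powers of J using only vector-Jacobian products. All names below are illustrative, not from the paper's code, and the "RNN" is a one-layer tanh map chosen so the series converges:

```python
import numpy as np

def fixed_point(W, x, iters=200):
    # Iterate h <- tanh(W h + x) to (approximate) convergence.
    h = np.zeros_like(x)
    for _ in range(iters):
        h = np.tanh(W @ h + x)
    return h

def neumann_rbp_vector(W, h_star, g, K=100):
    # Jacobian of F(h) = tanh(W h + x) at the fixed point:
    # J = diag(1 - h*^2) @ W, since tanh'(pre) = 1 - tanh(pre)^2 = 1 - h*^2.
    J = (1.0 - h_star**2)[:, None] * W
    v = g.copy()    # current Neumann term (J^T)^k g
    acc = g.copy()  # running sum; only two vectors are kept (constant memory)
    for _ in range(K):
        v = J.T @ v
        acc += v
    return acc      # approximates (I - J)^{-T} g

rng = np.random.default_rng(0)
n = 5
W = 0.2 * rng.standard_normal((n, n))  # small spectral radius -> convergent
x = rng.standard_normal(n)
h_star = fixed_point(W, x)
g = rng.standard_normal(n)             # stand-in for dL/dh*

approx = neumann_rbp_vector(W, h_star, g)
J = (1.0 - h_star**2)[:, None] * W
exact = np.linalg.solve((np.eye(n) - J).T, g)  # exact implicit-gradient solve
print(np.allclose(approx, exact, atol=1e-6))
```

The constant-memory property the abstract mentions shows up in the loop: each Neumann term is obtained from the previous one by a single vector-Jacobian product, so the cost per term matches one TBPTT step without storing the unrolled trajectory.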
Data efficiency, i.e., learning from small data sets, is critical in many practical applications where data collection is time consuming or expensive, e.g., robotics, animal experiments or drug design. Meta learning is one way to increase the data efficiency of learning algorithms by generalizing learned concepts from a set of training tasks to unseen, but related, tasks. Often, this relationship between tasks is hard coded or relies in some other way on human expertise. In this paper, we propose to automatically learn the relationship between tasks using a latent variable model. Our approach finds a variational posterior over tasks and averages over all plausible (according to this posterior) tasks when making predictions. We apply this framework within a model-based reinforcement learning setting to learn dynamics models and controllers for many related tasks. We show that our model effectively generalizes to novel tasks, and that it reduces the average interaction time needed to solve tasks by up to 60% compared to strong baselines.
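The "average over all plausible tasks" step can be sketched with a minimal Monte Carlo example, assuming a Gaussian variational posterior over a one-dimensional latent task variable and a hypothetical task-conditioned model f(x, z); both are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def posterior_predict(x, mu, sigma, f, n_samples=2000, seed=0):
    # Monte Carlo estimate of E_q[f(x, z)] under the variational
    # posterior q(z) = N(mu, sigma^2) over the latent task variable z.
    rng = np.random.default_rng(seed)
    zs = mu + sigma * rng.standard_normal(n_samples)  # z ~ q(z)
    return np.mean([f(x, z) for z in zs])

# Toy task family: each task z scales the input linearly.
f = lambda x, z: z * x
y = posterior_predict(2.0, mu=1.0, sigma=0.1, f=f)  # close to 2.0
```

Because the posterior concentrates around the plausible tasks, the averaged prediction hedges across task uncertainty instead of committing to a single point estimate of the task.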
In 1976, philosopher Julian Jaynes advanced the provocative theory that our ancient ancestors lacked self-awareness. Instead, they mistook their inner voices for outside sources: the voice of God, say, or the ghosts of their ancestors. Jaynes called his theory "bicameralism" (Westworld fans will recall an episode from the last season called "The Bicameral Mind") and, in his telling, it persisted in early humans until about 3,000 years ago.
Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people manage and analyze vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans for humans.
Storage is an important component underpinning artificial intelligence (AI) and other emerging technologies with similar infrastructure demands, according to Robert Lee, VP and chief architect at Pure Storage, and therefore needs to be included in discussions about such technologies. Lee told ZDNet that significant advancements in technology -- particularly around parallelisation, compute, and networking -- enable new algorithms to apply more compute power against data. "Historically, the limit to how much data has been able to be processed, the limit to how much insight we've been able to garner from data has been bottlenecked by storage's ability to keep the compute fed," said Lee, who previously worked at Oracle before joining Pure Storage in 2013. "Somewhere around the early 2000s, the hardware part of compute, CPUs started getting more parallel. It started doing multi-socket architectures, hyper-threading, multi-core."
We are in a similar pre-conscious state now, but the voice we hear is not the other side of our brains. It's our digital self, a version of us that is quickly becoming inseparable from our physical self. I call this commingled digital and analog self our "Meta Me." The more the Meta Me uses digital tools, the more conscious it will become, a development that will have tremendous social, ethical, and legal implications. Some are already coming to light.
Originally posted on The Horizons Tracker. Automation is reaching into a vast range of professions, and my own is certainly no different. Through this blog and various other means, I try to locate interesting research and practices from around the world, and bring them together into some kind of narrative. With so much going on around the world, it stands to reason that a computer could be trained to do a similar task, and that's certainly the aim of Semantic Scholar, a new tool launched by The Allen Institute for Artificial Intelligence. The tool offers users a means of hunting for papers in specific fields and then filtering their search by date, publication, and so on.
Pure Storage (NYSE: PSTG), the market's leading independent all-flash data platform vendor for the cloud era, today announced Pure1 META, its Artificial Intelligence (AI) platform for delivering on the vision of self-driving storage. Pure1 META delivers global predictive intelligence by collecting and analyzing over 1 trillion array telemetry data points per day, and enables effortless management, analytics and support. Pure1 META represents a major breakthrough in enterprise artificial intelligence and machine learning. Through the new Workload DNA generated by Pure1 META, customers will be able, for the first time in the industry, to predict both capacity and performance and get intelligent advice on workload deployment, interaction and optimization. With Pure1 META, Pure Storage continues to advance its vision of delivering self-driving storage.
Pure Storage announced a 75-blade all-flash system that operates as one unit as well as an artificial intelligence engine called Meta that aims to make storage arrays more autonomous. The common thread: Storage is increasingly all about the software. At Pure Storage's Accelerate customer powwow, the company outlined a bevy of software updates and features for big data, analytics and artificial intelligence workloads as well as multi-cloud management tools. While Pure's approach to integrated hardware and software has resulted in strong demand and growth, the upshot is that software is what's driving all-flash arrays. Pure outlined a series of updates to its FlashBlade product line including a 17TB system to go with its 8TB and 52TB configurations.