For the four-hundredth anniversary of Shakespeare's death, Gregory Doran, the artistic director of the Royal Shakespeare Company, wanted to dazzle. He turned to "The Tempest," the late romance that includes flying spirits, a shipwreck, a vanishing banquet, and a masque-like pageant that the magician Prospero stages to celebrate his daughter's marriage. "The Tempest" was performed at the court of King James I, and it may have been intended in part to showcase the multimedia marvels of Jacobean court masques. "Shakespeare was touching on that new form of theatre," Doran told me recently, over the phone. "So we wanted to think about what the cutting-edge technology is today that Shakespeare, if he were alive now, would be saying, 'Let's use some of that.' " The politics behind Shakespeare and stage illusion are more fraught than usual these days.
Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we designed and built a composable, reconfigurable hardware fabric based on field programmable gate arrays (FPGAs). Each server in the fabric contains one FPGA, and all FPGAs within a 48-server rack are interconnected over a low-latency, high-bandwidth network. We describe a medium-scale deployment of this fabric on a bed of 1,632 servers, and measure its effectiveness in accelerating the ranking component of the Bing web search engine.
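To make the composable-fabric idea concrete, the following is a toy Python sketch, not the actual Catapult implementation: it assumes only what the abstract states (one FPGA per server, 48 servers per rack, FPGAs linked by a rack-level network), and the three-stage ranking pipeline at the end is purely hypothetical.

    # Toy model of the reconfigurable FPGA fabric described above (illustrative only).
    RACK_SIZE = 48  # one FPGA per server, 48 servers per rack

    class Fpga:
        def __init__(self, server_id):
            self.server_id = server_id
            self.stage = None            # the accelerator this FPGA is currently configured as

        def configure(self, stage_fn):
            self.stage = stage_fn        # "reconfigure" the FPGA for a new pipeline stage

        def process(self, request):
            return self.stage(request)   # run the configured stage on a request

    def run_pipeline(rack, stages, request):
        # Compose a multi-FPGA pipeline: each stage runs on a different FPGA,
        # and intermediate results hop between FPGAs over the rack-level network.
        for fpga, stage in zip(rack, stages):
            fpga.configure(stage)
            request = fpga.process(request)
        return request

    rack = [Fpga(i) for i in range(RACK_SIZE)]
    stages = [                                             # hypothetical ranking stages
        lambda q: {"query": q, "features": len(q)},        # feature extraction
        lambda r: {**r, "score": r["features"] * 0.5},     # scoring model
        lambda r: (r["query"], r["score"]),                # result formatting
    ]
    print(run_pipeline(rack, stages, "example search query"))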
[Figure: Analyses performed using Spark of brain activity in a larval zebrafish, embedding dynamics of whole-brain activity into lower-dimensional trajectories.]
The growth of data volumes in industry and research poses tremendous opportunities, as well as tremendous computational challenges. As data sizes have outpaced the capabilities of single machines, users have needed new systems to scale out computations to multiple nodes. As a result, there has been an explosion of new cluster programming models targeting diverse computing workloads [1, 4, 7, 10]. At first, these models were relatively specialized, with new models developed for new workloads; for example, MapReduce [4] supported batch processing, but Google also developed Dremel [13] for interactive SQL queries and Pregel [11] for iterative graph algorithms.
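As a concrete example of the kind of general-purpose cluster programming model the article discusses, here is a minimal word-count job written against Spark's Python API (PySpark); the HDFS paths are placeholders, and the cluster configuration is left to defaults.

    # Minimal PySpark word count: the data is partitioned across the cluster, and
    # the flatMap/map/reduceByKey stages run in parallel on the workers.
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("word-count")
    sc = SparkContext(conf=conf)

    lines = sc.textFile("hdfs:///data/corpus.txt")        # distributed dataset of text lines
    counts = (lines.flatMap(lambda line: line.split())    # split each line into words
                   .map(lambda word: (word, 1))           # pair each word with a count of 1
                   .reduceByKey(lambda a, b: a + b))      # sum the counts per word
    counts.saveAsTextFile("hdfs:///data/word-counts")

    sc.stop()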
We describe the timely dataflow model for distributed computation and its implementation in the Naiad system. The model supports stateful iterative and incremental computations. It enables both low-latency stream processing and high-throughput batch processing, using a new approach to coordination that combines asynchronous and fine-grained synchronous execution. We describe two of the programming frameworks built on Naiad: GraphLINQ for parallel graph processing, and differential dataflow for nested iterative and incremental computations. We show that a general-purpose system can achieve performance that matches, and sometimes exceeds, that of specialized systems.
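The abstract does not spell out the coordination mechanism, but the flavor of timely dataflow's combination of asynchronous and fine-grained synchronous execution can be sketched in plain Python. This is only an illustration of the epoch-and-notification idea, not Naiad's actual API or a timely dataflow implementation.

    # Records carry a logical timestamp (epoch). An operator handles records
    # asynchronously as they arrive, while a per-epoch notification (the
    # synchronous step) fires only once the runtime knows that no further
    # records with that timestamp can appear.
    from collections import defaultdict

    class CountingOperator:
        def __init__(self):
            self.pending = defaultdict(list)     # epoch -> buffered records

        def on_receive(self, epoch, record):
            # Asynchronous path: update per-epoch state as data streams in.
            self.pending[epoch].append(record)

        def on_notify(self, epoch):
            # Synchronous path: invoked once the frontier has passed `epoch`.
            records = self.pending.pop(epoch, [])
            print(f"epoch {epoch}: {len(records)} records -> {records}")

    op = CountingOperator()
    for epoch, rec in [(0, "a"), (0, "b"), (1, "c"), (0, "d"), (1, "e")]:
        op.on_receive(epoch, rec)
    for completed_epoch in (0, 1):               # the runtime advances the frontier in order
        op.on_notify(completed_epoch)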
[Figure: Nvidia's Titan X graphics card, featuring the company's Pascal-powered graphics processing unit driven by 3,584 CUDA cores running at 1.5 GHz.]
As researchers continue to push the boundaries of neural networks and deep learning--particularly in speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas--they are constantly on the lookout for new and better ways to extend and expand computing capabilities. For decades, the gold standard has been high-performance computing (HPC) clusters, which toss huge amounts of processing power at problems--albeit at a prohibitively high cost. This approach has helped fuel advances across a wide swath of fields, including weather forecasting, financial services, and energy exploration. However, in 2012, a new method emerged.
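As a small, hedged illustration of the kind of workload that motivates the shift to GPUs, the snippet below times the same dense matrix multiplication on the CPU and, if one is available, on a CUDA-capable GPU. PyTorch is used purely as a convenient interface; the article itself does not prescribe any particular framework or hardware.

    import time
    import torch

    def time_matmul(device, size=4096):
        # Build two random matrices on the target device and time their product.
        a = torch.randn(size, size, device=device)
        b = torch.randn(size, size, device=device)
        if device == "cuda":
            torch.cuda.synchronize()          # ensure timing covers the GPU work itself
        start = time.time()
        _ = a @ b
        if device == "cuda":
            torch.cuda.synchronize()
        return time.time() - start

    print("CPU :", time_matmul("cpu"))
    if torch.cuda.is_available():
        print("CUDA:", time_matmul("cuda"))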
Though you couldn't tell from the picture, these particular headphones incorporated a miniature fakir's bed of soft plastic spikes above each ear, pressing gently into the skull and delivering pulses of electric current to the brain. Made by a Silicon Valley startup called Halo Neuroscience, the headphones promise to "accelerate gains in strength, explosiveness, and dexterity" through a proprietary technique called neuropriming. "Thanks to @HaloNeuro for letting me and my teammates try these out!" McAdoo tweeted. On Thursday night, McAdoo and his teammates will seek the eighty-ninth and final win of their record-breaking season, as they defend their National Basketball Association title in Game 6 of the final series against LeBron James's Cleveland Cavaliers. The headphones' apparent results, in other words, have been impressive.
Congratulations are in order for the folks at Google DeepMind (https://deepmind.com) who have mastered Go (https://deepmind.com/alpha-go.html). However, some of the discussion around this seems like giddy overstatement. Wired says, "machines have conquered the last games" (http://bit.ly/200O5zG). The truth is nowhere close. For Go itself, it has been well known for a decade that Monte Carlo tree search (MCTS, http://bit.ly/1YbLm4M; that is, valuation by assuming randomized playout) is unusually effective in Go.
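The parenthetical gloss of MCTS ("valuation by assuming randomized playout") is easy to make concrete. The sketch below estimates how good a position is by playing many uniformly random games to completion and averaging the outcomes; full MCTS adds a search tree and a selection rule (such as UCT) on top of this, and the `game` interface used here is hypothetical.

    import random

    def random_playout(game, state, player):
        # Play uniformly random moves until the game ends; 1.0 if `player` won.
        while not game.is_terminal(state):
            move = random.choice(game.legal_moves(state))
            state = game.apply(state, move)
        return 1.0 if game.winner(state) == player else 0.0

    def monte_carlo_value(game, state, player, n_playouts=1000):
        # Estimate the value of `state` for `player` as the average playout result.
        wins = sum(random_playout(game, state, player) for _ in range(n_playouts))
        return wins / n_playouts

    def best_move(game, state, player, n_playouts=200):
        # Pick the move leading to the position with the highest estimated value.
        return max(game.legal_moves(state),
                   key=lambda m: monte_carlo_value(game, game.apply(state, m),
                                                   player, n_playouts))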
The term 'robotic musicianship' may seem like an oxymoron. The first word often carries negative connotations in terms of artistic performance and can be used to describe a lack of expressivity and artistic sensitivity. The second word is used to describe varying levels of an individual's ability to apply musical concepts in order to convey artistry and sensitivity beyond the facets of merely reading notes from a score. To understand the meaning of robotic musicianship, it is important to detail the two primary research areas that it comprises: musical mechatronics, which is the study and construction of physical systems that generate sound through mechanical means [15]; and machine musicianship, which focuses on developing algorithms and cognitive models representative of various aspects of music perception, composition, performance, and theory [31]. Robotic musicianship refers to the intersection of these areas.
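To give a flavor of the machine-musicianship side, here is a toy sketch of an algorithmic composition model: a first-order Markov chain over MIDI pitches, trained on a short example melody and then sampled to generate a new phrase. It illustrates only the general kind of algorithm the article refers to, not any specific system it describes; the example melody and parameters are made up.

    import random
    from collections import defaultdict

    def train_markov(melody):
        # Learn which pitch tends to follow which (first-order transitions).
        transitions = defaultdict(list)
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
        return transitions

    def generate(transitions, start, length=16):
        # Sample a new phrase by walking the learned transitions.
        note, phrase = start, [start]
        for _ in range(length - 1):
            choices = transitions.get(note)
            if not choices:
                break
            note = random.choice(choices)
            phrase.append(note)
        return phrase

    melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]   # MIDI note numbers
    print(generate(train_markov(melody), start=60))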
In just our fourth session together, Steve was already beginning to sound discouraged. It was Thursday of the first week of an experiment that I had expected to last for two or three months, but from what Steve was telling me, it might not make much sense to go on. "There appears to be a limit for me somewhere around eight or nine digits," he told me, his words captured by the tape recorder that ran throughout each of our sessions. "With nine digits especially, it's very difficult to get regardless of what pattern I use--you know, my own kind of strategies. It really doesn't matter what I use--it seems very difficult to get." Steve, an undergraduate at Carnegie Mellon University, where I was teaching at the time, had been hired to come in several times a week and work on a simple task: memorizing strings of numbers.
Given the well-known limitations of the Turing Test, there is a need for objective tests to both focus attention on, and measure progress towards, the goals of AI. In this paper we argue that machine performance on standardized tests should be a key component of any new measure of AI, because attaining a high level of performance requires solving significant AI problems involving language understanding and world modeling - critical skills for any machine that lays claim to intelligence. In addition, standardized tests have all the basic requirements of a practical test: they are accessible, easily comprehensible, clearly measurable, and offer a graduated progression from simple tasks to those requiring deep understanding of the world.