Results


[Report] Causal neural network of metamemory for retrospection in primates

Science

We know how confidently we know: Metacognitive self-monitoring of memory states, so-called "metamemory," enables strategic and efficient information collection based on past experiences. However, it is unknown how metamemory is implemented in the brain. By whole-brain searches via functional magnetic resonance imaging, we discovered a neural correlate of metamemory for temporally remote events in prefrontal area 9 (or 9/46d), along with that for recent events within area 6. Reversible inactivation of each of these identified loci induced doubly dissociated selective impairments in metacognitive judgment performance on remote or recent memory, without impairing recognition performance itself. The findings reveal that parallel metamemory streams supervise recognition networks for remote and recent memory, without contributing to recognition itself.


HACC

Communications of the ACM

The Hardware/Hybrid Accelerated Cosmology Code (HACC) framework exploits the diverse landscape of modern supercomputer architectures at the largest scales of problem size, obtaining high scalability and sustained performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. The rich structure of the current Universe--planets, stars, solar systems, galaxies, and yet larger collections of galaxies (clusters and filaments)--all resulted from the growth of very small primordial fluctuations. Time-stepping criteria follow from a joint consideration of the force and mass resolution [20]. Finally, stringent requirements on accuracy are imposed by the very small statistical errors in the observations--some observables must be computed at accuracies of a fraction of a percent.


Two Ways to Bring Shakespeare Into the Twenty-First Century

The New Yorker

For the four-hundredth anniversary of Shakespeare's death, Gregory Doran, the artistic director of the Royal Shakespeare Company, wanted to dazzle. He turned to "The Tempest," the late romance that includes flying spirits, a shipwreck, a vanishing banquet, and a masque-like pageant that the magician Prospero stages to celebrate his daughter's marriage. "The Tempest" was performed at the court of King James I, and it may have been intended in part to showcase the multimedia marvels of Jacobean court masques. "Shakespeare was touching on that new form of theatre," Doran told me recently, over the phone. "So we wanted to think about what the cutting-edge technology is today that Shakespeare, if he were alive now, would be saying, 'Let's use some of that.' " The politics behind Shakespeare and stage illusion are more fraught than usual these days.


Learning Securely

Communications of the ACM

A paper posted online in 2013 launched the modern wave of adversarial machine learning research by showing, for three different image-processing neural networks, how to create "adversarial examples"--images that, after tiny modifications to some of the pixels, fool the neural network into classifying them differently from the way humans see them. Last year, for instance, three researchers at the University of California, Berkeley--Alex Kantchelian, Doug Tygar, and Anthony Joseph--showed that a highly nonlinear machine learning model called "boosted trees" is also highly susceptible to adversarial examples. Yet even with those first examples, researchers started noticing something strange: examples designed to fool one machine learning algorithm often fooled other machine learning algorithms, too. Some researchers are making machine learning algorithms more robust by essentially "vaccinating" them: adding adversarial examples, correctly labeled, to the training data.
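The tiny-perturbation attack described here can be illustrated with the fast gradient sign method, one simple way to construct such examples (the 2013 paper used a different optimization; the toy logistic "model," weights, and data below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """Nudge input x by epsilon in the sign of the loss gradient,
    increasing the logistic loss for the true label y."""
    p = sigmoid(x @ w + b)   # model's probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for logistic loss
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)                           # toy model weights
b = 0.0
x = rng.normal(size=8)                           # a "clean" input
y = 1.0 if sigmoid(x @ w + b) >= 0.5 else 0.0    # label = model's own prediction

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
print("clean prob of class 1:", sigmoid(x @ w + b))
print("adv prob of class 1:  ", sigmoid(x_adv @ w + b))
```

Each coordinate moves only epsilon, yet the logit shifts by epsilon times the sum of |w_i|, pushing the model's confidence away from the label; the "vaccination" defense retrains with such perturbed inputs restored to their correct labels.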


A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services

Communications of the ACM

Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we designed and built a composable, reconfigurable hardware fabric based on field programmable gate arrays (FPGA). Each server in the fabric contains one FPGA, and all FPGAs within a 48-server rack are interconnected over a low-latency, high-bandwidth network. We describe a medium-scale deployment of this fabric on a bed of 1632 servers, and measure its effectiveness in accelerating the ranking component of the Bing web search engine.


Apache Spark

Communications of the ACM

[Figure] Analyses performed using Spark of brain activity in a larval zebrafish: embedding dynamics of whole-brain activity into lower-dimensional trajectories.

The growth of data volumes in industry and research poses tremendous opportunities, as well as tremendous computational challenges. As data sizes have outpaced the capabilities of single machines, users have needed new systems to scale out computations to multiple nodes. As a result, there has been an explosion of new cluster programming models targeting diverse computing workloads [1, 4, 7, 10]. At first, these models were relatively specialized, with new models developed for new workloads; for example, MapReduce [4] supported batch processing, but Google also developed Dremel [13] for interactive SQL queries and Pregel [11] for iterative graph algorithms.
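The batch-processing model MapReduce pioneered can be sketched in a single process (word count, the canonical example); a real framework distributes each of the three phases across a cluster, and this standalone toy is only an illustration of the model:

```python
from collections import defaultdict

def map_phase(docs):
    """Emit (word, 1) pairs, as a MapReduce mapper would."""
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group values by key, as the framework's shuffle step would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum each word's counts, as a MapReduce reducer would."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["spark makes big data simple", "big data big models"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)
# {'spark': 1, 'makes': 1, 'big': 3, 'data': 2, 'simple': 1, 'models': 1}
```

Spark generalizes this pattern: rather than one fixed map-shuffle-reduce pipeline, arbitrary chained transformations run over distributed, fault-tolerant collections.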


Incremental, Iterative Data Processing with Timely Dataflow

Communications of the ACM

We describe the timely dataflow model for distributed computation and its implementation in the Naiad system. The model supports stateful iterative and incremental computations. It enables both low-latency stream processing and high-throughput batch processing, using a new approach to coordination that combines asynchronous and fine-grained synchronous execution. We describe two of the programming frameworks built on Naiad: GraphLINQ for parallel graph processing, and differential dataflow for nested iterative and incremental computations. We show that a general-purpose system can achieve performance that matches, and sometimes exceeds, that of specialized systems.
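The incremental side of this model can be sketched with a toy: instead of recomputing an aggregate from scratch when inputs change, maintain it by applying batches of (record, +1/-1) changes, loosely in the spirit of differential dataflow's change collections (the data and names here are illustrative, not Naiad's API):

```python
from collections import Counter

def apply_delta(counts, delta):
    """Update counts in place from a batch of (word, +1/-1) changes,
    dropping words whose count returns to zero."""
    for word, diff in delta:
        counts[word] += diff
        if counts[word] == 0:
            del counts[word]
    return counts

counts = Counter()
apply_delta(counts, [("naiad", +1), ("dataflow", +1), ("naiad", +1)])
apply_delta(counts, [("naiad", -1), ("timely", +1)])  # retraction + insertion
print(dict(counts))  # {'naiad': 1, 'dataflow': 1, 'timely': 1}
```

Work done per batch is proportional to the size of the change, not the size of the data, which is what lets one system serve both low-latency streaming and high-throughput batch loads.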


GPUs Reshape Computing

Communications of the ACM

[Figure] NVidia's Titan X graphics card, featuring the company's Pascal-powered graphics processing unit driven by 3,584 CUDA cores running at 1.5GHz.

As researchers continue to push the boundaries of neural networks and deep learning--particularly in speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas--they are constantly on the lookout for new and better ways to extend and expand computing capabilities. For decades, the gold standard has been high-performance computing (HPC) clusters, which toss huge amounts of processing power at problems--albeit at a prohibitively high cost. This approach has helped fuel advances across a wide swath of fields, including weather forecasting, financial services, and energy exploration. However, in 2012, a new method emerged.


For the Golden State Warriors, Brain-Zapping Could Provide an Edge

The New Yorker

Though you couldn't tell from the picture, these particular headphones incorporated a miniature fakir's bed of soft plastic spikes above each ear, pressing gently into the skull and delivering pulses of electric current to the brain. Made by a Silicon Valley startup called Halo Neuroscience, the headphones promise to "accelerate gains in strength, explosiveness, and dexterity" through a proprietary technique called neuropriming. "Thanks to @HaloNeuro for letting me and my teammates try these out!" McAdoo tweeted. On Thursday night, McAdoo and his teammates will seek the eighty-ninth and final win of their record-breaking season, as they defend their National Basketball Association title in Game 6 of the final series against LeBron James's Cleveland Cavaliers. The headphones' apparent results, in other words, have been impressive.


The Solution to AI, What Real Researchers Do, and Expectations for CS Classrooms

Communications of the ACM

Congratulations are in order for the folks at Google DeepMind (https://deepmind.com), who have mastered Go (https://deepmind.com/alpha-go.html). However, some of the discussion around this seems like giddy overstatement. Wired says, "machines have conquered the last games" (http://bit.ly/200O5zG). The truth is nowhere close. For Go itself, it has been well known for a decade that Monte Carlo tree search (MCTS, http://bit.ly/1YbLm4M; that is, valuation by assuming randomized playout) is unusually effective in Go.
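The "valuation by assuming randomized playout" idea is easy to see on a toy game; here Nim (take 1-3 stones, last take wins) stands in for Go, and the sketch omits the tree-growing part of full MCTS:

```python
import random

def random_playout(stones, my_turn):
    """Finish the game with uniformly random moves;
    return 1 if I take the last stone, else 0."""
    while stones > 0:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0

def playout_value(stones, take, playouts=2000):
    """Estimated win rate after taking `take` stones from a pile of `stones`,
    averaged over many random playouts."""
    if stones == take:
        return 1.0  # taking the last stone wins outright
    wins = sum(random_playout(stones - take, my_turn=False)
               for _ in range(playouts))
    return wins / playouts

random.seed(0)
values = {take: playout_value(5, take) for take in (1, 2, 3)}
best = max(values, key=values.get)
print(values, "best move:", best)  # taking 1 leaves the opponent 4 stones
```

From a pile of 5, random playouts correctly favor taking 1 stone (leaving the opponent a losing position of 4) even though no game-specific knowledge was coded in; that is the property that made MCTS unusually effective in Go.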