Neuromorphic Co-Design as a Game

Vineyard, Craig M., Severa, William M., Aimone, James B.

arXiv.org Artificial Intelligence

Co-design is currently a prominent topic in computing, speaking to the mutual benefit of coordinating design choices across several layers of the technology stack. For example, this may mean designing algorithms which can most efficiently take advantage of the acceleration properties of a given architecture, while simultaneously designing the hardware to support the structural needs of a class of computation. The implications of these design decisions are influential enough to be deemed a lottery, enabling an idea to win out over others irrespective of its individual merits. Coordination is a well-studied topic in the mathematics of game theory, where in many cases the outcome is sub-optimal without a coordination mechanism. Here we consider what insights game-theoretic analysis can offer for computer architecture co-design. In particular, we consider the interplay between algorithm and architecture advances in the field of neuromorphic computing. Analyzing developments of spiking neural network algorithms and neuromorphic hardware as a co-design game, we use the Stag Hunt model to illustrate the challenges spiking algorithms or architectures face in advancing the field independently, and advocate for a strategic pursuit to advance neuromorphic computing.
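The Stag Hunt dynamic the abstract invokes is easy to make concrete. The sketch below is illustrative Python, with payoff values chosen here to satisfy the standard Stag Hunt ordering (they are not taken from the paper); it enumerates the pure-strategy Nash equilibria of the two-player game:

```python
# Illustrative Stag Hunt payoffs (the specific numbers are assumptions):
# coordinating on the stag beats hunting hare, but hunting stag is risky
# if the other player defects to the safe hare option.
PAYOFFS = {  # (row action, col action) -> (row payoff, col payoff)
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
ACTIONS = ["stag", "hare"]

def pure_nash_equilibria(payoffs):
    """Enumerate action profiles where neither player gains by deviating."""
    eqs = []
    for r in ACTIONS:
        for c in ACTIONS:
            u_r, u_c = payoffs[(r, c)]
            row_stable = all(payoffs[(r2, c)][0] <= u_r for r2 in ACTIONS)
            col_stable = all(payoffs[(r, c2)][1] <= u_c for c2 in ACTIONS)
            if row_stable and col_stable:
                eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(PAYOFFS))  # [('stag', 'stag'), ('hare', 'hare')]
```

The game has two pure equilibria: the payoff-dominant (stag, stag) and the risk-dominant (hare, hare). Without a coordination mechanism, both players can settle into the safer but lower-payoff (hare, hare), which mirrors the paper's point about algorithms and architectures each advancing conservatively on their own.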


Machine learning: What are membership inference attacks?

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. One of the wonders of machine learning is that it turns any kind of data into mathematical equations. Once you train a machine learning model on training examples--whether it's on images, audio, raw text, or tabular data--what you get is a set of numerical parameters. In most cases, the model no longer needs the training dataset and uses the tuned parameters to map new and unseen examples to categories or value predictions. You can then discard the training data and publish the model on GitHub or run it on your own servers without worrying about storing or distributing sensitive information contained in the training dataset.
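The point that a trained model reduces to a set of numerical parameters can be shown with a toy example. The following is a minimal pure-Python sketch (ordinary least squares on made-up numbers, not any particular library's API): after fitting, only the parameters are kept and the training data can be discarded.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return w, b  # the "model" is just these two numbers

train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [2.0, 4.0, 6.0, 8.0]
w, b = fit_line(train_x, train_y)
del train_x, train_y  # the training set is no longer needed

def predict(x):
    # prediction uses only the retained parameters
    return w * x + b

print(predict(5.0))  # 10.0
```

Membership inference attacks exploit the fact that, despite this reduction, the retained parameters can still leak information about which examples were in the training set.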


50 million artificial neurons to facilitate machine-learning research

#artificialintelligence

Fifty million artificial neurons--a number roughly equivalent to the brain of a small mammal--were delivered from Portland, Oregon-based Intel Corp. to Sandia National Laboratories last month, said Sandia project leader Craig Vineyard. The neurons will be assembled to advance a relatively new kind of computing, called neuromorphic, based on the principles of the human brain. Its artificial components pass information in a manner similar to the action of living neurons, electrically pulsing only when a synapse in a complex circuit has absorbed enough charge to produce an electrical spike. "With a neuromorphic computer of this scale," Vineyard said, "we have a new tool to understand how brain-based computers are able to do impressive feats that we cannot currently do with ordinary computers." Improved algorithms and computer circuitry can create wider applications for neuromorphic computers, said Vineyard. Sandia manager of cognitive and emerging computing John Wagner said, "This very large neural computer will let us test how brain-inspired processors use information at increasingly realistic scales as they come to actually approximate the processing power of brains."


Noah Schwartz, Co-Founder & CEO of Quorum – Interview Series

#artificialintelligence

Noah is an AI systems architect. Prior to founding Quorum, Noah spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. His work focused on information processing in the brain, and he has translated his research into products in augmented reality, brain-computer interfaces, computer vision, and embedded robotics control systems. Your interest in AI and robotics started when you were a little boy. How were you first introduced to these technologies?


Relaxed Scheduling for Scalable Belief Propagation

Aksenov, Vitaly, Alistarh, Dan, Korhonen, Janne H.

arXiv.org Artificial Intelligence

The ability to leverage large-scale hardware parallelism has been one of the key enablers of the accelerated recent progress in machine learning. Consequently, there has been considerable effort invested into developing efficient parallel variants of classic machine learning algorithms. However, despite the wealth of knowledge on parallelization, some classic machine learning algorithms often prove hard to parallelize efficiently while maintaining convergence. In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers in this context. We present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in terms of scalability and in terms of wall-clock convergence time, on a range of practical applications.
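The abstract's key ingredient, a relaxed scheduler, can be sketched compactly. Below is a toy "MultiQueue"-style relaxed priority queue in Python (an illustrative construction for this listing, not the authors' implementation): delete-max inspects two randomly chosen internal heaps and pops from the one with the larger top, so tasks such as belief-propagation message updates are processed in approximately, rather than exactly, highest-residual-first order.

```python
import heapq
import random

class RelaxedMultiQueue:
    """Toy MultiQueue-style relaxed scheduler: k internal heaps.
    Pushes go to a random heap; delete-max samples two heaps and pops
    from the one with the larger top, so the returned task has
    approximately (not exactly) the highest priority."""

    def __init__(self, k=4, seed=None):
        assert k >= 2  # sampling two heaps requires at least two
        self.heaps = [[] for _ in range(k)]
        self.rng = random.Random(seed)

    def push(self, priority, task):
        h = self.rng.choice(self.heaps)
        heapq.heappush(h, (-priority, task))  # max-heap via negation

    def pop(self):
        a, b = self.rng.sample(range(len(self.heaps)), 2)
        candidates = [h for h in (self.heaps[a], self.heaps[b]) if h]
        if not candidates:  # both sampled heaps empty: fall back to any
            candidates = [h for h in self.heaps if h]
        h = max(candidates, key=lambda heap: -heap[0][0])
        neg_priority, task = heapq.heappop(h)
        return -neg_priority, task

# Hypothetical usage: schedule message updates by residual magnitude.
q = RelaxedMultiQueue(k=4, seed=0)
for edge, residual in [("m01", 0.9), ("m12", 0.1), ("m21", 0.5)]:
    q.push(residual, edge)
priority, edge = q.pop()  # approximately the highest-residual update
```

In residual belief propagation, the priority of a message update is its residual, i.e. how much the message would change if recomputed; a relaxed queue trades exact priority order for much lower contention when many threads push and pop concurrently, which is the scalability lever the paper studies.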


ML (Machine Learning) at Georgia Tech

#artificialintelligence

The United States Department of Energy (DOE) has given three institutions a $5.5 million grant to collectively find solutions to some of the most challenging problems in artificial intelligence (AI) today. Scientists from Georgia Tech, Pacific Northwest National Laboratory, and Sandia National Laboratories will collaborate to develop technologies that are core to the DOE's priorities, including cybersecurity, graph analytics, and electric grid resilience. Tushar Krishna, an assistant professor in Georgia Tech's School of Electrical and Computer Engineering and Machine Learning Center (ML@GT), will serve as a deputy director of the newly established Center for Artificial Intelligence-focused Architectures and Algorithms (ARIAA). Georgia Tech is contributing expertise in modeling and developing custom accelerators for machine learning and sparse linear algebra. The institute will also provide access to its advanced computing resources.


Algorithms and architecture for job recommendations

#artificialintelligence

In this article, we'll describe the evolution of our recommendation engine, from the initial minimum viable product (MVP) built with Apache Mahout to a hybrid offline/online pipeline. We'll explore the impact these changes have had on product metrics and how we've addressed challenges through incremental modifications to algorithms, system architecture, and model format. To close, we'll review some related lessons in system design that apply to any high-traffic machine learning application. Indeed's production applications run in many data centers around the world. Clickstream data and other application events from every data center are replicated into a central HDFS repository based in our Austin data center.