

Multiple Threshold Neural Logic

Neural Information Processing Systems

We introduce a new Boolean computing element related to the Linear Threshold element, which is the Boolean version of the neuron. Instead of the sign function, it computes an arbitrary (with polynomially many transitions) Boolean function of the weighted sum of its inputs. We call the new computing element an LTM element, which stands for Linear Threshold with Multiple transitions. The paper consists of the following main contributions related to our study of LTM circuits: (i) the creation of efficient designs of LTM circuits for the addition of multiple integers and the product of two integers. In particular, we show how to compute the addition of m integers with a single layer of LTM elements.
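The idea behind an LTM element can be sketched in a few lines. The following is a minimal illustration, not the paper's construction: it models the multiple transitions as a sorted list of threshold points and takes the parity of how many the weighted sum has crossed, which yields a Boolean function of the sum with exactly those transition points (function and variable names are illustrative).

```python
def linear_threshold(weights, inputs):
    """Classic LT element: the sign of the weighted sum, as a 0/1 output."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

def ltm_element(weights, thresholds, inputs):
    """Illustrative LTM element: the output flips at every threshold point
    crossed by the weighted sum, giving a Boolean function of the sum with
    multiple transitions instead of the single sign transition above."""
    s = sum(w * x for w, x in zip(weights, inputs))
    # Count how many transition points the sum has reached; parity is the output.
    crossings = sum(1 for t in sorted(thresholds) if s >= t)
    return crossings % 2

# Example: weights (1, 1, 1) with transition points {1, 3} outputs 1 exactly
# when one or two of the three inputs are on -- a symmetric function that no
# single classic LT element can compute.
```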


Research Paves the Way for Honey-Based Neuromorphic Computing

#artificialintelligence

Researchers at Washington State University have built a proof-of-concept device that includes one of the crucial circuits for neuromorphic computing - the memristor - built out of an unlikely medium: honey. The researchers hope their work paves the way for biodegradable, sustainable, organic-based computing systems that are orders of magnitude more efficient than conventional computing architectures. To build the device, the researchers processed true, bee-sourced honey into a solid form held between two metal electrodes, much like how the brain's synapses lie between pairs of neurons. The device was then tested for its ability to quickly switch on and off at speeds between 100 and 500 nanoseconds, comparable to its biological counterparts - and it succeeded. "This is a very small device with a simple structure, but it has very similar functionalities to a human neuron," said Feng Zhao, associate professor at WSU's School of Engineering and Computer Science, in the announcement.


Now that computers connect us all, for better and worse, what's next?

#artificialintelligence

This article was written, edited and designed on laptop computers. Such foldable, transportable devices would have astounded computer scientists just a few decades ago, and seemed like sheer magic before that. The machines contain billions of tiny computing elements, running millions of lines of software instructions, collectively written by countless people across the globe. You click or tap or type or speak, and the result seamlessly appears on the screen. Computers were once so large they filled rooms. Now they're everywhere and invisible, embedded in watches, car engines, cameras, televisions and toys. They manage electrical grids, analyze scientific data and predict the weather. The modern world would be impossible without them. Scientists aim to make computers faster and programs more intelligent, while deploying technology in an ethical manner. Their efforts build on more than a century of innovation. In 1833, English mathematician Charles Babbage conceived a programmable machine that presaged today's computing architecture, featuring a "store" for holding numbers, a "mill" for operating on them, an instruction reader and a printer. This Analytical Engine also had logical functions like branching (if X, then Y).


Hybrid computer approach to train a machine learning system

Holzer, Mirko, Ulmann, Bernd

arXiv.org Artificial Intelligence

This book chapter describes a novel approach to training machine learning systems by means of a hybrid computer setup, i.e., a digital computer tightly coupled with an analog computer. As an example, a reinforcement learning system is trained to balance an inverted pendulum that is simulated on an analog computer, thus demonstrating a solution to the major challenge of adequately simulating the environment for reinforcement learning.
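To see why the environment simulation is the hard part, consider a purely digital stand-in for the analog pendulum: a crude Euler integration of a frictionless pole balancing around its upright position (constants, step size, and the control input's units here are illustrative, not taken from the chapter).

```python
import math

def pendulum_step(theta, omega, control, dt=0.02, g=9.81, length=1.0):
    """One Euler step of an inverted pendulum near the upright position.

    theta: angle from vertical (rad), omega: angular velocity (rad/s),
    control: torque applied by the learning agent (illustrative units).
    """
    # Gravity accelerates the pole away from upright; the agent's torque
    # is what must counteract it.
    alpha = (g / length) * math.sin(theta) + control
    omega_new = omega + alpha * dt
    theta_new = theta + omega_new * dt
    return theta_new, omega_new

# With zero control the upright equilibrium is unstable: a small tilt grows,
# which is exactly what the reinforcement learner must prevent.
theta, omega = 0.01, 0.0
for _ in range(50):
    theta, omega = pendulum_step(theta, omega, control=0.0)
```

An analog computer integrates these same differential equations continuously in hardware, which is why it can serve as a fast, low-latency environment for the digitally hosted learner.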


Building the Best Autonomous Brain

#artificialintelligence

When I'm bumper-to-bumper in a sea of exhaust fumes and distracted drivers, it seems like autonomous driving can't get here fast enough. Nor can the potential rewards that come along with fully autonomous vehicles, like far fewer accidents and mobility for people who struggle to get around on their own. To do my part, I'm focusing on how building the best autonomous brain for a car will get us there faster. Every day, we're getting closer to the technology needed to power self-driving cars. But in-vehicle compute needs are complex, and autonomous driving algorithms are changing rapidly.


Chip Magic

#artificialintelligence

Thanks both to a range of demanding new applications, such as Artificial Intelligence (AI) and Natural Language Processing (NLP), and to a perceived threat to Moore's Law (which has "guided" the semiconductor industry for over 50 years to a state of staggering capability and complexity), we're starting to see an impressive range of new output from today's silicon designers. Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel (NASDAQ:INTC), AMD (NASDAQ:AMD), Nvidia (NASDAQ:NVDA), Qualcomm (NASDAQ:QCOM), Micron (NASDAQ:MU) and ARM, as well as internal efforts from companies like Apple (NASDAQ:AAPL), Samsung (OTC:SSNLF), Huawei, Google (NASDAQ:GOOG) (NASDAQ:GOOGL) and Microsoft (NASDAQ:MSFT). It's a digital revival that many thought would never come. In fact, just a few years ago, there were many who were predicting the death, or at least serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.


Multiple Threshold Neural Logic

Bohossian, Vasken, Bruck, Jehoshua

Neural Information Processing Systems

This observation has boosted interest in the field of artificial neural networks [Hopfield 82], [Rumelhart 82]. The latter are built by interconnecting artificial neurons whose behavior is inspired by that of biological neurons.

