Results


What If AI Succeeds?

AI Magazine

Within the time of a human generation, computer technology will be capable of producing computers with as many artificial neurons as there are neurons in the human brain. Within two human generations, intelligists (AI researchers) will have discovered how to use such massive computing capacity in brainlike ways. This situation raises the likelihood that twenty-first-century global politics will be dominated by the question: Who or what is to be the dominant species on this planet? This article discusses rival political and technological scenarios about the rise of the artilect (artificial intellect, ultraintelligent machine) and launches a plea that a world conference be held on the so-called "artilect debate." Many years ago, while reading my first book on molecular biology, I realized not only that living creatures, including human beings, are biochemical machines, but also that one day humanity would understand the principles of life well enough to reproduce life artificially (Langton 1989) and even create a creature more intelligent than we are.


Using Artificial Intelligence to Rapidly Identify Brain Tumors

#artificialintelligence

At the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2017), Professor Biros and collaborators presented the results of a new, automated method of characterizing gliomas. The system combines biophysical models of tumor growth with machine learning algorithms to analyze the magnetic resonance (MR) imaging data of glioma patients. The research team put their new system to the test at the Multimodal Brain Tumor Segmentation Challenge 2017 (BraTS'17), a yearly competition at which research groups present new approaches and results for computer-assisted identification and classification of brain tumors using data from pre-operative MR scans. Each stage in the analysis pipeline used a different TACC computing system; the nearest-neighbor machine learning classification component ran on 60 nodes at once (each consisting of 68 processors) on TACC's latest supercomputer, Stampede2.
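
The nearest-neighbor classification stage mentioned above is a standard machine learning building block. Purely as a hedged illustration of that one component (not the team's pipeline; the feature vectors, tissue labels, and labeling rule below are invented placeholders), a minimal sketch with scikit-learn might look like this:

    # Minimal, generic nearest-neighbor classification sketch. The feature
    # vectors and tissue labels are hypothetical, not the team's data or code.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training data: each row is a feature vector (e.g. intensities
    # across several MR modalities); labels are tissue classes
    # (0 = healthy, 1 = edema, 2 = tumor core), assigned here by a toy rule.
    X_train = rng.normal(size=(1000, 4))
    y_train = np.digitize(X_train[:, 0], bins=[-0.5, 0.5])

    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_train, y_train)

    # Classify unseen voxel/region features from a new scan.
    X_new = rng.normal(size=(5, 4))
    print(clf.predict(X_new))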


Brain-Inspired Computing Pushes the Boundaries of Technology

#artificialintelligence

First, the revolution in machine learning, particularly deep learning methods, has renewed excitement in neural network algorithms and other algorithms inspired by the brain. Low-power neural chips exist now, and computer scientists are forging ahead, developing algorithms without regard to who wins these debates. Interestingly, it is in the domain of machine learning that neural computing technologies face both their biggest competitor and their biggest role model: the graphics processing unit (GPU). At the recent Neuro-Inspired Computing Elements (NICE) Workshop, held at IBM Almaden, researchers not only described the use of neural computers for machine learning applications like deep learning, but also highlighted potential roles in computing tasks as conventional as solving optimization problems and performing matrix multiplication.


The Brain as Computer: Bad at Math, Good at Everything Else

#artificialintelligence

Computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). If an incoming spike meets certain criteria, the synapse transforms it into another voltage pulse that travels down the branching dendrite structure of the receiving neuron and contributes either positively or negatively to its cell membrane voltage. But deep-learning networks are still a long way from the computational performance, energy efficiency, and learning capabilities of biological brains. Simulating 1.73 billion neurons consumed 10 billion times as much energy as an equivalent-sized portion of the brain, even though the simulation used very simplified models and did not perform any learning.
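
The description of spikes nudging a neuron's membrane voltage up or down can be made concrete with a minimal leaky integrate-and-fire model. This is a generic textbook sketch, not the simulation cited in the article; the time constant, threshold, and input stream are illustrative assumptions:

    # Minimal leaky integrate-and-fire neuron: each incoming event nudges the
    # membrane voltage up (excitatory) or down (inhibitory), the voltage leaks
    # back toward rest, and the neuron emits a spike when a threshold is crossed.
    # All parameters are illustrative, not taken from the article.
    import numpy as np

    dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
    v_rest, v_thresh = 0.0, 1.0  # resting and threshold voltages (arbitrary units)
    v = v_rest

    rng = np.random.default_rng(1)
    # Random synaptic input: mostly silence, some excitatory and inhibitory events.
    inputs = rng.choice([0.0, 0.15, -0.1], size=200, p=[0.6, 0.3, 0.1])

    for t, i_syn in enumerate(inputs):
        v += dt / tau * (v_rest - v) + i_syn   # leak toward rest, then add input
        if v >= v_thresh:
            print(f"spike at t = {t} ms")
            v = v_rest                          # reset after firing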


Machine Brains Advance Towards Human Mimicry

#artificialintelligence

Data intelligence firms like Elastic are building machine learning functions into their software as fast as they can. The software itself is claimed to enable AI to function more like a human brain because it integrates multiple brain areas. According to the firm, "Human brains integrate sight, sound and other senses when making a decision, but existing AI systems do not." Claiming that his firm's latest developments may help the future of AI, Versace notes that Neurala has patented its latest computer brain development under U.S. Patent No.


Thermodynamic-RAM Technology Stack Published – Knowm.org

#artificialintelligence

Bringing us closer to brain-like neural computation, kT-RAM will provide a general-purpose adaptive hardware resource for existing computing platforms, enabling fast, low-power machine learning capabilities that are currently hampered by the separation of memory and processing, a.k.a. the von Neumann bottleneck. Rather than trying to reverse engineer the brain, or transfer existing machine learning algorithms to new hardware and blindly hope to end up with an elegant, power-efficient chip, AHaH computing was designed from the beginning with a few key constraints: (1) it must result in a hardware solution where memory and computation are combined, (2) it must enable most or all machine learning applications, (3) it must be simple enough to build chips with existing manufacturing technology and to emulate on existing computational platforms for verification of methods, and (4) it must be understandable and adoptable by application developers across all manufacturing sectors. At all scales of organization we see the same fractal built from the same simple building block: a simple structure formed of competing energy-dissipation pathways. We call this building block 'nature's transistor', as it appears to represent a foundational adaptive building block from which higher-order self-organized structures are built, much like the transistor is a building block for modern computing.


Rise of the Edge – Zeroth.AI Team – Medium

#artificialintelligence

In this future, a large function of the edge computing infrastructure will be performing data thinning, or sorting the "hot" data from the "cold." In turn, a large function of the cloud will be applying massive computational resources to the relevant data gathered from the edge in order to train new AI models that are then deployed back out on the edge. While it may make sense for an autonomous car to carry computers on board, not every device will have the required space or the same magnitude of computational demand. These routers will be mini-servers in their own right, storing important data locally for faster access, providing computational power on demand to the devices that require it, and performing the aforementioned data thinning.
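
As a hedged sketch of the data-thinning idea described above (the record fields, baseline rule, and threshold below are assumptions made for illustration, not Zeroth.AI's design), an edge node might separate the "hot" readings it forwards to the cloud from the "cold" ones it keeps or summarizes locally:

    # Hypothetical "data thinning" on an edge node: readings that deviate from a
    # local baseline are treated as hot and forwarded to the cloud for model
    # training; the rest are kept cold locally. Field names and the threshold
    # are assumptions made for this illustration.
    from statistics import mean

    def thin(readings, threshold=3.0):
        """Split readings into hot (forward to cloud) and cold (keep locally)."""
        baseline = mean(r["value"] for r in readings)
        hot = [r for r in readings if abs(r["value"] - baseline) > threshold]
        cold = [r for r in readings if abs(r["value"] - baseline) <= threshold]
        return hot, cold

    readings = [{"sensor": "cam-01", "value": v} for v in (1.0, 1.2, 0.9, 1.1, 9.8)]
    hot, cold = thin(readings)
    print(f"forward {len(hot)} hot readings to the cloud; keep {len(cold)} locally")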


Neural networks explained

#artificialintelligence

"There's this idea that ideas in science are a bit like epidemics of viruses," says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT's McGovern Institute for Brain Research, and director of MIT's Center for Brains, Minds, and Machines. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center's research program in Theoretical Frameworks for Intelligence.

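The training process the excerpt above describes, repeatedly adjusting weights and thresholds until inputs with the same label produce similar outputs, can be illustrated with a single artificial neuron trained by gradient descent. The toy data, sigmoid activation, and hyperparameters here are assumptions made purely for illustration:

    # Minimal illustration of "adjusting weights and thresholds": a single
    # sigmoid neuron trained by gradient descent so that inputs sharing a label
    # end up producing similar outputs. Data and hyperparameters are toy values.
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

    w = np.zeros(2)   # weights
    b = 0.0           # threshold (bias)
    lr = 0.5          # learning rate

    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # neuron output in (0, 1)
        grad = (p - y) / len(y)                 # gradient of cross-entropy loss
        w -= lr * (X.T @ grad)                  # adjust weights
        b -= lr * grad.sum()                    # adjust threshold
    print("training accuracy:", ((p > 0.5) == y).mean())
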

Explained: Neural networks

Robohub

In the past 10 years, the best-performing artificial-intelligence systems -- such as the speech recognizers on smartphones or Google's latest automatic translator -- have resulted from a technique called "deep learning." Deep learning is, in fact, a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what's sometimes called the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory. The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.